This page describes the Vision Locate Node in detail.
Vision Locate Node
The Vision Locate node executes a vision-based behavior: typically it identifies the position (or location) of a trained snapshot within the camera's field of view and updates the locator frame accordingly. For details on how to use the vision functionality, refer to the tutorial.
Composite - has 0 or more children.
Auto-generated and user-editable
- track cycle time
Creates a shared data variable that automatically tracks the cycle time of the Vision Locate execution.
- Failure Timeout
How long to wait for detection and re-location to complete. If they do not complete within that time frame, the task proceeds to the following node without executing any of this node's children.
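The timeout behavior can be sketched in Python. This is a hypothetical illustration of the control flow, not the actual runtime; `detect_fn`, `children`, and the return strings are invented names:

```python
import time

def run_vision_locate(detect_fn, failure_timeout_s, children):
    """Sketch of the Failure Timeout behavior: poll for a detection
    until the timeout elapses; on timeout, skip all children."""
    deadline = time.monotonic() + failure_timeout_s
    while time.monotonic() < deadline:
        if detect_fn():
            for child in children:  # children run only after a successful locate
                child()
            return "SUCCESS"
        time.sleep(0.05)  # brief pause between detection attempts
    return "TIMEOUT"  # task proceeds to the following node
```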
- snapshot
This menu lists the available snapshots from which to choose. Snapshots are created using the Snapshot Editor. A user can click the Snapshot icon to open the Snapshot Gallery.
- Object Settings
- similarity threshold
Similarity controls how closely the new image of the object must match the initial training of the object for a successful recognition to occur. The higher the similarity setting, the more selective the system will be when looking for objects.
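The effect of the threshold can be illustrated with a small sketch (the scores, names, and score scale are hypothetical, not the product's internals):

```python
def passes_similarity(match_score, similarity_threshold):
    """A candidate match succeeds only if its recognition score meets the
    threshold; higher thresholds make the system more selective."""
    return match_score >= similarity_threshold

# Hypothetical recognition scores for three candidate detections.
candidates = [0.92, 0.81, 0.67]
lenient = [s for s in candidates if passes_similarity(s, 0.6)]  # all pass
strict = [s for s in candidates if passes_similarity(s, 0.9)]   # only the best passes
```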
- symmetrical
Enabling the symmetrical option makes tracking squares, rectangles, and circles more consistent. Enable it when tracking a highly symmetrical object. Enabling symmetrical automatically overrides the Start Angle and End Angle options.
- Start angle (only available for static and moving detection types)
Counterclockwise rotational difference from the object's trained position which will be detected.
- End angle (only available for static and moving detection types)
Clockwise rotational difference from the object's trained position which will be detected.
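Assuming a convention where positive offsets are counterclockwise, the combined Start Angle / End Angle check can be sketched as follows (the names and sign convention are assumptions for illustration, not the product's internals):

```python
def within_detection_angles(offset_deg, start_angle_deg, end_angle_deg):
    """offset_deg is the rotation from the trained pose, positive = counterclockwise.
    Start Angle bounds counterclockwise rotation; End Angle bounds clockwise."""
    return -end_angle_deg <= offset_deg <= start_angle_deg
```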
- Locator Settings
Controls how many images Sawyer will take of the object before determining its location. ‘Fast’ minimizes the time it takes to scan, but may sacrifice some accuracy for moving detection. ‘Accurate’ takes the most pictures, which is precise yet time-consuming. ‘Balanced’ is between the two. Selecting ‘Advanced’ allows for manual control of the number of images (value can be set between 2 and 25).
- ignore orientation
When true, the Move To nodes which use the vision node’s object frame as their parent will ignore the angle of the part. Therefore, the angle of the part will not affect moves associated with it. Use this when the orientation of the gripper does not matter (e.g. when using a single cup vacuum gripper) or when the orientation of the place does not matter (e.g. with a circular part).
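The effect can be sketched with a hypothetical 2-D frame calculation. The function and parameter names are invented for illustration; only the idea (treating the part's rotation as zero) comes from the description above:

```python
import math

def child_target(part_x, part_y, part_angle_deg, offset_x, offset_y,
                 ignore_orientation):
    """Sketch: where a child Move To lands when its trained offset is
    expressed in the located part's frame."""
    # With ignore orientation on, the part's rotation is treated as zero,
    # so the angle of the part does not affect the move.
    angle = 0.0 if ignore_orientation else math.radians(part_angle_deg)
    tx = part_x + offset_x * math.cos(angle) - offset_y * math.sin(angle)
    ty = part_y + offset_x * math.sin(angle) + offset_y * math.cos(angle)
    return tx, ty
```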
- locator type
A ‘Static’ locator should be used to pick a part that does not move. If the part is moving at a constant velocity, such as on a conveyor, use the ‘Moving’ locator type.
Note: The ‘Moving’ locator type is only applicable to arm camera locator snapshots.
Moving Locator Settings
A ‘moving’ locator predicts the location of the part using either Time or Distance.
- Predict Location
When Predict Location is set to Time, the system predicts where the object will be at the end of the specified action time window by measuring the velocity of the moving part. It then updates the vision frame to that predicted location based on the observed speed and time.
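The time-based prediction amounts to simple kinematics. A hypothetical sketch (all names are invented; the real system measures velocity from successive images):

```python
def predict_location(detected_pos, velocity_mps, action_time_s):
    """Predicted position = detected position + measured velocity * action time,
    applied per axis. The vision frame would then be moved to this point."""
    return tuple(p + v * action_time_s
                 for p, v in zip(detected_pos, velocity_mps))
```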
- arrive within
Must be greater than or equal to the total time from the object being detected until the part is grasped. Typically this includes not only the move but also the actuation of the gripper. If the pick is happening too late, try increasing this number. If the arm arrives early and has to wait a long time for the part, try decreasing it, but make sure the arm always arrives a little before the part.
- actuation time
Time allotted for the gripper to actuate when calculating the timed move. The value of "arrive within" minus the actuation time is the allotted time for the robot to complete all the moves necessary to get to the object's predicted location. For example, if the value of "arrive within" is 10 seconds and the actuation time is 1 second, the move needs to be completed in 9 seconds. The valid range for this variable is 0 to 10 seconds. If the pick is happening too late, try increasing this number. Gripper actuation time is typically less than 1 second.
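The arithmetic above can be sketched directly (a hypothetical helper, not part of the product):

```python
def allotted_move_time(arrive_within_s, actuation_time_s):
    """Time available for the robot's moves = "arrive within" minus the
    gripper actuation time. Actuation time is valid from 0 to 10 seconds."""
    if not 0.0 <= actuation_time_s <= 10.0:
        raise ValueError("actuation time must be between 0 and 10 seconds")
    move_time = arrive_within_s - actuation_time_s
    if move_time <= 0:
        raise ValueError("arrive within must exceed the actuation time")
    return move_time
```

For the example in the text, an "arrive within" of 10 seconds and an actuation time of 1 second leaves 9 seconds for the move.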
When Predict Location is set to Distance, the Vision node updates the vision frame by adding the action distance to the detected location of the object. No velocity estimation occurs, so it is up to the user to determine the distance traveled on the conveyor (e.g., by reading an encoder and converting its output into a distance) and to add logic to the tree so that the action completes once that logic determines the part has traveled the specified action distance.
- action distance
The distance between the object's detected location and its expected location.
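The user-side, distance-based flow described above can be sketched with a hypothetical encoder conversion. The roller geometry, tick counts, and names are assumptions for illustration:

```python
def encoder_to_distance(ticks, ticks_per_rev, roller_circumference_m):
    """Convert conveyor encoder ticks to linear travel in meters,
    assuming the encoder is mounted on a roller of known circumference."""
    return (ticks / ticks_per_rev) * roller_circumference_m

def part_traveled_action_distance(ticks_since_detection, ticks_per_rev,
                                  roller_circumference_m, action_distance_m):
    """User-side logic: the action is complete once the part has traveled
    the configured action distance along the conveyor."""
    traveled = encoder_to_distance(ticks_since_detection, ticks_per_rev,
                                   roller_circumference_m)
    return traveled >= action_distance_m
```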
Press ‘TEST’ to open the Test Window.
- Go To
When the ‘arm pose’ toggle is on in the Vision Locate Node Editor, pressing ‘GO TO’ will move the arm to the snapshot pose. Move the object in the arm camera’s field of view to test if the detection is as expected. A green box should cover the object when the detection is successful. The larger green box outlines the search area.
Press ‘RELOCATE’ when the test is done, and the object frame will be updated.
- update children
When the ‘update children’ toggle is on, pressing ‘RELOCATE’ will update all the child nodes that use the object frame corresponding to this locator node. Disable ‘update children’ if the moves are already in the correct location and only the object frame needs to be updated to match. For example, if a move is trained based on an object which has moved since being located, disabling the ‘update children’ option allows the object's frame to be updated without altering the arm pose of that Move To node.