CADMatch¶
Introduction¶
The CADMatch component is an optional component of the rc_visard.
Note
The component is optional and requires a separate CADMatch license to be purchased.
This component provides an out-of-the-box perception solution for 3D object detection and grasping. CADMatch targets the detection of 3D objects based on a CAD template for picking with a general gripper. The objects can be located in a bin or placed arbitrarily in the field of view of the camera.
For the CADMatch component to work, special object templates are required for each type of object to be detected. Please get in touch with the Roboception support (Contact) to order a template for your CAD file.
The CADMatch component offers:
- A dedicated page on the rc_visard Web GUI for easy setup, configuration, testing, and application tuning.
- A REST-API interface and a KUKA Ethernet KRL Interface.
- The definition of regions of interest to select relevant volumes in the scene (see Region of interest functionality).
- A load carrier detection functionality for bin-picking applications (see Load carrier functionality), to provide grasps for objects inside a bin only.
- The definition of compartments inside a load carrier to provide grasps for specific volumes of the bin only.
- The definition of grasp points for each template via an interactive visualization in the Web GUI
- Support for static and robot-mounted cameras and optional integration with the Hand-eye calibration component, to provide grasps in the user-configured external reference frame.
- Sorting of grasps according to reachability so that the best ones are returned first.
Setting of grasp points¶
The CADMatch component detects 3D objects in a scene based on a CAD template and returns the poses of the object origins. To use CADMatch directly in a robot application, grasp points can be defined for each template. A grasp point represents the desired position and orientation of the robot's TCP (Tool Center Point) to grasp an object (see fig-cadmatch-grasp-points).
Each grasp consists of an `id`, which must be unique among all grasps for an object template, the `template_id` of the template the grasp is attached to, and the `pose` in the coordinate frame of the object template. Grasp points can be set via the REST-API interface, or by using the interactive visualization in the Web GUI. The rc_visard can store up to 50 grasp points per template.
Setting grasp points in the Web GUI¶
The rc_visard Web GUI provides an intuitive and interactive way of defining grasp points for object templates. In a first step, the object template has to be uploaded to the rc_visard. This can be done on the CADMatch page in the Modules tab of the Web GUI by clicking on add new Template in the Templates and Grasps section of the CADMatch page. Once the template upload is complete, a dialog with a 3D visualization of the object for adding or editing grasp points is shown. The same dialog appears when editing an existing template.
This dialog provides two ways for setting grasp points:
- Adding grasps manually: By clicking on the + symbol, a new grasp is placed in the object origin. The grasp can be given a unique name which corresponds to its ID. The desired pose of the grasp can be entered in the fields for Position and Roll/Pitch/Yaw which are given in the coordinate frame of the object template represented by the long x, y and z axes in the visualization. The grasp point can be placed freely with respect to the object template - inside, outside or on the surface. The grasp point and its orientation are visualized in 3D for verification.
- Adding grasps interactively: Grasp points can be added interactively by first clicking on the Add Grasp button in the upper left corner of the visualization and then clicking on the desired point on the object template visualization. The grasp is attached to the object surface. The grasp orientation is a right-handed coordinate system and is chosen such that its z axis is normal to the surface pointing inside the object at the grasp position. The position and orientation in the object coordinate frame is displayed on the right. The position and orientation of the grasp can also be changed interactively. In case Snap to surface is enabled in the visualization (default), the grasp can be moved along the object surface by clicking on the Translate button in the visualization and then clicking on the grasp point and dragging it to the desired position. The orientation of the grasp around the surface normal can also be changed by choosing Rotate and then rotating the grasp with the cursor. In case Snap to surface is disabled, the grasp can be translated and rotated freely in all three dimensions.
If the object template has symmetries, the grasps which are symmetric to the defined grasps can be displayed by clicking on Show symmetric grasps.
Setting grasp points via the REST-API¶
Grasp points can be set via the REST-API interface using the `set_grasp` or `set_all_grasps` service calls (see Services). In the CADMatch component, a grasp consists of the `template_id` of the template the grasp should be attached to, an `id` uniquely identifying the grasp point, and the `pose` with `position` in meters and `orientation` as quaternion in the coordinate frame of the object template.
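As an illustration, the grasp structure described above can be assembled as a plain dictionary before it is passed to the `set_grasp` service. The field names below follow the service definition in Services; the grasp and template names are hypothetical example values.

```python
import json

def make_grasp(grasp_id, template_id, position, quaternion):
    """Build the grasp structure expected by the set_grasp service.

    position is (x, y, z) in meters, quaternion is (w, x, y, z), both
    given in the coordinate frame of the object template.
    """
    w, x, y, z = quaternion
    px, py, pz = position
    return {
        "grasp": {
            "id": grasp_id,
            "template_id": template_id,
            "pose": {
                "position": {"x": px, "y": py, "z": pz},
                "orientation": {"w": w, "x": x, "y": y, "z": z},
            },
        }
    }

# Example: a grasp 5 cm above the template origin, aligned with the template axes.
args = make_grasp("center_top", "my_template", (0.0, 0.0, 0.05), (1.0, 0.0, 0.0, 0.0))
print(json.dumps(args, indent=2))
```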
Setting the preferred orientation of the TCP¶
The CADMatch component determines the reachability of grasp points based on the preferred orientation of the gripper or TCP. The preferred orientation can be set via the `set_preferred_orientation` service call or on the CADMatch page in the Web GUI. The resulting direction of the TCP's z axis is used to reject grasps which cannot be reached by the gripper.
Furthermore, the preferred orientation is used to sort the reachable grasps. Grasp sorting is performed based on a combination of
- the matching score of the object the grasp is located on, and
- the distance from the camera along the z axis of the preferred orientation of the TCP.
The preferred orientation can be set in the camera coordinate frame or in the external coordinate frame, in case a hand-eye calibration is available. If the preferred orientation is specified in the external coordinate frame and the sensor is robot mounted, the current robot pose has to be given to each object detection call, so that the preferred orientation can be used for filtering and sorting the grasps on the detected objects. If no preferred orientation is set, the z axis of the left camera is used as the preferred orientation of the TCP.
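The sorting criterion can be pictured with a small sketch. The exact weighting used on the device is not documented here; the snippet below simply orders hypothetical grasp candidates by match score first and then by distance from the camera along the z axis of the preferred TCP orientation, as an assumed approximation of the described behavior.

```python
# Hypothetical grasp candidates: (match score of the object the grasp sits on,
# depth along the z axis of the preferred TCP orientation, grasp name).
candidates = [
    (0.80, 0.62, "grasp_a"),
    (0.95, 0.70, "grasp_b"),
    (0.95, 0.55, "grasp_c"),
]

# Sort by descending score, then by ascending depth along the preferred z axis,
# so that well-matched objects close to the camera come first.
ranked = sorted(candidates, key=lambda g: (-g[0], g[1]))
print([name for _, _, name in ranked])  # grasp_c first
```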
Detection of objects¶
The CADMatch component requires an object template for object detection. This template contains information about the 3D shape of the object and prominent edges that can be visible in the camera images.
The object detection is a two-stage process consisting of a prior estimation step and a pose refinement step. First, a pose prior is computed based on the appearance of the object in the camera images. Second, the pose is refined by using the 3D point cloud and edges in the camera image. For this to work, the objects to detect must be visible in both left and right camera images.
For triggering the object detection, in general, the following information must be provided to the CADMatch component:
- The template ID of the object to be detected in the scene.
- The coordinate frame in which the poses of the detected objects and the grasp points shall be returned (ref. Hand-eye calibration).
Optionally, further information can be given to the CADMatch component:
- The ID of the load carrier which contains the items to be detected.
- A compartment inside the load carrier where to detect objects (as in ItemPick’s Computation of grasps).
- The ID of the region of interest where to search for the load carriers if a load carrier is set. Otherwise, the ID of the region of interest where to search for the objects.
- The current robot pose in case the camera is mounted on the robot and the chosen coordinate frame for the poses is `external`, or the preferred orientation is given in the external frame, or the chosen region of interest is defined in the external frame.
- Collision detection information: the ID of the gripper to enable collision checking, and optionally a pre-grasp offset to define a pre-grasp position. The collision check requires a separate CollisionCheck license to be purchased. Details on collision checking are given below in CollisionCheck.
On the Web GUI the detection can be tested in the Try Out section of the CADMatch component’s page.
The detected objects are returned in a list of `matches`. Each detected object includes a `uuid` (Universally Unique Identifier) and the `timestamp` of the oldest image that was used to detect it. The `pose` of a detected object corresponds to the pose of the origin of the object template used for detection. Furthermore, the matching `score` is given to indicate the quality of the detection.
If the chosen template also has grasp points attached (see Setting of grasp points), a list of `grasps` for all objects, sorted by their reachability (see Setting the preferred orientation of the TCP), is returned in addition to the list of detected objects. The grasp poses are given in the desired coordinate frame. The detected objects and the grasps reference each other via their `uuid`s.
Note
The first detection call with a new object template takes longer than the following detection calls, because the object template has to be loaded into the CADMatch component first.
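On the client side, the uuid cross-references can be resolved with a few lines of code. The response fragment below is mock data with invented uuids, shaped like the detection response described above:

```python
# Mock detection response fragment (uuids and IDs are invented for illustration).
response = {
    "matches": [
        {"uuid": "m-1", "template_id": "my_template", "score": 0.91, "grasp_uuids": ["g-1", "g-2"]},
        {"uuid": "m-2", "template_id": "my_template", "score": 0.74, "grasp_uuids": []},
    ],
    "grasps": [
        {"uuid": "g-1", "id": "center_top", "match_uuid": "m-1"},
        {"uuid": "g-2", "id": "side", "match_uuid": "m-1"},
    ],
}

# Index matches by uuid, then attach each grasp to the object it belongs to.
matches = {m["uuid"]: m for m in response["matches"]}
grasps_per_match = {uuid: [] for uuid in matches}
for grasp in response["grasps"]:
    grasps_per_match[grasp["match_uuid"]].append(grasp["id"])

print(grasps_per_match)  # {'m-1': ['center_top', 'side'], 'm-2': []}
```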
Interaction with other components¶
Internally, the CADMatch component depends on, and interacts with other on-board components as listed below.
Note
All changes and configuration updates to these components will affect the performance of the CADMatch component.
Stereo camera and Stereo matching¶
The CADMatch component internally makes use of the following data:
- Rectified images from the Stereo camera component (`rc_stereocamera`);
- Disparity, error, and confidence images from the Stereo matching component (`rc_stereomatching`).
The `quality` parameter of the Stereo matching component must be set to Medium or higher (see Parameters). We recommend Full or High quality for using CADMatch.
All processed images are guaranteed to be captured after the component trigger time.
Estimation of gravity vector¶
For each load carrier detection or object detection inside a load carrier, the component estimates the gravity vector by subscribing to the rc_visard’s IMU data stream.
Note
The gravity vector is estimated from linear acceleration readings from the on-board IMU. For this reason, the CADMatch component requires the rc_visard to remain still while the gravity vector is being estimated.
IO and Projector Control¶
In case the rc_visard is used in conjunction with an external random dot projector and the IO and Projector Control component (`rc_iocontrol`), it is recommended to connect the projector to GPIO Out 1 and set the stereo camera component's acquisition mode to `SingleFrameOut1` (see Stereo matching parameters), so that on each image acquisition trigger an image with and without projector pattern is acquired. Alternatively, the output mode for the GPIO output in use should be set to `ExposureAlternateActive` (see Description of run-time parameters).
In either case, the Auto Exposure Mode `exp_auto_mode` should be set to `AdaptiveOut1` to optimize the exposure of both images (see Stereo camera parameters).
Hand-eye calibration¶
In case the camera has been calibrated to a robot, the CADMatch component can automatically provide poses in the robot coordinate frame. For the CADMatch node's Services, the frame of the output poses can be controlled with the `pose_frame` argument.
Two different `pose_frame` values can be chosen:
- Camera frame (`camera`): All poses provided by the component are in the camera frame, and no prior knowledge about the pose of the camera in the environment is required. This means that the configured regions of interest and load carriers move with the camera. It is the user's responsibility to update the configured poses if the camera frame moves (e.g. with a robot-mounted camera).
- External frame (`external`): All poses provided by the component are in the external frame, configured by the user during the hand-eye calibration process. The component relies on the on-board Hand-eye calibration component to retrieve the sensor mounting (static or robot-mounted) and the hand-eye transformation. If the mounting is static, no further information is needed. If the sensor is robot-mounted, the `robot_pose` is required to transform poses to and from the `external` frame.
Note
If no hand-eye calibration is available, all `pose_frame` values should be set to `camera`. All `pose_frame` values that are not `camera` or `external` are rejected.
CollisionCheck¶
In case a CollisionCheck license is available, collision checking can easily be enabled for the grasp computation of the CADMatch component by passing the ID of the used gripper and optionally a pre-grasp offset to the `detect_object` service call. The gripper has to be defined in the CollisionCheck component (see Setting a gripper); details about collision checking are given in Collision checking within other modules.
If collision checking is enabled, only grasps which are collision free will be returned. However, the result image on top of the CADMatch tab of the Web GUI also shows colliding grasp points in red.
The CollisionCheck module’s run-time parameters affect the collision detection as described in CollisionCheck Parameters.
Parameters¶
The CADMatch component is called `rc_cadmatch` in the REST-API and is represented by the CADMatch page in the Modules tab of the Web GUI. The user can explore and configure the `rc_cadmatch` component's run-time parameters, e.g. for development and testing, using the Web GUI or the REST-API interface.
Parameter overview¶
This component offers the following run-time parameters:
| Name | Type | Min | Max | Default | Description |
|---|---|---|---|---|---|
| `edge_max_distance` | float64 | 0.5 | 5.0 | 2.0 | Maximum allowed distance in pixels between the template edges and the detected edges in the image |
| `edge_sensitivity` | float64 | 0.0 | 1.0 | 0.5 | Sensitivity of the edge detector |
| `load_carrier_crop_distance` | float64 | 0.0 | 0.05 | 0.005 | Safety margin in meters by which the load carrier inner dimensions are reduced to define the region of interest for detection |
| `load_carrier_model_tolerance` | float64 | 0.003 | 0.025 | 0.008 | Indicates how much the estimated load carrier dimensions are allowed to differ from the load carrier model dimensions in meters |
| `max_matches` | int32 | 1 | 20 | 10 | Maximum number of matches |
| `min_score` | float64 | 0.05 | 1.0 | 0.3 | Minimum score for matches |
Description of run-time parameters¶
Each run-time parameter is represented by a row on the Web GUI’s CADMatch page in the Modules tab. The name in the Web GUI is given in brackets behind the parameter name and the parameters are listed in the order they appear in the Web GUI:
- `max_matches` (Maximum Matches): the maximum number of objects to detect.
- `min_score` (Minimum Score): the minimum detection score after refinement. The higher this value, the better the 2D edges and 3D point cloud must match the given template.
- `load_carrier_model_tolerance` (Model Tolerance): see Parameters of the load carrier functionality.
- `load_carrier_crop_distance` (Crop Distance): see Parameters of the load carrier functionality.
- `edge_sensitivity` (Edge Sensitivity): the sensitivity of the edge detector. The higher the value of this parameter, the more edges will be used for pose refinement.
- `edge_max_distance` (Maximum Edge Distance): the maximum allowed distance in pixels between the template edges and the detected edges in the image during the refinement step.
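Run-time parameters can also be adjusted over the REST-API. The snippet below only assembles the request URL; the host name is a placeholder, and the endpoint path follows the usual rc_visard REST-API pattern, so verify it against the REST-API interface documentation for your firmware version before use.

```python
from urllib.parse import urlencode

host = "rc-visard.example"  # placeholder device address
node = "rc_cadmatch"

def parameters_url(params):
    """Build the URL for a PUT request that updates run-time parameters of a node."""
    query = urlencode(params)
    return f"http://{host}/api/v1/nodes/{node}/parameters?{query}"

url = parameters_url({"max_matches": 5, "min_score": 0.4})
print(url)
```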
Status values¶
The `rc_cadmatch` component reports the following status values:

| Name | Description |
|---|---|
| `data_acquisition_time` | Time in seconds required by the last active service to acquire images |
| `last_timestamp_processed` | The timestamp of the last processed dataset |
| `load_carrier_detection_time` | Processing time of the last load carrier detection in seconds |
| `object_detection_time` | Processing time of the last object detection in seconds |
| `state` | The current state of the `rc_cadmatch` node |
The reported `state` can take one of the following values:

| State name | Description |
|---|---|
| IDLE | The component is idle. |
| RUNNING | The component is running and ready for load carrier detection and object detection. |
| FATAL | A fatal error has occurred. |
Services¶
The user can explore and call the `rc_cadmatch` component's services, e.g. for development and testing, using the REST-API interface or the rc_visard Web GUI.
Each service response contains a `return_code`, which consists of a `value` plus an optional `message`. A successful service returns with a `return_code` value of 0. Negative `return_code` values indicate that the service failed. Positive `return_code` values indicate that the service succeeded with additional information. If a service produces multiple `return_code` values, the smallest value is selected, and all messages are appended in the `return_code` message.
The following table contains a list of common codes:
| Code | Description |
|---|---|
| 0 | Success |
| -1 | An invalid argument was provided |
| -2 | An internal error occurred |
| -3 | An internal time-out occurred |
| -4 | Data acquisition took longer than the maximum allowed time of 5.0 seconds |
| -8 | Not applicable, stereo quality must be at least Medium |
| -9 | No valid license for the module |
| -10 | New element could not be added as the maximum storage capacity of load carriers or regions of interest has been exceeded |
| 10 | The maximum storage capacity of load carriers or regions of interest has been reached |
| 11 | Existing data was overwritten |
| 100 | The requested load carrier was not detected in the scene |
| 101 | None of the detected grasps is reachable |
| 102 | The detected load carrier is empty |
| 103 | All detected grasps are in collision with the load carrier |
| 151 | The object template has a continuous symmetry |
| 999 | Additional hints for application development |
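The rule for combining multiple return codes — the smallest value wins while all messages are appended — can be sketched as follows. This is an illustrative reimplementation for client-side reasoning, not the device code:

```python
def combine_return_codes(codes):
    """Merge several (value, message) pairs into one return_code.

    The smallest value is selected and all non-empty messages are joined.
    """
    value = min(v for v, _ in codes)
    message = " ".join(m for _, m in codes if m)
    return {"value": value, "message": message}

combined = combine_return_codes([
    (11, "Existing data was overwritten"),
    (0, ""),
    (-1, "An invalid argument was provided"),
])
print(combined["value"])  # -1
```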
The CADMatch component offers the following services.
start¶
Starts the component. If the command is accepted, the component moves to state `RUNNING`. The `current_state` value in the service response may differ from `RUNNING` if the state transition is still in progress when the service returns.
This service has no arguments.
The definition for the response with corresponding datatypes is:
{ "accepted": "bool", "current_state": "string" }
stop¶
Stops the component. If the command is accepted, the component moves to state `IDLE`. The `current_state` value in the service response may differ from `IDLE` if the state transition is still in progress when the service returns.
This service has no arguments.
The definition for the response with corresponding datatypes is:
{ "accepted": "bool", "current_state": "string" }
set_region_of_interest¶
see set_region_of_interest.
get_regions_of_interest¶
see get_regions_of_interest.
delete_regions_of_interest¶
see delete_regions_of_interest.
set_load_carrier¶
see set_load_carrier.
get_load_carriers¶
see get_load_carriers.
delete_load_carriers¶
see delete_load_carriers.
detect_load_carriers¶
see detect_load_carriers.
detect_filling_level¶
see detect_filling_level.
set_preferred_orientation¶
Persistently stores the preferred orientation of the gripper to compute the reachability of the grasps, which is used for filtering and sorting the grasps returned by the `detect_object` service (see Setting the preferred orientation of the TCP).
The definition for the request arguments with corresponding datatypes is:
{ "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "pose_frame": "string" }
The definition for the response with corresponding datatypes is:
{ "return_code": { "message": "string", "value": "int16" } }
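A common choice is a preferred orientation whose z axis points downward in the external frame, i.e. a gripper approaching from above. That corresponds to a 180° rotation about the x axis, quaternion (w, x, y, z) = (0, 1, 0, 0). The frame choice is an example, not a requirement; the small check below rotates the unit z vector with this quaternion to confirm it ends up pointing along -z:

```python
def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * cross(q_vec, v); v' = v + w*t + cross(q_vec, t)
    tx, ty, tz = 2 * (y * vz - z * vy), 2 * (z * vx - x * vz), 2 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

q_down = (0.0, 1.0, 0.0, 0.0)  # 180 deg about x: the z axis flips to -z
print(quat_rotate(q_down, (0.0, 0.0, 1.0)))  # (0.0, 0.0, -1.0)

# Corresponding service arguments (pose_frame "external" assumes a hand-eye calibration).
args = {"orientation": {"w": 0.0, "x": 1.0, "y": 0.0, "z": 0.0}, "pose_frame": "external"}
```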
get_preferred_orientation¶
Returns the preferred orientation of the gripper to compute the reachability of the grasps, which is used for filtering and sorting the grasps returned by the `detect_object` service (see Setting the preferred orientation of the TCP).
This service has no arguments.
The definition for the response with corresponding datatypes is:
{ "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "pose_frame": "string", "return_code": { "message": "string", "value": "int16" } }
set_grasp¶
Persistently stores a grasp for the given object template on the rc_visard. All configured grasps are persistent over firmware updates and rollbacks.
The definition for the request arguments with corresponding datatypes is:
{ "grasp": { "id": "string", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "template_id": "string" } }
The definition for the response with corresponding datatypes is:
{ "return_code": { "message": "string", "value": "int16" } }
Details for the definition of the `grasp` type are given in Setting of grasp points.
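A complete service call over HTTP might look like the sketch below. It only prepares the request; the host name is a placeholder and the endpoint path is an assumption following the usual rc_visard REST-API pattern, so verify it against the REST-API interface documentation for your device before use.

```python
import json

host = "rc-visard.example"  # placeholder device address
service_url = f"http://{host}/api/v1/nodes/rc_cadmatch/services/set_grasp"

payload = {
    "args": {
        "grasp": {
            "id": "center_top",
            "template_id": "my_template",
            "pose": {
                "position": {"x": 0.0, "y": 0.0, "z": 0.05},
                "orientation": {"w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0},
            },
        }
    }
}

body = json.dumps(payload)
# The actual call would be something like:
#   requests.put(service_url, data=body, headers={"Content-Type": "application/json"})
# A return_code value of 0 in the response indicates success.
print(service_url)
```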
set_all_grasps¶
Replaces the list of grasps for the given object template on the rc_visard.
The definition for the request arguments with corresponding datatypes is:
{ "grasps": [ { "id": "string", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "template_id": "string" } ], "template_id": "string" }
The definition for the response with corresponding datatypes is:
{ "return_code": { "message": "string", "value": "int16" } }
Details for the definition of the `grasp` type are given in Setting of grasp points.
get_grasps¶
Returns all configured grasps which have the requested `grasp_ids` and belong to the requested `template_ids`. If no `grasp_ids` are provided, all grasps belonging to the requested `template_ids` are returned. If no `template_ids` are provided, all grasps with the requested `grasp_ids` are returned. If neither `grasp_ids` nor `template_ids` are provided, all configured grasps are returned.
The definition for the request arguments with corresponding datatypes is:
{ "grasp_ids": [ "string" ], "template_ids": [ "string" ] }
The definition for the response with corresponding datatypes is:
{ "grasps": [ { "id": "string", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "template_id": "string" } ], "return_code": { "message": "string", "value": "int16" } }
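The selection rules can be mirrored client-side. The sketch below reimplements the described filtering over a local list of grasps; the sample data is made up:

```python
def filter_grasps(grasps, grasp_ids=None, template_ids=None):
    """Select grasps the way get_grasps is described to:
    an empty or missing id list acts as a wildcard for that field."""
    result = []
    for g in grasps:
        if grasp_ids and g["id"] not in grasp_ids:
            continue
        if template_ids and g["template_id"] not in template_ids:
            continue
        result.append(g)
    return result

grasps = [
    {"id": "top", "template_id": "tpl_a"},
    {"id": "side", "template_id": "tpl_a"},
    {"id": "top", "template_id": "tpl_b"},
]
print(len(filter_grasps(grasps)))                          # no filter: all grasps
print(len(filter_grasps(grasps, template_ids=["tpl_a"])))  # grasps of tpl_a
print(len(filter_grasps(grasps, grasp_ids=["top"])))       # grasps named "top"
```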
delete_grasps¶
Deletes all grasps with the requested `grasp_ids` that belong to the requested `template_ids`. If no `grasp_ids` are provided, all grasps belonging to the requested `template_ids` are deleted. The `template_ids` list must not be empty.
The definition for the request arguments with corresponding datatypes is:
{ "grasp_ids": [ "string" ], "template_ids": [ "string" ] }
The definition for the response with corresponding datatypes is:
{ "return_code": { "message": "string", "value": "int16" } }
get_symmetric_grasps¶
Returns all grasps that are symmetric to the given grasp. The first grasp in the returned list is the one that was passed with the service call. If the object template does not have an exact symmetry, only the grasp passed with the service call will be returned. If the object template has a continuous symmetry (e.g. a cylindrical object), only 12 equally spaced sample grasps will be returned.
The definition for the request arguments with corresponding datatypes is:
{ "grasp": { "id": "string", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "template_id": "string" } }
The definition for the response with corresponding datatypes is:
{ "grasps": [ { "id": "string", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "template_id": "string" } ], "return_code": { "message": "string", "value": "int16" } }
Details for the definition of the `grasp` type are given in Setting of grasp points.
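For a template with a continuous symmetry about an axis, 12 equally spaced sample grasps correspond to rotating the grasp in steps of 30° about the symmetry axis. The snippet sketches this sampling as rotation angles about an assumed symmetry axis; the base angle is an example value:

```python
def sample_symmetric_angles(base_deg, samples=12):
    """Return equally spaced rotation angles about the symmetry axis, in degrees."""
    step = 360.0 / samples
    return [(base_deg + i * step) % 360.0 for i in range(samples)]

angles = sample_symmetric_angles(15.0)
print(len(angles), angles[:3])  # 12 samples, 30 degrees apart
```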
detect_object¶
Triggers the detection of objects based on an object template, as described in Detection of objects.
Request:
The definition for the request arguments with corresponding datatypes is:
{ "collision_detection": { "gripper_id": "string", "pre_grasp_offset": { "x": "float64", "y": "float64", "z": "float64" } }, "load_carrier_compartment": { "box": { "x": "float64", "y": "float64", "z": "float64" }, "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } } }, "load_carrier_id": "string", "pose_frame": "string", "region_of_interest_id": "string", "robot_pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "template_id": "string" }
Required arguments:
- `pose_frame`: see Hand-eye calibration.
- `template_id`: the ID of the template to be detected.
Potentially required arguments:
- `robot_pose`: see Hand-eye calibration.
Optional arguments:
- `load_carrier_id`: ID of the load carrier which contains the items to be detected.
- `load_carrier_compartment`: compartment inside the load carrier where to detect items.
- `region_of_interest_id`: if `load_carrier_id` is set, the ID of the region of interest where to search for the load carriers. Otherwise, the ID of the region of interest where to search for the objects.
- `collision_detection`: see Collision checking within other modules. The collision check requires a separate CollisionCheck license to be purchased.
Response:
The definition for the response with corresponding datatypes is:
{ "grasps": [ { "id": "string", "match_uuid": "string", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "pose_frame": "string", "timestamp": { "nsec": "int32", "sec": "int32" }, "uuid": "string" } ], "load_carriers": [ { "id": "string", "inner_dimensions": { "x": "float64", "y": "float64", "z": "float64" }, "outer_dimensions": { "x": "float64", "y": "float64", "z": "float64" }, "overfilled": "bool", "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "pose_frame": "string", "rim_thickness": { "x": "float64", "y": "float64" } } ], "matches": [ { "grasp_uuids": [ "string" ], "pose": { "orientation": { "w": "float64", "x": "float64", "y": "float64", "z": "float64" }, "position": { "x": "float64", "y": "float64", "z": "float64" } }, "pose_frame": "string", "score": "float32", "template_id": "string", "timestamp": { "nsec": "int32", "sec": "int32" }, "uuid": "string" } ], "return_code": { "message": "string", "value": "int16" }, "timestamp": { "nsec": "int32", "sec": "int32" } }
- `grasps`: list of grasps on the detected objects. The grasps are ordered by their reachability, starting with the grasp that can be reached most easily by the robot. The `match_uuid` references the detected object in `matches` this grasp belongs to.
- `load_carriers`: list of detected load carriers.
- `matches`: list of detected objects matching the template. The `score` indicates how well the object matches the template. The `grasp_uuids` refer to the grasps in `grasps` which are reachable on this object.
- `timestamp`: timestamp of the image set the detection ran on.
- `return_code`: holds possible warnings or error codes and messages.
save_parameters
¶
This service saves the currently set parameters persistently, so that they still apply after a reboot of the rc_visard. The node parameters are not persistent over firmware updates and rollbacks.
This service has no arguments.
The definition for the response with corresponding datatypes is:
{ "return_code": { "message": "string", "value": "int16" } }
reset_defaults
¶
This service resets all parameters of the component to their default values, as listed in the table above. The reset does not apply to regions of interest and load carriers.
This service has no arguments.
The definition for the response with corresponding datatypes is:
{ "return_code": { "message": "string", "value": "int16" } }
Template Upload¶
For template upload, download and listing, special REST-API endpoints are provided. Up to 30 templates can be stored persistently on the rc_visard.