
Running RoboSherlock for the robohow demo

Important: Before starting anything, make sure that the Kinect on the robot is set to high-resolution mode and that the robot is localized. High resolution is needed to get correct poses for objects that are detected from RGB images. To switch the resolution:

rosrun rqt_reconfigure rqt_reconfigure

Find kinect_head, select driver, and change image_mode from VGA_30Hz to SXGA_15Hz.
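
If the GUI is not available, the same parameter can usually be set from the command line with dynamic_reconfigure's dynparam tool. The node name and the numeric enum value below are assumptions and should be checked against what rqt_reconfigure shows:

# Assumed node name; on the openni driver image_mode 1 typically maps to SXGA_15Hz
rosrun dynamic_reconfigure dynparam set /kinect_head/driver image_mode 1

# Print the current driver configuration to verify the change
rosrun dynamic_reconfigure dynparam get /kinect_head/driver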

Filtering the point clouds/depth image so that data comes only from the regions of interest (the two counter tops and the table) requires the robot to be localized. Slight localization errors are OK, but larger errors lead to false object detections.
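
A quick way to sanity-check localization is to print the transform between the map frame and the robot base and compare it with where the robot actually stands; the frame names below are the usual PR2 defaults and may differ on your setup:

# Prints the map -> base_footprint transform once per second
rosrun tf tf_echo /map /base_footprint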

Running the perception pipeline for the demo

To start a pipeline run:

rosrun iai_rs_cpp rs_runAE demo

Note: on the demo PC this can also be done by running rs_run <AE_name> (e.g. rs_run demo).

Optional: you can run the realtime urdf filter, which will filter the robot out of the 3D data:

roslaunch realtime_urdf_filter realtime_urdf_filter.launch
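
To check that the filter is actually publishing a filtered depth image/point cloud, list its output topics; the exact topic names depend on the launch file, so the pattern below is only a guess:

# Look for the filter's output topics (names depend on the launch file)
rostopic list | grep -i filter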

Note 1: if you use the URDF filter, the camera config file path in CollectionReader2.xml has to be changed. Open CollectionReader2.xml, which is located in the {PROJ_HOME}/descriptors/annotators/ folder, and look for the lines:

  <nameValuePair>
    <name>camera_config_file</name>
    <value>
    <!--    <string>config/config_Kinect_robot_urdf_filter.ini</string>-->
    <string>config/config_Kinect_robot.ini</string>
    </value>
  </nameValuePair>

Uncomment the line containing the URDF filter config and comment out the line after it.
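
After the edit, the block should look like this (URDF filter config active, plain Kinect config commented out):

  <nameValuePair>
    <name>camera_config_file</name>
    <value>
    <string>config/config_Kinect_robot_urdf_filter.ini</string>
    <!--    <string>config/config_Kinect_robot.ini</string>-->
    </value>
  </nameValuePair>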

Note 2: robot self-filtering does not work on the demo PC yet.

Using MLNs

To use the results from the MLN-based inference, run:

roslaunch mln_query mln_query.launch

The MLN atoms generator and the inferencer are part of the pipeline by default, and the demo will run even if mln_query is not running.
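
To confirm that the query node came up after the launch, a node listing can be checked; the exact node name is defined by the launch file, so the pattern below is an assumption:

# Look for an MLN-related node after launching mln_query.launch
rosnode list | grep -i mln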

Pipeline definition

The pipeline for the demo is defined in an analysis engine called demo.xml. It can be found in ${PROJ_DIR}/descriptors/analysis_engines. The important part is:

<flowConstraints>
    <fixedFlow>
    <node>CollectionReader2</node>
    <node>URDFRegionFilter</node>
    <node>NormalEstimator</node>
    <node>PlaneAnnotator</node>
    <node>PointCloudClusterExtractor</node>
    <node>SpatulaSegmentation</node>
    <node>Cluster3DGeometryAnnotator</node>
    <node>PrimitiveShapeAnnotator</node>
    <!-- <node>ClusterTracker</node> -->
    <!-- <node>ClusterGogglesAnnotator</node> -->
    <node>SacModelAnnotator</node>
    <node>LinemodAnnotator</node>
    <node>ClusterColorHistogramCalculator</node>
    <node>MLNAtomsGenerator</node>
    <node>MLNInferencer</node>
    <node>PancakeAnnotator</node>
    <node>DisplayAnnotator</node>
    <node>ResultAdvertiserAnnotator</node>
    </fixedFlow>
</flowConstraints>

To turn modules on or off, simply (un)comment them; no recompilation is needed. Each module in the pipeline has its own configuration file, located in {PROJ_SOURCE}/descriptors/annotators, where the parameters of the algorithms can be changed (e.g. min. cluster size, min. plane size, binary threshold limit, etc.).
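
For example, to skip the Linemod-based detection for a run, comment out its node in the fixedFlow; the same pattern works for any annotator in the list:

    <!-- <node>LinemodAnnotator</node> -->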

Troubleshooting

You can use the visual output of RoboSherlock to see what the perception system thinks it sees. Select the window and press the left/right arrow keys to navigate through the results of the different annotators.

For debugging purposes there is a module called TFBroadcaster, which publishes the poses of the clusters it sees to tf. It is not recommended to run this module during the actual demo!
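
With TFBroadcaster enabled, the published cluster frames can be inspected with the standard tf tools; the cluster frame name below is only a placeholder, the actual names are whatever the module publishes:

# Dump the current tf tree to frames.pdf and look for the cluster frames
rosrun tf view_frames

# Echo a specific cluster frame relative to the map (frame name is a placeholder)
rosrun tf tf_echo /map /cluster_0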

The most common cause of an object being perceived in the wrong pose is that the robot moved, but perception was still processing an old frame. Take care when you start listening for results! In its current state the whole pipeline can take up to 5 seconds!!! (sorry about that :D)
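
To see how old the frame currently being processed is, the header stamp of the camera topic can be compared with the current ROS time; the topic name below is the usual PR2 kinect_head topic and may differ:

# Print the header stamp of one incoming point cloud message (topic name is an assumption)
rostopic echo -n 1 /kinect_head/depth_registered/points/header/stamp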

If the spatulas are not detected, try fiddling with the lights :D
