====== Running RoboSherlock for the robohow demo ======

__Important:__ Before starting anything, make sure that the Kinect on the robot is set to high-resolution mode and that the robot is localized. High resolution is important for getting correct poses for objects that are detected based on RGB images. To change the resolution, run:

  rosrun rqt_reconfigure rqt_reconfigure

Find //kinect_head//, select //driver// and change //image_mode// from **VGA_30HZ** to **SXGA_15Hz**.

Filtering the point clouds/depth image so that data only comes from the regions of interest (the two counter tops and the table) requires the robot to be localized. Slight errors in localization are OK, but larger errors lead to false object detections.

==Running the perception pipeline for the demo==

To start a pipeline, run:

  rosrun iai_rs_cpp rs_runAE demo

__Note:__ on the demo PC this can also be done by running //**rs_run**//.

__Optional:__ you can run the realtime URDF filter, which filters the robot out of the 3D data:

  roslaunch realtime_urdf_filter realtime_urdf_filter.launch

__Note1:__ if you use the URDF filter, the camera config file path has to be changed in //CollectionReader2.xml//. To do this, go to the project home folder and open //CollectionReader2.xml//. It is located in the //{PROJ_HOME}/descriptors/annotators/// folder. Look for the lines:

  camera_config_file
  config/config_Kinect_robot.ini

Uncomment the line containing the URDF filter and comment out the line after it.

__Note2:__ robot self-filtering does not work on the demo PC yet.

==Using MLNs==

To use the results from the MLN-based inferencing, run:

  roslaunch mln_query mln_query.launch

The MLN atoms generator and the inferencer are in the pipeline by default, but the demo will run even if mln_query is not running.

==Pipeline definition==

The pipeline for the demo is defined in an analysis engine called //demo.xml//. It can be found in //${PROJ_DIR}/descriptors/analysis_engines//. The important part is the list of annotators:

  CollectionReader2
  URDFRegionFilter
  NormalEstimator
  PlaneAnnotator
  PointCloudClusterExtractor
  SpatulaSegmentation
  Cluster3DGeometryAnnotator
  PrimitiveShapeAnnotator
  SacModelAnnotator
  LinemodAnnotator
  ClusterColorHistogramCalculator
  MLNAtomsGenerator
  MLNInferencer
  PancakeAnnotator
  DisplayAnnotator
  ResultAdvertiserAnnotator

To turn modules on or off, simply (un)comment them. No recompilation is needed.

Each module in the pipeline has its own configuration file, located in //{PROJ_SOURCE}/descriptors/annotators//. Parameters of the algorithms can be changed there (e.g. minimum cluster size, minimum plane size, binary threshold limit, etc.).

===== Troubleshooting =====

You can use the visual output of RoboSherlock to see what the perception system thinks it sees. Select the window and press //left// and //right// to navigate through the results of the different annotators.

For debugging purposes, there is a module called TFBroadcaster, which publishes the poses of the clusters it sees to tf. It is **not** recommended to run this module during the actual demo!

The most common cause of perceiving an object in the wrong pose is that the robot moved while perception was still processing an old frame. Take care when you start listening for results! In its current state the whole pipeline might take up to **5 seconds**!

If the spatulas are not detected, try adjusting the lighting.
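If you do run the TFBroadcaster module for debugging, you can inspect the broadcast cluster poses directly from a terminal. This is only a minimal sketch: the frame names below (///map// as the reference frame, //cluster_0// as a cluster frame) are assumptions, not guaranteed names, so first check the actual frame IDs with //tf_monitor// or the RViz TF display.

  rosrun tf tf_monitor
  rosrun tf tf_echo /map /cluster_0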
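If object poses look systematically off, double-check that the Kinect is really running in high-resolution mode. The commands below are a sketch using the //dynamic_reconfigure// command-line client; the node path ///kinect_head/driver// and the numeric enum value for **SXGA_15Hz** are assumptions that depend on the driver version, so verify them with //dynparam list// and against the values shown in rqt_reconfigure before using the set command.

  rosrun dynamic_reconfigure dynparam list
  rosrun dynamic_reconfigure dynparam get /kinect_head/driver
  rosrun dynamic_reconfigure dynparam set /kinect_head/driver image_mode 1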
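To avoid acting on stale results, it helps to watch when new results actually arrive. Assuming the ResultAdvertiserAnnotator publishes on a ROS topic (the exact topic name is not documented here, so the grep pattern is only a guess), find the topic with //rostopic list// and then check its rate:

  rostopic list | grep -i result
  rostopic hz <result_topic_from_the_list_above>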