Perception Tutorial
A tutorial describing the first steps needed to start processing data from the Kinect, in the form of point clouds, using tools included in ROS Fuerte.
1. Prerequisites
Follow the steps described in Recommended procedure for installing ROS and in Getting Started with ROS.
If you are using a PrimeSense device (Xbox Kinect, Asus Xtion), check whether ros-fuerte-openni-kinect is installed. You can do this by running the following command in a terminal:
dpkg -l | grep ros-fuerte-openni-kinect
If it is not present, install it using:
sudo apt-get install ros-fuerte-openni-kinect
Check out the code presented in the seminar into your ROS workspace from:
https://github.com/ai-seminar/perception-tutorials.git
A prerecorded bag file can be found in ../tutorial_pkg/data/.
2. Viewing and recording data from the Kinect
For a PrimeSense device: connect it to your PC and run:
roslaunch openni_launch openni.launch
Run rviz:
rosrun rviz rviz
Add a new PointCloud2 display type in rviz and choose /camera/depth_registered/points in the Topic field. If rviz reports errors, change Fixed Frame in Global Options from <Fixed Frame> to one of the frames available in the dropdown list (e.g. /camera_link).
Use rosbag to record data in a bag file, e.g.:
rosbag record /camera/depth_registered/points /tf
Note: /tf is needed if you want to view the recorded data using rviz. Play back a bag file using:
rosbag play filename.bag --loop
More details and a more elegant way of saving data from a Kinect to bag files can be found at http://www.ros.org/wiki/openni_launch/Tutorials/BagRecordingPlayback
- viewing with rviz (point clouds) and image_view (images)
- recording using bag files, or saving clouds to *.pcd files using the tool in ros_pcl
- replaying bag files and viewing them in rviz
- subscribing to the PointCloud topic from C++ (see the sketch after this list)
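The last item can be written as a minimal C++ node, sketched below. The node name cloud_listener is illustrative, the topic matches the one recorded above, and the Fuerte-era conversion header pcl/ros/conversions.h is assumed; treat this as a starting point rather than the exact seminar code.

// Minimal subscriber sketch (node name and message handling are illustrative).
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/ros/conversions.h>  // pcl::fromROSMsg in the Fuerte-era PCL

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  // Convert the ROS message to a PCL cloud and report its size.
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  pcl::fromROSMsg(*msg, cloud);
  ROS_INFO("Received a cloud with %lu points", (unsigned long)cloud.points.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_listener");
  ros::NodeHandle nh;
  // Subscribe to the registered depth cloud published by openni.launch.
  ros::Subscriber sub = nh.subscribe("/camera/depth_registered/points", 1, cloudCallback);
  ros::spin();
  return 0;
}

After adding the source file to your package's CMakeLists.txt, run the node while openni.launch (or a bag file) is publishing the topic.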
3. Processing Point Clouds
- reading in a *.pcd file
- using PCLVisualizer
- removing NaNs
- filtering based on axes
- downsampling the point cloud
- RANSAC plane fitting
- extracting indices
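The steps listed above are illustrated in a single minimal sketch, assuming the PCL version shipped with ROS Fuerte. The input file name table_scene.pcd and the filter limits, leaf size, and distance threshold are placeholder values, not the ones used in the seminar code.

#include <vector>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/filter.h>              // removeNaNFromPointCloud
#include <pcl/filters/passthrough.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/visualization/pcl_visualizer.h>

int main(int argc, char** argv)
{
  typedef pcl::PointXYZ PointT;
  pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>);

  // Reading in a pcd file (file name is a placeholder).
  if (pcl::io::loadPCDFile<PointT>("table_scene.pcd", *cloud) < 0)
    return -1;

  // Removing NaNs so that the later steps only see valid points.
  std::vector<int> valid_indices;
  pcl::removeNaNFromPointCloud(*cloud, *cloud, valid_indices);

  // Filtering based on an axis (here z, limits in meters are example values).
  pcl::PassThrough<PointT> pass;
  pass.setInputCloud(cloud);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(0.0, 1.5);
  pass.filter(*cloud);

  // Downsampling the point cloud with a voxel grid.
  pcl::VoxelGrid<PointT> voxel;
  voxel.setInputCloud(cloud);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);
  voxel.filter(*cloud);

  // RANSAC plane fitting.
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::SACSegmentation<PointT> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);

  // Extracting indices: keep the plane inliers
  // (setNegative(true) would keep everything except the plane instead).
  pcl::PointCloud<PointT>::Ptr plane(new pcl::PointCloud<PointT>);
  pcl::ExtractIndices<PointT> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(false);
  extract.filter(*plane);

  // Using PCLVisualizer to view the result.
  pcl::visualization::PCLVisualizer viewer("tutorial");
  viewer.addPointCloud<PointT>(plane, "plane");
  viewer.spin();
  return 0;
}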