  Manipulation is the control of the movements of the robot's joints and links. It allows the robot to execute correct motions and carry out commands coming from the upper layer. Depending on the values received, the robot can, for example, grasp objects or set them down.
 Giskardpy is an important library for the calculation of movements. The manipulation is implemented in Python with ROS, using the iai_kinematic_sim and iai_hsr_sim libraries in addition to Giskardpy.
 </html>{{ :robocupfiles:toya02.png?nolink&300 |}} <html>
   </p>
   <p><b><u>Planning</u>:</b><br>
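Giskardpy computes whole-body motions internally; as a much simplified illustration of the kind of kinematic calculation such a library performs (this is not Giskardpy's actual API), the following sketch computes the forward kinematics of a hypothetical two-link planar arm with assumed link lengths:

```python
import math

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """Compute the end-effector (x, y) of a planar two-link arm.

    theta1, theta2: joint angles in radians (illustrative values).
    l1, l2: link lengths in meters (assumed, not the real robot's).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at 0 the arm is fully stretched along the x-axis,
# so the end effector sits at (l1 + l2, 0).
x, y = forward_kinematics(0.0, 0.0)
```

In the real system the manipulation layer solves the inverse problem — finding joint values that reach a requested pose — which Giskardpy handles through constraint-based optimization.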
   </p>
   <p><b><u>Perception</u>:</b><br>
   The perception module's task is to process the visual data received from the robot's camera sensors. The cameras send point clouds forming the different objects in the robot's viewport. Using frameworks such as RoboSherlock and OpenCV, the perception module can compute geometric features of the scene. Based on these, it figures out which point clouds could be objects of interest. It recognizes the shapes, colors, and other features of objects and publishes this information to the other modules.
   </html>{{ :robocupfiles:objects_perception01.png?nolink&200 |}}<html>
   </p>
   </div>
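RoboSherlock runs full perception pipelines for this; as a minimal, self-contained sketch of one underlying idea — grouping nearby points of a cloud into candidate objects (Euclidean clustering) — one could write the following, where all point data and the distance threshold are invented for illustration:

```python
import math

def euclidean_clusters(points, max_dist=0.05):
    """Group 3D points into clusters: two points share a cluster if they
    are within max_dist of each other, directly or through a chain of
    neighbors. Each resulting cluster is a candidate object.
    """
    clusters = []
    unassigned = list(points)
    while unassigned:
        seed = unassigned.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            # Collect unassigned points close enough to p.
            near = [q for q in unassigned if math.dist(p, q) <= max_dist]
            for q in near:
                unassigned.remove(q)
                cluster.append(q)
                frontier.append(q)
        clusters.append(cluster)
    return clusters

# Two well-separated groups of points yield two candidate objects.
cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0),
         (1.0, 1.0, 0.5), (1.01, 1.0, 0.5)]
```

A real pipeline would first remove support surfaces (e.g. the table plane) and then classify each cluster by shape and color before publishing it.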
  
 === Link to open source and research ===
 <html>
  <ul style="list-style-type:disc">
   <li><a href="https://github.com/Suturo1819">SUTURO 18/19</a></li>
   <li><a href="https://github.com/SemRoCo/giskardpy">Giskardpy</a></li>
   <li><a href="https://github.com/code-iai">Code IAI</a></li>
  </ul>
 </html>



