</html>
=== Team ===
{{ :robocupfiles:toya01.jpg?nolink&150| Toya }}
<html>
<br>
  <p align="center" display="inline-block">
    Alina Hawkin <br> hawkin[at]uni-bremen.de
    Vanessa Hassouna <br> hassouna[at]uni-bremen.de
  </p>
  <br>
  <br>
</html>
=== Methodology and implementation ===
<html>
  <div display="inline-block">
    <p><b><u>Navigation</u>:</b><br>
In order to perform its task, the robot needs to navigate the world autonomously and safely. This includes building a representation of its surroundings as a 2D map, on which a path can be planned to take the robot from point A to point B. Obstacles that are not represented in the map, e.g. movable objects such as chairs or people crossing the robot's path, must also be detected and avoided. Together, these capabilities allow the robot to navigate safely within its environment.
<iframe width="560" height="315" src="https://www.youtube.com/embed/pdhZrSwF7dA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
  </p>
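</html>
Below is a minimal sketch of the planning step described above: the map is treated as a 2D occupancy grid and A* searches for a collision-free path from A to B. The grid contents, 4-connected neighbourhood, and function names are illustrative assumptions, not the team's actual navigation stack.
<code python>
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    visited = set()
    while open_set:
        _, cost, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists

# Toy map: 0 = free space, 1 = an obstacle such as a wall or a chair
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
print(astar(grid, start=(0, 0), goal=(4, 3)))
</code>
<html>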
  <p><b><u>Manipulation</u>:</b><br>
  </p>
  <p><b><u>Perception</u>:</b><br>
</html>{{ :robocupfiles:objects_perception01.png?nolink&200 |}}<html>
  The perception module processes the visual data received from the robot's camera sensors. The cameras provide point clouds representing the different objects in the robot's view. Using frameworks such as RoboSherlock and OpenCV, the perception module computes the geometric features of the scene and, based on these, determines which point clouds could be objects of interest. It recognizes the shapes, colors, and other features of objects and publishes this information to the other modules.
  </p>
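</html>
As a small illustration of the colour-and-shape step described above, the sketch below segments an image by colour with OpenCV and classifies each contour by its approximated polygon. The colour range, thresholds, and test image are illustrative assumptions; the team's pipeline additionally processes 3D point clouds via RoboSherlock.
<code python>
import cv2
import numpy as np

def detect_red_objects(image_bgr):
    """Return (shape, bounding box) for each red region in the image."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 in HSV, so combine two ranges
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for cnt in contours:
        if cv2.contourArea(cnt) < 100:   # ignore small noise blobs
            continue
        approx = cv2.approxPolyDP(cnt, 0.04 * cv2.arcLength(cnt, True), True)
        shape = {3: "triangle", 4: "rectangle"}.get(len(approx), "round")
        results.append((shape, cv2.boundingRect(cnt)))
    return results

# Synthetic test image: one red rectangle on a black background
img = np.zeros((200, 200, 3), dtype=np.uint8)
img[50:120, 60:140] = (0, 0, 255)        # BGR red
print(detect_red_objects(img))           # [('rectangle', (60, 50, 80, 70))]
</code>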