robocup19: revision 2018/11/29 16:08 (s_7teji4) compared with revision 2018/11/29 16:12 (s_7teji4)
</html>
=== Team ===
{{ :robocupfiles:toya01.jpg?nolink&150| Toya }}
<html>
<br>
  <p align="center" display="inline-block">
    Alina Hawkin <br> hawkin[at]uni-bremen.de
    …
    Vanessa Hassouna <br> hassouna[at]uni-bremen.de
  </p>
  <br>
  <br>
</html>
=== Methodology and implementation ===
…
  <p><b><u>Planning</u>:</b><br>
  Planning connects perception, knowledge, manipulation and navigation by creating plans for the robot's activities. Here, we develop generic strategies so that the robot can decide which action should be executed in which situation and in which order. Planning is also responsible for failure handling and for providing recovery strategies. We write the plans in the programming language Lisp.
  </p>
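The plans themselves are written in Lisp; purely as an illustration of the pattern described above (ordering actions, handling failures, and retrying with a recovery strategy), here is a minimal Python sketch. All names in it (`perform`, `grasp_cup`, `ActionFailed`) are hypothetical and do not come from our codebase.

```python
# Hypothetical sketch of the plan structure described above; the real plans
# are written in Lisp. This only illustrates ordering actions and
# recovering from failures by retrying.

class ActionFailed(Exception):
    """Raised when a robot action does not succeed."""

def perform(action, retries=2):
    """Try an action; on failure, retry up to `retries` times."""
    for attempt in range(retries + 1):
        try:
            return action()
        except ActionFailed:
            if attempt == retries:
                raise  # no recovery possible, propagate the failure
            print(f"{action.__name__} failed, retrying ({attempt + 1})")

def grasp_cup():
    # Stand-in for a manipulation call; assumed to succeed here.
    return "cup grasped"

def plan():
    # A generic strategy: decide which actions to execute and in which order.
    return [perform(grasp_cup)]

print(plan())
```

In the same spirit, the real Lisp plans wrap each action in failure-handling constructs so that a failed grasp, for example, triggers a re-perception and a second attempt instead of aborting the whole task.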
  <p><b><u>Perception</u>:</b><br>
  The perception module processes the visual data captured by the robot's camera sensors. The cameras provide point clouds representing the different objects in the robot's viewport. Using frameworks such as RoboSherlock and OpenCV, the perception module computes geometric features of the scene and determines which point-cloud clusters could be objects of interest. It recognizes the shapes, colors and other features of these objects and publishes this information to the other modules.
  </p>
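To make the idea of extracting geometric features from point-cloud clusters concrete, here is a small illustrative Python sketch. It is not RoboSherlock's or OpenCV's actual API: it greedily groups nearby 3D points into clusters and computes a centroid and bounding-box extent for each, which is the kind of geometric feature the text refers to.

```python
# Illustrative only (not the RoboSherlock API): group a point cloud into
# clusters that could be objects of interest, then compute simple
# geometric features (centroid, axis-aligned bounding-box extent).
import math

def cluster_points(points, max_dist=0.1):
    """Greedy clustering: a point joins a cluster if it is close to any member."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= max_dist for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def features(cluster):
    """Centroid and axis-aligned bounding-box size of one cluster."""
    n = len(cluster)
    centroid = tuple(sum(axis) / n for axis in zip(*cluster))
    extent = tuple(max(axis) - min(axis) for axis in zip(*cluster))
    return {"centroid": centroid, "extent": extent}

cloud = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0),   # points of one small object
         (1.0, 1.0, 0.0), (1.02, 1.0, 0.0)]   # points of a second object
for c in cluster_points(cloud):
    print(features(c))
```

In the real pipeline the clusters come from dense depth-camera point clouds and the features are far richer (shape, color, texture), but the flow is the same: segment the cloud, describe each segment, publish the result.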
  </div>



