are available under BSD license, and partly (L)GPL.
  
If you are interested in working on a topic and meet its general criteria, you should have a look at the [[teaching:gsoc2014:application-template|Application Template page]].

===== KnowRob -- Robot Knowledge Processing =====
  
of applications, from understanding instructions from the Web (RoboHow),
describing multi-robot search-and-rescue tasks (SHERPA), assisting elderly
people in their homes (SRS) to industrial assembly tasks ([[http://www.smerobotics.org|SMErobotics]]).
  
KnowRob is an open-source project hosted at [[http://github.com/knowrob|GitHub]]
human-robot interaction (SAPHARI).
Further information, as well as documentation and application
use-cases can be found at the [[http://www.cram-system.org/|CRAM
website]].
  
  
==== CRAM -- Virtual Robot Scenarios in Gazebo ====
{{  :teaching:gsoc:open_drawer.png?nolink&150|}}
**Main Objective:** The development of scenarios and tasks
for human-sized robots. This is done using ROS, the Gazebo robot
and closing drawers and doors.
This involves designing virtual environments for Gazebo and/or writing robot plans in Lisp using the CRAM high-level language, sending commands to virtual PR2 or REEM(-C) robots in Gazebo, and manipulating the simulated environment. The connection to an elaborate high-level system holds a lot of interesting opportunities.
{{  :teaching:gsoc:reem_standing.png?nolink&150|}}
The produced code will, once it works in the simulated environment, be run on the real robot in our laboratory and become part of the high-level behaviour library for the connected robots.
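
To give a rough feel for what such a high-level plan amounts to, the sketch below decomposes an "open the drawer" task in plain Common Lisp. It is only an illustration: it does not use the actual CRAM plan language (designators, perform, failure handling) or the ROS/Gazebo interface, and all names, slots, and command keywords are made up for this example.

<code lisp>
;; Illustrative sketch of an "open the drawer" task decomposition.
;; None of this is CRAM API: real plans use the CRAM plan language,
;; designators and a ROS bridge to Gazebo; all names here are placeholders.

(defstruct drawer-description
  (handle-pose nil)        ; pose of the drawer handle in the map frame
  (opening-distance 0.4))  ; how far to pull the drawer open, in metres

(defun open-drawer-plan (drawer send-goal)
  "Break the task into the steps a simulated PR2/REEM would execute.
SEND-GOAL stands in for whatever sends motion goals to the robot."
  (funcall send-goal :navigate-to (drawer-description-handle-pose drawer))
  (funcall send-goal :grasp       (drawer-description-handle-pose drawer))
  (funcall send-goal :pull        (drawer-description-opening-distance drawer))
  (funcall send-goal :release))

;; Example: print the goals instead of sending them to the simulator.
(open-drawer-plan
 (make-drawer-description :handle-pose '(1.2 0.3 0.8))
 (lambda (&rest goal) (format t "~&sending goal: ~s~%" goal)))
</code>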
  
contributions to the software library that can be used as part of a
robot's control program.

Contact: [[team/jan_winkler|Jan Winkler]]
  
==== Topic 2: CRAM -- Symbolic Reasoning Tools with Bullet ====
{{  :teaching:gsoc:handle_detection_2.png?nolink&200|}}
**Main Objective:** Mapping the environment to the internal belief state representation and keeping track of changes in the environment to keep the belief state up to date based on manipulation and interaction tasks performed by the robot.\\
**Possible sub-projects:**
  * reflecting changes in the environment, such as "a drawer has been opened", in the belief state, up to storing the precise angle the door is at after opening (see the sketch further below)
  * assuring the physical consistency of the belief state, i.e. correcting explicitly wrong data coming from sensors to the closest logically sound values
  * improving the representation of previously unseen objects in the belief state, e.g. by interpolating the point cloud into a valid 3D mesh
  * improving the visualization of the belief state and of the robot's intentions, e.g. visualizing the navigation goals the robot generated before starting a navigation task
  * many other ideas that we can discuss based on applicants' individual interests and abilities
**Task Difficulty:** Relatively simple when making the existing change tracking more robust, and more challenging when introducing new kinds of change tracking.\\
 +{{  :teaching:gsoc:pr2_dishwasher.jpg?nolink&200|}}
**Requirements:** At least a basic understanding of functional programming is advisable (ideally Lisp); basic knowledge of ROS helps. A good understanding of geometric shapes and coordinate transformations also helps.\\
**Expected Results:** We expect operational and robust contributions to the software library that can be used as part of a robot's control program.
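
As a rough illustration of the change-tracking and consistency ideas listed above, the sketch below keeps a drawer's opening angle in a toy belief-state record and clamps sensor readings to the physically possible range. This is a hypothetical plain-Lisp example, not the actual CRAM belief-state code, which is maintained in the Bullet-based reasoning framework.

<code lisp>
;; Toy belief-state entry for an articulated joint (drawer or door).
;; Hypothetical example: the actual CRAM belief state uses different
;; data structures inside the Bullet-based reasoning framework.

(defstruct joint-belief
  (angle 0.0)       ; currently believed opening angle, in radians
  (min-angle 0.0)   ; physical lower limit of the joint
  (max-angle 1.6))  ; physical upper limit of the joint

(defun update-joint-belief (belief measured-angle)
  "Store MEASURED-ANGLE in BELIEF, clamping it to the joint limits so that an
implausible sensor reading is replaced by the closest physically sound value."
  (setf (joint-belief-angle belief)
        (max (joint-belief-min-angle belief)
             (min (joint-belief-max-angle belief) measured-angle)))
  belief)

;; Example: a reading of 2.1 rad exceeds the upper limit and is clamped to 1.6.
(let ((drawer-joint (make-joint-belief)))
  (update-joint-belief drawer-joint 2.1)
  (joint-belief-angle drawer-joint))
</code>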
  
For more information consult the [[http://cram-system.org/doc/reasoning/overview|documentation]].

Contact: [[team/gayane_kazhoyan|Gayane Kazhoyan]]

==== Topic 3: KnowRob -- Reasoning about 3D CAD models of objects ====
<html><div style="float:right; margin-left:10px;"><iframe src="//player.vimeo.com/video/83977706" width="300" height="200" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div></html>
to the software library that can be used as part of a robot's control
program.

Contact: [[team/moritz_tenorth|Moritz Tenorth]]



