teaching:gsoc2015 [2015/03/19 07:24] winkler [Google Summer of Code 2015]
teaching:gsoc2015 [2015/03/19 14:49] gkazhoya [Google Summer of Code 2015]
If you are interested in working on a topic and meet its general criteria, you should have a look at the [[teaching:gsoc2015:application-template|Application Template page]].
  
For a PDF version of the ideas page and a brief introduction of our research group, please see {{:teaching:15-googlesummerofcode.pdf|this document}}.
  
  
  
Contact: [[team/andrei_haidu|Andrei Haidu]]


==== Topic 3: CRAM -- Symbolic Reasoning Tools with Bullet ====

{{  :teaching:gsoc:handle_detection_2.png?nolink&200|}}

**Main Objective:** Extending a cognitive robot control framework (written in modern Common Lisp (SBCL) and partly C++) by mapping the robot's environment to its internal world representation and tracking changes in the environment, so that the internal world state stays up to date while the robot performs manipulation tasks in human environments.

{{  :teaching:gsoc:pr2_dishwasher.jpg?nolink&200|}}
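The belief-state bookkeeping described above can be sketched roughly as follows. This is an illustrative Python sketch only (the actual framework is written in Common Lisp and C++), and every name in it is hypothetical:

```python
from dataclasses import dataclass

# Illustrative sketch only: the real system is written in Common Lisp
# (SBCL); all class and method names here are hypothetical.

@dataclass
class ObjectState:
    name: str
    pose: tuple  # (x, y, z) position in the map frame

class BeliefState:
    """Minimal internal world representation: a name -> state map."""

    def __init__(self):
        self.objects = {}

    def update(self, percept: ObjectState):
        # Overwrite the stored state with the freshly perceived one,
        # keeping the internal world model in sync with the environment.
        self.objects[percept.name] = percept

    def pose_of(self, name: str):
        return self.objects[name].pose

belief = BeliefState()
belief.update(ObjectState("mug", (0.5, 0.1, 0.8)))
belief.update(ObjectState("mug", (0.5, 0.1, 0.0)))  # the mug moved
print(belief.pose_of("mug"))
```

In the real system such updates additionally pass through the Bullet physics engine, so that stored poses remain physically plausible rather than raw sensor readings.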

Possible sub-projects:

  * assuring the physical consistency of the belief state by utilizing the integrated Bullet physics engine, i.e. correcting explicitly wrong sensor data to the closest physically sound values;
  * improving the representation of previously unseen objects in the belief state by, e.g., interpolating the point cloud into a valid 3D mesh;
{{  :teaching:gsoc:to-reach.png?nolink&200|}}
  * improving the visualization of the belief state and the intentions of the robot in RViz (e.g., highlighting the next object to be manipulated, printing textual information about the goals);
  * updating the internal world state to reflect changes in the environment such as "a drawer has been opened" (up to storing the precise angle of the door after opening), and emitting corresponding semantically meaningful events, e.g. an "object-fell-down" event when the z coordinate of an object drastically changes;
  * many other ideas that we can discuss based on applicants' individual interests and abilities.
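As a rough illustration of the event-emission idea in the sub-project list, the following Python sketch compares an object's tracked z coordinate between two belief-state updates and emits a semantic event when it drops sharply. The function name and the 0.2 m threshold are assumptions for illustration, not part of the actual system:

```python
# Hypothetical sketch of the "object-fell-down" event idea: compare each
# object's z coordinate between belief-state updates and emit a semantic
# event when it drops by more than a threshold. The threshold value and
# all names are illustrative assumptions.

FALL_THRESHOLD = 0.2  # metres; a drastic drop in z suggests a fall

def detect_events(previous_z: dict, current_z: dict) -> list:
    """Return semantic event strings for objects whose z dropped sharply."""
    events = []
    for name, z_now in current_z.items():
        z_before = previous_z.get(name)
        if z_before is not None and z_before - z_now > FALL_THRESHOLD:
            events.append(f"object-fell-down: {name}")
    return events

before = {"mug": 0.80, "plate": 0.75}
after = {"mug": 0.02, "plate": 0.74}  # the mug dropped to the floor
print(detect_events(before, after))   # only the mug exceeds the threshold
```

A real implementation would listen to pose updates of the Bullet-based world model rather than compare raw dictionaries, but the thresholding logic would be similar.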

**Task Difficulty:** Moderate: most of the sub-projects involve diving into a small chunk of an already existing system and extending it from within, as opposed to developing plug-ins and the like.

**Requirements:** Familiarity with functional programming paradigms; minimal functional programming experience is sufficient (the preferred language is Lisp, but experience with Haskell, Scheme, OCaml, Clojure, or any language that supports functional programming, e.g. Julia or Java 8, will do). Experience with ROS (Robot Operating System) is a plus. A good understanding of geometric shapes and coordinate system transformations is also a plus.

**Expected Results:** We expect operational and robust contributions to the source code of the existing robot control system, including minimal documentation and test coverage.

Contact: [[team/gayane_kazhoyan|Gayane Kazhoyan]]

  
==== Topic 4: Multi-modal Big Data Analysis for Robotic Everyday Manipulation Activities ====