====== Google Summer of Code 2018 ======
~~NOTOC~~

In the following we briefly present the [[#software|open source frameworks]] that are participating in this year's Google Summer of Code. This can be useful if you would like to propose your own topic.

For the **proposed topics**, see the [[#proposed_topics|section]] further below.

For **Q&A**, check out our [[https://gitter.im/iai_gsoc18/Lobby|Gitter page]].


===== Software =====

===== pracmln =====
  
===== RoboSherlock =====
  
RoboSherlock builds on top of the ROS ecosystem and is able to wrap almost any existing perception algorithm/framework, allowing the results of these to be combined easily and coherently. The framework is closely integrated with two of the most popular libraries used in robotic perception, namely OpenCV and PCL. More details about RoboSherlock can be found on the project [[http://robosherlock.org/|webpage]].
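
To give a flavor of what wrapping an algorithm looks like, the sketch below outlines a UIMA-style C++ annotator that would run a PCL routine and write its hypotheses back into the shared analysis structure. The class is illustrative and follows generic uimacpp conventions; the actual RoboSherlock base classes and CAS views may differ.

<code cpp>
// Minimal sketch of a UIMA-style C++ annotator, the unit RoboSherlock uses
// to wrap perception algorithms. Names follow generic uimacpp conventions
// and are illustrative, not the exact RoboSherlock API.
#include <uima/api.hpp>

class PlaneSegmentationAnnotator : public uima::Annotator
{
public:
  uima::TyErrorId initialize(uima::AnnotatorContext &ctx)
  {
    // Read algorithm parameters (e.g. distance thresholds) from the
    // analysis engine descriptor here.
    return UIMA_ERR_NONE;
  }

  uima::TyErrorId process(uima::CAS &cas, uima::ResultSpecification const &spec)
  {
    // 1. Fetch the shared sensor data (point cloud, image) from the CAS,
    //    the common analysis structure all annotators read from and write to.
    // 2. Run the wrapped algorithm, e.g. PCL plane segmentation.
    // 3. Write the resulting object hypotheses back into the CAS so that
    //    later annotators can refine or merge them.
    return UIMA_ERR_NONE;
  }
};

MAKE_AE(PlaneSegmentationAnnotator);  // export the annotator to the framework
</code>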

===== openEASE -- Web-based Robot Knowledge Service =====

openEASE is a generic knowledge database for collecting and analyzing experiment data. Its foundation is the KnowRob knowledge processing system and ROS, enhanced by reasoning mechanisms and a web interface developed for inspecting comprehensive experiment logs. These logs can be recorded, for example, from complex CRAM plan executions, virtual reality experiments, or human tracking systems. openEASE offers interfaces both for human researchers who want to visually inspect what happened during a robot experiment, and for robots that want to reason about previous task executions in order to improve their behavior.

The openEASE web interface, as well as further information and publication material, can be accessed through its publicly available [[http://www.open-ease.org/|website]]. It is meant to make complex experiment data available to research fields adjacent to robotics, and to foster an intuition about robot experience data.

===== RobCoG - Robot Commonsense Games =====

[[http://robcog.org/|RobCoG]] (**Rob**ot **Co**mmonsense **G**ames) is a framework of various open source games and plugins (https://github.com/robcog-iai) built with the Unreal Engine, with the intention of collecting commonsense and naive physics knowledge and equipping robots with it. Various game prototypes are created in which users are asked to execute kitchen-related tasks. During gameplay, the developed game plugins automatically collect symbolic and sub-symbolic data. The automatically annotated data is then stored in the web-based knowledge service [[http://www.open-ease.org/|openEASE]], which allows robots to access it and reason about it.

The games are split into two categories: (1) VR/full body tracking games with physics-based interactions, where data as close as possible to reality is collected. The users are immersed in a virtual environment and are asked to perform tasks using natural movements. (2) FPS-style, web-based games, where the users interact with objects using a keyboard and mouse. Since these games can be run from a browser (http://open-ease.org/robcogweb), they are easy to crowdsource. The data will be too imprecise for low-level learning, but is still valuable at a higher level (e.g. the positioning of objects, the order of executing actions, etc.).


===== CRAM - Cognition-enabled Robot Executive =====

CRAM is a software toolbox for the design, implementation and deployment of cognition-enabled plan execution on autonomous robots. CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed. This way, CRAM-programmed autonomous robots are more flexible and general than control programs that lack such cognitive capabilities. CRAM does not require the whole reasoning domain to be stated explicitly in an abstract knowledge base. Rather, it grounds symbolic expressions into the perception and actuation routines and into the essential data structures of the control plans. CRAM includes a domain-specific language that makes writing reactive concurrent robot behavior easier for the programmer. It extensively uses the ROS middleware infrastructure.

CRAM is an open-source project hosted on [[https://github.com/cram2/cram|GitHub]]. It has its own [[http://cram-system.org|project page]] that provides extensive documentation and tutorials to help you get started.
  
  
===== Proposed Topics =====
**Requirements:** Good programming skills in the Python programming language (CPython/Cython), experience in Artificial Intelligence and Machine Learning (ideally SRL techniques and logic). Knowledge of C/C++ will be very helpful.
  
**Expected Results:** The core components of pracmln, i.e. the learning
**Contact:** [[team/daniel_nyga|Daniel Nyga]]
  
**Remarks:** If you have questions about this project in advance, about your application, qualification or ways to get started, please post your question in the [[https://gitter.im/iai_gsoc18/pracmln|pracmln Gitter chat]]. Personal e-mails will not be answered.

==== Topic 2: Flexible perception pipeline manipulation for RoboSherlock ====
  
{{  :teaching:gsoc:topic1_rs.png?nolink&145|}}
  
**Main Objective:** RoboSherlock is based on the unstructured information management paradigm and uses the UIMA library at its core. The C++ implementation of this library is limited in multiple ways. In this topic you will develop a module for flexibly managing perception pipelines, extending the current implementation to enable new modalities and to run pipelines in parallel. This involves implementing an API for pipeline and data handling that is rooted in the domain of UIMA.
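
One possible shape for such an API is sketched below; every name here is hypothetical and only illustrates the intended capabilities (splitting pipelines, running branches in parallel, merging results), not existing RoboSherlock code.

<code cpp>
// Hypothetical pipeline-management API illustrating the goal of this topic:
// splitting a pipeline, running the branches in parallel and merging the
// results. None of these classes exist yet in RoboSherlock.
#include <future>
#include <string>
#include <vector>

struct CASView {};  // placeholder for the shared analysis structure

class Pipeline
{
public:
  explicit Pipeline(std::vector<std::string> annotators)
    : annotators_(std::move(annotators)) {}

  CASView run(const CASView &input) const
  {
    CASView out = input;
    // ...execute each annotator in sequence on 'out'...
    return out;
  }

private:
  std::vector<std::string> annotators_;
};

// Run two pipeline branches (e.g. one per camera) concurrently and merge.
CASView runParallel(const Pipeline &a, const Pipeline &b, const CASView &in)
{
  std::future<CASView> fa = std::async(std::launch::async, [&] { return a.run(in); });
  std::future<CASView> fb = std::async(std::launch::async, [&] { return b.run(in); });
  CASView ra = fa.get();
  CASView rb = fb.get();
  // ...merge the object hypotheses from both branches into one result...
  return ra;
}
</code>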
  
**Task Difficulty:** The task is considered to be of medium difficulty.

**Requirements:** Good programming skills in C++ and basic knowledge of CMake and ROS. Experience with PCL and OpenCV is preferred.

**Expected Results:** An extension to RoboSherlock that allows splitting and joining pipelines, executing them in parallel, merging results from multiple types of cameras, etc.

**Assignment:** In order to be considered for this topic you need to solve a short programming assignment described [[https://gist.github.com/bbferka/06b645dfaec068f9fdc7352500583b80|here]].

----

e-mail: [[team/ferenc_balint-benczedi|Ferenc Bálint-Benczédi]]

chat: [[https://gitter.im/iai_gsoc18/RoboSherlock|Gitter]]

==== Topic 3: Unreal - ROS 2 Integration ====

{{  :teaching:gsoc:ue_ros2.png?nolink&150|}}

Since [[https://github.com/ros2/ros2/wiki|ROS 2]] has cross-platform support, it would be of great advantage to wrap it as a module in the Unreal Engine framework. This would greatly improve communication between our RobCoG modules and the ROS world. As a further step, the module should be extended to work under Linux as well. This can be done using the Unreal build system ([[https://docs.unrealengine.com/latest/INT/Programming/UnrealBuildSystem/|UBT]]).
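
As a rough sketch of the intended integration, the fragment below publishes from an Unreal actor through rclcpp. It assumes rclcpp has already been linked into the module via UBT, and the actor class and topic name are our own invention.

<code cpp>
// Rough sketch of an Unreal actor publishing to ROS 2 via rclcpp. Assumes
// rclcpp is linked into the module through UBT; names on the Unreal side
// (ARosBridgeActor, the topic) are illustrative.
#include "GameFramework/Actor.h"
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>
#include "RosBridgeActor.generated.h"  // Unreal requires this as the last include

UCLASS()
class ARosBridgeActor : public AActor
{
  GENERATED_BODY()

public:
  ARosBridgeActor() { PrimaryActorTick.bCanEverTick = true; }

  virtual void BeginPlay() override
  {
    Super::BeginPlay();
    if (!rclcpp::ok()) { rclcpp::init(0, nullptr); }  // bring up ROS 2 once
    Node = std::make_shared<rclcpp::Node>("unreal_bridge");
    Publisher = Node->create_publisher<std_msgs::msg::String>("unreal/status", 10);
  }

  virtual void Tick(float DeltaSeconds) override
  {
    Super::Tick(DeltaSeconds);
    std_msgs::msg::String Msg;
    Msg.data = "tick from Unreal";
    Publisher->publish(Msg);
    rclcpp::spin_some(Node);  // service incoming ROS 2 callbacks on the game thread
  }

private:
  rclcpp::Node::SharedPtr Node;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr Publisher;
};
</code>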

**Task Difficulty:** The task is placed at the medium difficulty level, as it requires programming skills in various frameworks (ROS, Linux, Unreal Engine).

**Requirements:** Good programming skills in C++. Good knowledge of the Unreal Engine API. Experience with ROS, ROS 2, and C++ library linkage in Unreal Engine.

**Expected Results:** We expect an integrated communication layer between ROS 2 and Unreal Engine on both Windows and Linux.

Contact: [[team/andrei_haidu|Andrei Haidu]]

Chat: [[https://gitter.im/iai_gsoc18/unreal|Gitter]]


==== Topic 4: Unreal Editor User Interface Development ====

{{  :teaching:gsoc:ue_editor.png?nolink&200|}}

For this topic we would like to extend the RobCoG modules with intuitive Unreal Engine Editor panels. This would allow easier and faster manipulation/visualization of various parameters.
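
A minimal Slate fragment of the kind such a panel would be built from is shown below; the widgets (SVerticalBox, STextBlock, SButton) are standard Slate, while the panel content and function name are illustrative.

<code cpp>
// Minimal Slate fragment: a vertical box with a caption and a button, the
// kind of widget tree an editor panel for a RobCoG plugin could be
// assembled from. BuildRobCoGPanel is an illustrative name.
#include "Widgets/SBoxPanel.h"
#include "Widgets/Input/SButton.h"
#include "Widgets/Text/STextBlock.h"

TSharedRef<SWidget> BuildRobCoGPanel()
{
  return SNew(SVerticalBox)
    + SVerticalBox::Slot().AutoHeight().Padding(4.0f)
    [
      SNew(STextBlock).Text(FText::FromString(TEXT("RobCoG parameters")))
    ]
    + SVerticalBox::Slot().AutoHeight().Padding(4.0f)
    [
      SNew(SButton)
      .Text(FText::FromString(TEXT("Refresh")))
      .OnClicked_Lambda([]()
      {
        // Re-read and re-display the plugin parameters here.
        return FReply::Handled();
      })
    ];
}
</code>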

**Task Difficulty:** The task is placed at the easy difficulty level, as it only requires familiarity with Unreal Engine's [[https://docs.unrealengine.com/latest/INT/Programming/Slate/|Slate]] framework.

**Requirements:** Good programming skills in C++. Good knowledge of the Unreal Engine API. Experience with the [[https://docs.unrealengine.com/latest/INT/Programming/Slate/|Slate]] framework.

**Expected Results:** We expect intuitive Unreal Engine UI panels for editing and visualizing the data and features of various RobCoG plugins.

Contact: [[team/andrei_haidu|Andrei Haidu]]


==== Topic 5: Unreal openEASE Live Connection ====

{{  :teaching:gsoc:ue_oe.png?nolink&150|}}

For this topic we would like to create a live connection between openEASE and RobCoG. A user should be able to connect to openEASE from the Unreal Engine Editor and perform various queries, for example to verify whether the items from the Unreal Engine world are present in the ontology of the robot. It should also be possible to upload new data directly from the editor.
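
The sketch below shows the general shape of such a connection using Unreal's WebSockets module; the endpoint URL and the query payload are placeholders, since the actual openEASE message format would have to be looked up.

<code cpp>
// Sketch of querying openEASE over a WebSocket from the Unreal Editor,
// using the engine's WebSockets module (it must be enabled for the
// project). The URL and the JSON payload are placeholders.
#include "IWebSocket.h"
#include "WebSocketsModule.h"

void QueryOpenEase()
{
  const FString Url = TEXT("wss://open-ease.example/ws");  // placeholder endpoint
  TSharedRef<IWebSocket> Socket = FWebSocketsModule::Get().CreateWebSocket(Url);

  Socket->OnConnected().AddLambda([Socket]()
  {
    // Hypothetical query: is an object from the UE world known to the ontology?
    Socket->Send(TEXT("{\"query\": \"owl_individual_of(I, 'Cup')\"}"));
  });

  Socket->OnMessage().AddLambda([](const FString &Message)
  {
    UE_LOG(LogTemp, Log, TEXT("openEASE answered: %s"), *Message);
  });

  Socket->Connect();
}
</code>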

**Task Difficulty:** The task is placed at the medium difficulty level, as it requires knowledge of various frameworks/libraries (Unreal Engine, openEASE, C++ WebSocket communication).

**Requirements:** Good programming skills in C++. Good knowledge of the Unreal Engine API. Experience with C++ WebSocket-based communication.

**Expected Results:** We expect a live connection between openEASE and the Unreal Engine Editor.

Contact: [[team/andrei_haidu|Andrei Haidu]], [[team/asil_kaan_bozcuoglu|Asil Kaan Bozcuoğlu]]


==== Topic 6: CRAM -- Visualizing the Robot's Simulation World in RViz ====

{{ :teaching:fetch-left-in-hand-cropped.png?nolink&200|}}

**Main Objective:** CRAM includes a fast simulation engine for developers to test their newly written plans and for robots to try out different parameters of an action before executing it in the real world. Currently, the world is only visualized using raw OpenGL rendering. The objective of this topic is to visualize the robot's simulation world in the ROS visualization tool RViz, including the state of the robot itself, the objects surrounding it and the reasoning processes involved in action execution.
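
CRAM itself is written in Lisp, but RViz only consumes ROS messages, so the idea can be sketched in C++: a bridge node would publish one visualization_msgs/Marker per simulated object. The node and topic names below are arbitrary choices, not part of CRAM.

<code cpp>
// Publishing one simulated object as an RViz marker (ROS 1, C++ for
// brevity; the CRAM side would emit the same visualization_msgs/Marker
// messages from Lisp). Node and topic names are arbitrary.
#include <ros/ros.h>
#include <visualization_msgs/Marker.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "cram_world_viz");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<visualization_msgs::Marker>("cram_world", 10);

  visualization_msgs::Marker m;
  m.header.frame_id = "map";
  m.header.stamp = ros::Time::now();
  m.ns = "simulated_objects";
  m.id = 0;
  m.type = visualization_msgs::Marker::CUBE;   // e.g. a box-shaped object
  m.action = visualization_msgs::Marker::ADD;
  m.pose.position.x = 1.0;                     // pose taken from the simulation world
  m.pose.orientation.w = 1.0;
  m.scale.x = m.scale.y = m.scale.z = 0.1;     // object extents in meters
  m.color.r = 1.0f;
  m.color.a = 1.0f;                            // alpha must be non-zero to be visible

  ros::Rate rate(10);
  while (ros::ok())
  {
    pub.publish(m);  // republish so RViz keeps showing the object
    rate.sleep();
  }
  return 0;
}
</code>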

**Task Difficulty:** The task itself is simple, assuming a good understanding of ROS principles and basic knowledge of RViz. To that, the challenge of learning a small chunk of an existing system (CRAM) is added, so the overall task difficulty is considered medium.


{{ :teaching:fetch-left-in-hand-real-cropped.jpg?nolink&200|}}

**Requirements:**
  * Familiarity with functional programming paradigms: some functional programming experience is a requirement (the preferred language is Lisp, but Haskell, Scheme, OCaml, Clojure, Scala or similar will do);
  * Experience with ROS (Robot Operating System).

**Expected Results:** We expect operational and robust contributions to the source code of the existing robot control system, including documentation.

Contact: [[team/gayane_kazhoyan|Gayane Kazhoyan]]

==== Topic 7: Robot simulation in Unreal Engine with PhysX ====

{{ :teaching:unreal_ros_pr2.png?200|}}

**Main Objective:** The objective of the project is to enable physics-based simulation of robots in [[https://www.unrealengine.com/|Unreal Engine]] using [[http://docs.nvidia.com/gameworks/content/gameworkslibrary/physx/apireference/files/hierarchy.html|PhysX]].

**Task Difficulty:** The task is placed at the hard difficulty level, as it requires programming skills in various frameworks (Unreal Engine, PhysX) as well as expertise in robotic simulation and physics engines.

**Requirements:** Good programming skills in C++. Good knowledge of the Unreal Engine and PhysX APIs. Experience in robotics and robotic simulation is a plus.
  
**Expected Results:** We expect to be able to simulate robots in Unreal, with support for and the ability to control standard joints.
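
As a minimal illustration of the joint-control part, the PhysX 3.x fragment below connects two rigid bodies with a motor-driven revolute joint, the basic building block for actuated robot joints; it assumes an initialized PxPhysics and two existing actors, and the joint frames are example values.

<code cpp>
// Minimal PhysX 3.x sketch: connect two rigid bodies with a driven
// revolute joint. Assumes 'physics', 'base' and 'link' were created
// elsewhere (e.g. by the Unreal/PhysX setup); frame offsets are examples.
#include <PxPhysicsAPI.h>
using namespace physx;

PxRevoluteJoint *createDrivenJoint(PxPhysics &physics,
                                   PxRigidActor *base, PxRigidActor *link)
{
  // Joint frames: where the hinge sits relative to each body.
  PxTransform frameInBase(PxVec3(0.0f, 0.0f, 0.5f));
  PxTransform frameInLink(PxVec3(0.0f, 0.0f, -0.5f));

  PxRevoluteJoint *joint =
      PxRevoluteJointCreate(physics, base, frameInBase, link, frameInLink);

  // Turn the hinge into a motorized (velocity-controlled) joint.
  joint->setRevoluteJointFlag(PxRevoluteJointFlag::eDRIVE_ENABLED, true);
  joint->setDriveVelocity(1.0f);   // target velocity in rad/s
  return joint;
}
</code>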
  
Contact: [[team/andrei_haidu|Andrei Haidu]]



