
Google Summer of Code 2016

The software libraries that originate from our laboratory and are now used and supported by a larger user community are: the KnowRob system for robot knowledge processing, the CRAM framework for plan-based robot control, openEASE for collecting and analyzing experiment data, and RoboSherlock for cognitive perception. Our group has a very strong focus on open-source software and on the active maintenance and integration of projects. The systems we develop are available under the BSD, Apache 2.0, and partly (L)GPL licenses.

For the topics we propose in the context of our work, please refer to the section further below.

For a PDF version of this year's ideas page, and a brief introduction of our research group, please see this document.

When contacting us, please make sure you have read the description of the topic you are interested in carefully. Only contact the person responsible for the topic(s) you are interested in, and please only ask specific, topic-relevant questions; otherwise your emails will not be answered, due to the limited resources we have for processing the vast number of GSoC inquiries. For more general questions, please use our IRC channel.

KnowRob -- Robot Knowledge Processing

KnowRob is a knowledge processing system that combines knowledge representation and reasoning methods with techniques for acquiring knowledge from different sources and for grounding the knowledge in a physical system. It provides robots with knowledge to be used in their tasks, for example action descriptions, object models, environment maps, and models of the robot's hardware and capabilities. The knowledge base is complemented with reasoning methods and techniques for grounding abstract, high-level information about actions and objects in the perceived sensor data.
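
Queries are posed in Prolog; for illustration, the sketch below shows how a ROS node might ask KnowRob for all known cups through the json_prolog interface, following the pattern of the KnowRob C++ client tutorials. Package and header names may differ between KnowRob versions, and the queried class knowrob:'Cup' is illustrative.

    // Minimal sketch: querying KnowRob from C++ via its json_prolog ROS
    // interface. Assumes a running KnowRob/json_prolog node; the query and
    // the class name knowrob:'Cup' are illustrative.
    #include <iostream>
    #include <ros/ros.h>
    #include <json_prolog/prolog.h>

    int main(int argc, char *argv[])
    {
      ros::init(argc, argv, "knowrob_query_example");

      json_prolog::Prolog pl;

      // Ask for all known instances of the class knowrob:'Cup'.
      json_prolog::PrologQueryProxy bdgs =
          pl.query("owl_individual_of(Obj, knowrob:'Cup')");

      // Iterate over all variable bindings that satisfy the query.
      for (json_prolog::PrologQueryProxy::iterator it = bdgs.begin();
           it != bdgs.end(); ++it)
      {
        json_prolog::PrologBindings bdg = *it;
        std::cout << "Obj = " << bdg["Obj"] << std::endl;
      }
      return 0;
    }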

KnowRob has become the main knowledge base in the ROS ecosystem and is actively used in academic and industrial research labs around the world. Several European research projects use the system for a wide range of applications, from understanding instructions from the Web (RoboHow), describing multi-robot search-and-rescue tasks (SHERPA), and assisting elderly people in their homes (SRS), to industrial assembly tasks (SMErobotics).

KnowRob is an open-source project hosted on GitHub that also provides extensive documentation on its website – from getting-started guides to tutorials for advanced topics in robot knowledge representation.

CRAM -- Robot Plans

CRAM is a high-level system for designing and executing abstract robot plans that define intelligent robot behavior. It consists of a library of generic, robot-platform-independent plans, elaborate reasoning mechanisms for detecting and repairing plan failures, and interface modules for executing these plans on real robot hardware. It supplies robots with concurrent, reactive task-execution capabilities and makes use of knowledge-processing backends, such as KnowRob, for information retrieval.

CRAM builds on top of the ROS ecosystem and is actively developed as an open-source project on GitHub. It is the basis for high-level robot control in research labs in many parts of the world, especially in several European research projects, covering applications from geometrically abstract object manipulation (RoboHow) and multi-robot task coordination and execution (SHERPA) to experience-based retrieval of task parameterizations (RoboEarth) and safe human-robot interaction (SAPHARI). Further information, as well as documentation and application use cases, can be found on the CRAM website.

openEASE -- Experiment Knowledge Database

OpenEASE is a generic knowledge database for collecting and analyzing experiment data. Its foundation is the KnowRob knowledge processing system and ROS, enhanced by reasoning mechanisms and a web interface developed for inspecting comprehensive experiment logs. These logs can be recorded, for example, from complex CRAM plan executions, virtual-reality experiments, or human tracking systems. OpenEASE offers interfaces both for human researchers who want to visually inspect what has happened during a robot experiment, and for robots that want to reason about previous task executions in order to improve their behavior.

The OpenEASE web interface, as well as further information and publication material, can be accessed through its publicly available website. It is meant to make complex experiment data available to research fields adjacent to robotics, and to foster an intuition about robot experience data.

RoboSherlock -- Framework for Cognitive Perception

RoboSherlock is a common framework for cognitive perception, based on the principle of unstructured information management (UIM). UIM has proven to be a powerful paradigm for scaling intelligent information and question-answering systems towards real-world complexity (e.g., IBM's Watson system). Complexity in UIM is handled by identifying (or hypothesizing) pieces of structured information in unstructured documents, by applying ensembles of experts to annotate these information pieces, and by testing and integrating the isolated annotations into a comprehensive interpretation of the document.

RoboSherlock builds on top of the ROS ecosystem, is able to wrap almost any existing perception algorithm or framework, and allows the results of these to be combined easily and coherently. The framework is closely integrated with two of the most popular libraries used in robotic perception, namely OpenCV and PCL. More details about RoboSherlock can be found on the project webpage.
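
As a rough illustration of the annotator concept, a RoboSherlock perception expert is a UIMA C++ annotator that reads views from the CAS (common analysis structure), processes them, and writes annotations back. The sketch below loosely follows the annotator skeleton from the RoboSherlock tutorials; the exact header paths, the VIEW_CLOUD view name, and the point type are assumptions that may differ between versions.

    // Sketch of a RoboSherlock annotator, loosely following the tutorial
    // skeleton; header paths and view names are assumptions.
    #include <uima/api.hpp>
    #include <pcl/point_types.h>
    #include <rs/scene_cas.h>

    using namespace uima;

    class ShelfStructureAnnotator : public Annotator
    {
    public:
      TyErrorId initialize(AnnotatorContext &ctx)
      {
        // Parameters from the analysis-engine descriptor would be read here.
        return UIMA_ERR_NONE;
      }

      TyErrorId destroy()
      {
        return UIMA_ERR_NONE;
      }

      TyErrorId process(CAS &tcas, ResultSpecification const &res_spec)
      {
        rs::SceneCas cas(tcas);

        // Fetch the point-cloud view filled by earlier annotators in the pipeline.
        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(
            new pcl::PointCloud<pcl::PointXYZRGBA>);
        cas.get(VIEW_CLOUD, *cloud);

        // ... generate object hypotheses / annotations from the cloud here ...

        return UIMA_ERR_NONE;
      }
    };

    // Export the entry point the UIMA framework uses to instantiate the annotator.
    MAKE_AE(ShelfStructureAnnotator);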

Proposed Topics

In the following, we list our proposed Google Summer of Code topics, all of which contribute to the aforementioned open-source projects.

Topic 1: Multi-modal Cluttered Scene Analysis in Knowledge Intensive Scenarios

Main Objective: The main objective of this topic is to enable robots in human environments to recognize objects in difficult and challenging scenarios. To achieve this, the participant will develop software components for RoboSherlock, called annotators, that are aided by background knowledge in order to detect objects. The target scenarios include stacked or occluded objects placed on shelves, and objects in drawers, refrigerators, dishwashers, cupboards, etc. Such confined spaces typically bear an underlying structure, which will be exploited and used as background knowledge to aid perception (e.g. stacked plates show up as parallel lines under edge detection).
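
As a toy illustration of the stacked-plates cue, an annotator could count near-parallel, near-horizontal edge segments. The OpenCV sketch below shows the idea, with illustrative (untuned) parameter values and a hypothetical input image.

    // Sketch: hypothesizing stacked plates as near-horizontal, near-parallel
    // line segments. Parameter values are illustrative, not tuned.
    #include <opencv2/opencv.hpp>
    #include <vector>
    #include <cmath>
    #include <iostream>

    int main()
    {
      cv::Mat image = cv::imread("shelf.png", cv::IMREAD_GRAYSCALE);
      if (image.empty()) return 1;

      // Edge detection, then a probabilistic Hough transform for line segments.
      cv::Mat edges;
      cv::Canny(image, edges, 50, 150);

      std::vector<cv::Vec4i> lines;
      cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 60, 10);

      // Keep roughly horizontal segments; a stack of plates should yield
      // several of them at similar x-ranges but different heights.
      int horizontal = 0;
      for (const cv::Vec4i &l : lines)
      {
        double angle = std::atan2(l[3] - l[1], l[2] - l[0]) * 180.0 / CV_PI;
        if (std::abs(angle) < 10.0)
          ++horizontal;
      }
      std::cout << "near-horizontal line hypotheses: " << horizontal << std::endl;
      return 0;
    }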

Task Difficulty: The task is considered to be challenging, as it is still a hot research topic where general solutions do not exist.

Requirements: Good programming skills in C++ and basic knowledge of CMake. Experience with PCL and OpenCV is preferred. Knowledge of Prolog is a plus.

Expected Results: Currently, the RoboSherlock framework lacks good perception algorithms that can generate object hypotheses in challenging scenarios (clutter and/or occlusion). The expected results are several software components, based on recent advances in cluttered scene analysis, that are able to successfully recognize objects in the scenarios mentioned in the objectives, or in a subset of these.

Contact: Ferenc Bálint-Benczédi

Topic 2: Realistic Grasping using Unreal Engine

Main Objective: The objective of the project is to implement various human-like grasping approaches in a game developed using Unreal Engine.

The game consists of a household environment in which a user has to execute various given tasks, such as cooking a dish, setting the table, or cleaning the dishes. The interaction is done using various sensors that map the user's hands onto the virtual hands in the game.

To make manipulating objects easier, the user should be able to switch at runtime between the types of grasp (pinch, power grasp, precision grip, etc.) he/she would like to use.
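
A minimal sketch of such runtime grasp switching, using standard Unreal Engine input bindings, could look as follows; the class, the enum, and the "SwitchGrasp" input action are hypothetical, and header names vary between engine versions.

    // Sketch: cycling through grasp types at runtime via an input action.
    // Class, enum, and action names ("SwitchGrasp") are hypothetical.
    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Character.h"
    #include "GraspCharacter.generated.h"

    UENUM(BlueprintType)
    enum class EGraspType : uint8
    {
      Pinch,
      PowerGrasp,
      PrecisionGrip
    };

    UCLASS()
    class AGraspCharacter : public ACharacter
    {
      GENERATED_BODY()

    public:
      virtual void SetupPlayerInputComponent(UInputComponent* Input) override
      {
        Super::SetupPlayerInputComponent(Input);
        // "SwitchGrasp" must be bound to a key in the project's input settings.
        Input->BindAction("SwitchGrasp", IE_Pressed, this, &AGraspCharacter::NextGrasp);
      }

    private:
      EGraspType CurrentGrasp = EGraspType::Pinch;

      void NextGrasp()
      {
        // Cycle to the next grasp type; the hand's skeletal animation would
        // then be driven by CurrentGrasp (e.g. via an animation blueprint).
        CurrentGrasp = static_cast<EGraspType>(
            (static_cast<uint8>(CurrentGrasp) + 1) % 3);
      }
    };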

Task Difficulty: The task is considered easy, as it requires more programming skill than algorithmic knowledge.

Requirements: Good programming skills in C++. Good knowledge of the Unreal Engine API. Experience with skeletal control / animations / 3D models in Unreal Engine.

Expected Results: We expect to enhance our currently developed robot learning game with realistic human-like grasping capabilities. These would allow users to interact more realistically with the given virtual environment. The possibility to manipulate objects of various shapes and sizes will allow us to increase the repertoire of tasks executed in the game, and being able to switch between specific grasps will allow us to learn grasping models specific to each manipulated object.

Contact: Andrei Haidu

Topic 3: Plan Library for Autonomous Robots performing Chemical Experiments

Main Objective: The main objective of this topic is to develop, in the Gazebo simulator, a set of plan-based control programs that equip an autonomous mobile robot to perform a set of typical manipulations within a chemistry laboratory. The plan-based control programs resulting at the end of the program will be tested on the real PR2 robot at the Institute for Artificial Intelligence of the University of Bremen, Germany.

The successful candidate will use the domain-specific language of the CRAM toolbox to code plan-based control programs that enable the PR2 robot to perform manipulations like: simple grasping of different containers, screwing and unscrewing the cap of a test tube, pouring a substance from one container into another, operating a centrifuge, etc.

In the first phase of the project, the successful candidate will become familiar with the domain-specific language of the CRAM toolbox and the parameters of the plan-based control programs. This phase will culminate in the student having coded a simple, complete, and fully runnable plan-based control program.

In the second phase of the project, we will decide, together with the successful candidate, on the set of manipulations to implement in order to enable the robot to perform a simple and complete chemical experiment.

In the last phase of the project, the plan-based control programs developed in the second phase will be put together and the complete chemical experiment will be tested and fixed until it runs successfully.

The set of plan-based control programs resulting at the end of the program will form the execution basis of future experiments at IAI that investigate how an autonomous robot can reproduce a chemical experiment represented with Semantic Web tools.

Requirements: The ideal candidate must be comfortable programming in Lisp and familiar with ROS concepts. Familiarity with the Gazebo simulator and the CRAM toolbox is a big plus.

Expected Results: We expect a library of plan-based control programs that enables an autonomous robot to manipulate typical chemistry-laboratory equipment and perform a small class of chemical experiments in the Gazebo simulator.

Contact: Gheorghe Lisca

Topic 4: Multi-modal Interaction in Human-Robot Teams in Outdoor Environments

Main Objective: The objective of this project is to enable external communication and interaction of a human with robots in simulation, so that they can accomplish tasks together. To achieve this goal, the participant will develop an interface that lets a human give instructions to the robots in written form, in spoken language, or by clicking on an object in the environment. Furthermore, the history of the given instructions and the positions of the robots, such as paths and goal poses, have to be stored in an action script in order to comprehend the interpreted task.
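
For illustration, the written-instruction channel of such an interface could be a simple ROS node that records a timestamped instruction history, as sketched below; the topic name /human_instructions is hypothetical, and the real interface would also log robot paths and goal poses into the action script.

    // Sketch: receiving written instructions over a ROS topic and keeping a
    // timestamped history. The topic name "/human_instructions" is hypothetical.
    #include <ros/ros.h>
    #include <std_msgs/String.h>
    #include <vector>
    #include <string>
    #include <utility>

    std::vector<std::pair<ros::Time, std::string>> history;

    void instructionCallback(const std_msgs::String::ConstPtr& msg)
    {
      // Remember when each instruction arrived, for later replay/reasoning.
      history.emplace_back(ros::Time::now(), msg->data);
      ROS_INFO("Instruction %zu: %s", history.size(), msg->data.c_str());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "instruction_logger");
      ros::NodeHandle nh;
      ros::Subscriber sub =
          nh.subscribe("/human_instructions", 10, instructionCallback);
      ros::spin();  // history accumulates every instruction with its timestamp
      return 0;
    }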

Task Difficulty: The task is considered easy and requires good programming skills.

Requirements: The ideal candidate must have good programming skills in C++ and Python and should be familiar with ROS concepts and the Gazebo simulator.

Expected Results: We expect to enhance our currently developed human-robot simulation with an external human instead of a simulated one, in order to ensure realistic human-robot communication. The recorded history of task execution enables understanding of, and reasoning about, the observed behaviour, which could support and improve human-robot interaction in real scenarios.

Contact: Fereshta Yazdani





