====== Google Summer of Code 2018 ======

package in the Python package index ([[https://pypi.python.org/pypi/pracmln|PyPI]]).
  
===== RoboSherlock -- Framework for Cognitive Perception =====

RoboSherlock is a common framework for cognitive perception based on the principle of unstructured information management (UIM). UIM has proven to be a powerful paradigm for scaling intelligent information and question-answering systems towards real-world complexity (e.g. IBM's Watson system). Complexity in UIM is handled by identifying (or hypothesizing) pieces of structured information in unstructured documents, by applying ensembles of experts to annotate these information pieces, and by testing and integrating the isolated annotations into a comprehensive interpretation of the document.

RoboSherlock builds on top of the ROS ecosystem, can wrap almost any existing perception algorithm or framework, and allows the results of these components to be combined easily and coherently. The framework is closely integrated with two of the most popular libraries used in robotic perception, OpenCV and PCL. More details about RoboSherlock can be found on the project [[http://robosherlock.org/|webpage]].
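
To make the ensemble-of-experts idea above more concrete, the following is a minimal, self-contained Python sketch of the UIM principle. It is only an illustration of the concept; RoboSherlock itself is implemented in C++ on top of UIMA, and none of the names below correspond to its actual API.

<code python>
# Toy sketch of the UIM principle described above: independent "experts"
# attach annotations to a shared artifact, and the isolated annotations are
# then integrated into one interpretation. This is NOT RoboSherlock's actual
# C++/UIMA API; all names here are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Artifact:
    """Unstructured input (e.g. one camera frame) plus its annotations."""
    clusters: List[str]
    annotations: List[dict] = field(default_factory=list)


def color_expert(artifact: Artifact) -> None:
    for cluster in artifact.clusters:        # hypothesize a color per cluster
        artifact.annotations.append({"cluster": cluster, "type": "color", "value": "red"})


def shape_expert(artifact: Artifact) -> None:
    for cluster in artifact.clusters:        # hypothesize a shape per cluster
        artifact.annotations.append({"cluster": cluster, "type": "shape", "value": "cylinder"})


def integrate(artifact: Artifact) -> Dict[str, dict]:
    """Merge the isolated annotations into one interpretation per cluster."""
    interpretation: Dict[str, dict] = {}
    for ann in artifact.annotations:
        interpretation.setdefault(ann["cluster"], {})[ann["type"]] = ann["value"]
    return interpretation


frame = Artifact(clusters=["object_0", "object_1"])
for expert in (color_expert, shape_expert):   # the ensemble of experts
    expert(frame)
print(integrate(frame))                       # one interpretation per cluster
</code>

In RoboSherlock itself, the role of the shared artifact is played by UIMA's Common Analysis Structure (CAS), and the experts are C++ annotators (analysis engines).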

===== openEASE -- Web-based Robot Knowledge Service =====

openEASE is a generic knowledge database for collecting and analyzing experiment data. Its foundation is the KnowRob knowledge processing system and ROS, enhanced by reasoning mechanisms and a web interface developed for inspecting comprehensive experiment logs. These logs can be recorded, for example, from complex CRAM plan executions, virtual-reality experiments, or human tracking systems. openEASE offers interfaces both for human researchers who want to visually inspect what happened during a robot experiment and for robots that want to reason about previous task executions in order to improve their behavior.

The openEASE web interface, as well as further information and publication material, can be accessed through its publicly available [[http://www.open-ease.org/|website]]. It is meant to make complex experiment data available to research fields adjacent to robotics and to foster an intuition about robot experience data.

===== RobCoG - Robot Commonsense Games =====

[[http://robcog.org/|RobCoG]] (**Rob**ot **Co**mmonsense **G**ames) is a framework of open-source games and plugins (https://github.com/robcog-iai) built with the Unreal Engine, intended to collect commonsense and naive-physics knowledge and equip robots with it. Various game prototypes are created in which users are asked to execute kitchen-related tasks. During gameplay, the developed game plugins automatically collect symbolic and sub-symbolic data. The automatically annotated data is then stored in the web-based knowledge service [[http://www.open-ease.org/|openEASE]], where robots can access it and reason about it.

The games are split into two categories: (1) VR/full-body-tracking games with physics-based interactions, where data as close as possible to reality is collected; the users are immersed in a virtual environment and asked to perform tasks using natural movements. (2) FPS-style, web-based games, where the users interact with objects using keyboard and mouse; since these games can be run from a browser (open-ease.org/robcogweb), they are easy to crowdsource. The resulting data is less precise and thus less suited for low-level learning, but it remains valuable at a higher level (e.g. the positioning of objects, the order in which actions are executed, etc.).

===== Proposed Topics =====

In the following, we list our proposals for Google Summer of Code topics that contribute to the aforementioned open-source projects.
==== Topic 1: Markov logic networks in Python ====
  
Python. The main objective of this project is to port the computationally heavy parts of the learning and inference algorithms to [[http://www.cython.org|Cython]], an extension to Python that allows static compilation of Python modules to C libraries. Cython makes it possible to add static type declarations to Python code, which can significantly speed up execution.
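
As a rough illustration of what such a port could look like, the snippet below uses Cython's pure-Python mode, in which ordinary type annotations become static C types once the module is compiled (for example with ''cythonize -i''). The function, loop, and module name are illustrative assumptions and are not taken from pracmln.

<code python>
# Rough sketch of Cython's "pure Python" mode; the function below is an
# illustrative stand-in for a heavy inference inner loop, not pracmln code.
# The file stays runnable as plain Python, and compiling it with Cython
# (e.g. `cythonize -i mln_kernels.py`, a hypothetical module name) lets the
# annotated variables be treated as plain C ints and doubles.
import cython


def weighted_sum(weights: list, n: cython.int) -> cython.double:
    total: cython.double = 0.0   # C double after compilation
    i: cython.int                # C int after compilation
    for i in range(n):
        total += weights[i % len(weights)]
    return total


print(weighted_sum([0.5, 1.5, 2.0], 1000))
</code>

Because the file remains valid Python, it keeps running under the plain CPython interpreter, which makes a gradual, module-by-module port feasible.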
  
**Contact:** [[team/daniel_nyga|Daniel Nyga]]
  
==== Topic 2: Flexible perception pipeline manipulation for RoboSherlock ====

{{  :teaching:gsoc:topic1_rs.png?nolink&200|}}

**Main Objective:** RoboSherlock is based on the unstructured information management paradigm and uses the UIMA library at its core. The C++ implementation of this library is limited in several ways. In this topic you will develop a module for flexibly managing perception pipelines, extending the current implementation to enable new modalities and to run pipelines in parallel. This involves implementing an API for pipeline and data handling that is rooted in the UIMA domain.

**Task Difficulty:** The task is considered to be of medium difficulty.

**Requirements:** Good programming skills in C++ and basic knowledge of CMake and ROS. Experience with PCL and OpenCV is preferred.

**Expected Results:** An extension to RoboSherlock that allows splitting and joining pipelines, executing them in parallel, merging results from multiple types of cameras, etc.
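
To give a feel for the intended functionality, here is a small Python sketch of splitting work into per-camera branches, running them in parallel, and merging their results. The actual work in this topic is done in RoboSherlock's C++/UIMA code; every name in the sketch is hypothetical and only illustrates the behavior described under "Expected Results".

<code python>
# Hypothetical sketch of the desired pipeline handling (plain Python, not
# RoboSherlock's C++/UIMA API): split into per-camera branches, run the
# branches in parallel, then join and merge their results.
from concurrent.futures import ThreadPoolExecutor


def run_pipeline(annotators, frame):
    """Run a linear pipeline; each annotator returns a dict of results."""
    results = {}
    for annotator in annotators:
        results.update(annotator(frame))
    return results


def run_split(branches, frames):
    """Run one branch per camera frame in parallel and merge the results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_pipeline, b, f) for b, f in zip(branches, frames)]
        merged = {}
        for future in futures:
            merged.update(future.result())
    return merged


# Example: an RGB-D branch and a thermal branch feeding one merged result.
rgbd_branch = [lambda frame: {"clusters": ["mug", "plate"]}]
thermal_branch = [lambda frame: {"hot_regions": 1}]
print(run_split([rgbd_branch, thermal_branch], [{"cam": "rgbd"}, {"cam": "thermal"}]))
</code>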

**Contact:** [[team/ferenc_balint-benczedi|Ferenc Bálint-Benczédi]]