~~NOTOC~~
=====Open researcher positions=====

== Researcher in the area of Knowledge bases and knowledge acquisition ==

Position code A132/17. Please see [[http://

== Researcher with background in VR programming ==

Position code A133/17. Please see [[http://

=====Theses and Student Jobs=====
If you are looking for a bachelor/master thesis or a job as a student research assistant, you may find some suggestions below.

== Lisp / CRAM support assistant (HiWi) ==

Technical support for the group for Lisp and the CRAM framework. \\
8+ hours per week for up to 1 year (paid).

Requirements:

Contact: [[team:

== Integrating PR2 in the Unreal Game Engine Framework (BA/MA/HiWi) ==

Contact: [[team:

== Integrating Eye Tracking ==

Requirements:
  * Good programming skills in C/C++
  * Gazebo simulator basic tutorials

Contact: [[team:andrei_haidu|Andrei Haidu]]

== Realistic Grasping using Unreal Engine (BA/MA/HiWi) ==

The objective of the project is to implement various human-like grasping approaches in a game developed using [[https://

The game consists of a household environment where a user has to execute various given tasks, such as cooking a dish, setting the table, cleaning the dishes etc. The interaction is done using various sensors to map the user's hands onto the virtual hands in the game.

In order to improve the interaction, the user should be able to switch between the types of grasp (power grasp, precision grip etc.) he/she would like to use.

Requirements:
  * Good programming skills in C++
  * Good knowledge of the Unreal Engine API.
  * Experience with skeletal control / animations / 3D models in Unreal Engine.

Contact: [[team:andrei_haidu|Andrei Haidu]]
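
To illustrate what switching grasp types at runtime could mean, here is a minimal, self-contained C++ sketch. It is not taken from the project and does not use the actual Unreal Engine API; the grasp types and finger-closure values are purely illustrative placeholders.

<code cpp>
#include <iostream>
#include <map>
#include <string>

// Hypothetical grasp types the user could switch between at runtime.
enum class GraspType { PowerGrasp, PrecisionGrip, Pinch };

// Illustrative finger-closure targets (0 = open, 1 = fully closed) per grasp type.
std::map<std::string, double> FingerTargets(GraspType type)
{
    switch (type) {
    case GraspType::PowerGrasp:
        return {{"thumb", 0.9}, {"index", 0.9}, {"middle", 0.9}, {"ring", 0.9}, {"pinky", 0.9}};
    case GraspType::PrecisionGrip:
        return {{"thumb", 0.6}, {"index", 0.6}, {"middle", 0.1}, {"ring", 0.1}, {"pinky", 0.1}};
    case GraspType::Pinch:
        return {{"thumb", 0.7}, {"index", 0.7}, {"middle", 0.0}, {"ring", 0.0}, {"pinky", 0.0}};
    }
    return {};
}

int main()
{
    // The user picks a grasp type at runtime; a hand animation (e.g. a skeletal
    // control node in the game engine) would then blend towards these targets.
    for (const auto& [finger, closure] : FingerTargets(GraspType::PrecisionGrip))
        std::cout << finger << " -> " << closure << "\n";
}
</code>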

== Kitchen Activity Games in a Realistic Robotic Simulator (BA/MA/HiWi) ==

{{ :research:gz_env1.png}}

Developing new activities and improving the current simulation framework done under the [[http://gazebosim.org/|Gazebo]] simulator.

Requirements:
  * Good programming skills in C/C++
  * Basic physics/rendering engine knowledge
  * Gazebo simulator basic tutorials

Contact: [[team:
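
Extensions to Gazebo are typically implemented as plugins. The following is a minimal world-plugin sketch, written against the Gazebo 8+ C++ API (an assumption about the simulator version); the model name "cup" is a made-up example and not part of the existing framework.

<code cpp>
#include <functional>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo
{
  // Minimal world plugin that inspects the scene at every simulation step.
  class ActivityPlugin : public WorldPlugin
  {
  public:
    void Load(physics::WorldPtr _world, sdf::ElementPtr /*_sdf*/) override
    {
      this->world = _world;
      this->updateConn = event::Events::ConnectWorldUpdateBegin(
          std::bind(&ActivityPlugin::OnUpdate, this));
    }

  private:
    void OnUpdate()
    {
      // Example check: report the pose of a (hypothetical) model named "cup".
      physics::ModelPtr cup = this->world->ModelByName("cup");
      if (cup)
        gzmsg << "cup at " << cup->WorldPose().Pos() << "\n";
    }

    physics::WorldPtr world;
    event::ConnectionPtr updateConn;
  };

  GZ_REGISTER_WORLD_PLUGIN(ActivityPlugin)
}
</code>

The plugin would be compiled as a shared library and referenced from the world SDF file, so new activities can be added without touching the simulator itself.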

== Hand Skeleton Tracking Using Two Leap Motion Devices ==

{{ :research:leap_motion.jpg}}

Improving hand skeleton tracking by combining the data of two Leap Motion devices. \\
The tracked hand can then be used as input for the Kitchen Activity Games framework.

Requirements:
  * Good programming skills in C/C++

Contact: [[team:
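
For orientation, the sketch below reads hand data from a single device using the Leap Motion C++ SDK (v2-style API, which is an assumption about the SDK in use); fusing the measurements of two devices into one consistent skeleton would be the actual topic of the work.

<code cpp>
#include <iostream>
#include "Leap.h"

int main()
{
    Leap::Controller controller;

    // Busy-wait until a device is connected (kept simple for the sketch).
    while (!controller.isConnected()) {}

    // Grab one frame and print basic hand information.
    Leap::Frame frame = controller.frame();
    for (const Leap::Hand& hand : frame.hands()) {
        Leap::Vector palm = hand.palmPosition();
        std::cout << (hand.isLeft() ? "left" : "right")
                  << " palm at (" << palm.x << ", " << palm.y << ", " << palm.z
                  << "), confidence " << hand.confidence() << "\n";
    }
}
</code>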

== Fluid Simulation in Gazebo (BA/MA) ==

Currently there is an [[http://

The computational method used for the fluid simulation is SPH (Smoothed Particle Hydrodynamics); however, newer and better methods based on SPH are available and should be implemented (PCISPH/

The interaction between the fluid and the rigid objects is a naive one: the forces and torques are applied only from the particle collisions (not taking into account pressure and other forces).

Another topic would be the visualization of the fluid, which is currently done by rendering every particle. For the rendering engine [[http://

Here is a [[https://

Requirements:
  * Good programming skills in C/C++
  * Interest in fluid simulation
  * Basic physics/rendering engine knowledge
  * Gazebo simulator

Contact: [[team:
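
To give a feel for the SPH machinery involved, here is a small generic C++ sketch of the standard density summation with the poly6 kernel (Mueller et al. 2003). It is not the existing plugin's code, and the particle masses and spacing are arbitrary; PCISPH would add an iterative pressure-correction loop on top of such a density estimate.

<code cpp>
#include <cmath>
#include <iostream>
#include <vector>

struct Particle { double x, y, z, mass, density; };

// Standard poly6 smoothing kernel with support radius h.
double Poly6(double r, double h)
{
    if (r < 0.0 || r > h) return 0.0;
    const double kPi = 3.14159265358979323846;
    const double c = 315.0 / (64.0 * kPi * std::pow(h, 9));
    const double d = h * h - r * r;
    return c * d * d * d;
}

// SPH density summation: rho_i = sum_j m_j * W(|x_i - x_j|, h).
void ComputeDensities(std::vector<Particle>& particles, double h)
{
    for (auto& pi : particles) {
        pi.density = 0.0;
        for (const auto& pj : particles) {
            const double dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
            pi.density += pj.mass * Poly6(std::sqrt(dx * dx + dy * dy + dz * dz), h);
        }
    }
}

int main()
{
    // Three arbitrary particles within one smoothing radius of each other.
    std::vector<Particle> ps = {{0, 0, 0, 0.02, 0}, {0.05, 0, 0, 0.02, 0}, {0, 0.05, 0, 0.02, 0}};
    ComputeDensities(ps, 0.1);
    for (const auto& p : ps) std::cout << "density: " << p.density << "\n";
}
</code>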

== Automated sensor calibration toolkit (BA/MA) ==

Computer vision is an important part of autonomous robots. For robots, the image sensors are the main source of information about the surrounding world. Each camera is different, even if they come from the same production line. For computer vision, especially for robots manipulating their environment, an accurate calibration of these sensors is therefore essential.

The topic of this thesis is to develop an automated system for calibrating cameras, especially RGB-D cameras like the Kinect v2.

The system should:
  * be independent of the camera type
  * estimate intrinsic and extrinsic parameters
  * calibrate depth images (case of RGB-D)
  * integrate capabilities from Halcon [1]
  * operate autonomously

Requirements:
  * Good programming skills in Python and C/C++
  * ROS, OpenCV

[1] http://

Contact: [[team:
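
As a rough illustration of the intrinsic-calibration step, the sketch below detects a chessboard target in a set of images and runs OpenCV's calibrateCamera. The board geometry, square size and image file names are assumptions, and the depth calibration needed for RGB-D sensors is not covered here.

<code cpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);   // inner chessboard corners (assumption)
    const float squareSize = 0.025f;  // 25 mm squares (assumption)

    // Reference 3D corner positions on the planar board.
    std::vector<cv::Point3f> board;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            board.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // Placeholder file names for the recorded calibration views.
    for (int i = 0; i < 20; ++i) {
        cv::Mat img = cv::imread("view_" + std::to_string(i) + ".png", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, boardSize, corners)) {
            imagePoints.push_back(corners);
            objectPoints.push_back(board);
        }
    }
    if (imagePoints.empty()) return 1;

    cv::Mat K, dist;                    // intrinsics and distortion coefficients
    std::vector<cv::Mat> rvecs, tvecs;  // per-view extrinsics
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\nK = " << K << std::endl;
}
</code>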

== On-the-fly 3D CAD model creation (MA) ==

Create models during runtime for unknown textured objects based on depth and color information. Track the object and update the model with more detailed information.

Requirements:
  * Good programming skills in C/C++
  * strong background in computer vision
  * ROS, OpenCV, PCL

Contact: [[team:
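
A common building block for this kind of on-the-fly modeling is registering each new partial view against the model built so far. The sketch below uses PCL's ICP for that single step; the file names are placeholders, and a real pipeline would add filtering, keyframe selection and surface reconstruction.

<code cpp>
#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/registration/icp.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

int main()
{
    Cloud::Ptr model(new Cloud), view(new Cloud);
    // Placeholder files: the accumulated model and a newly captured partial view.
    if (pcl::io::loadPCDFile("model.pcd", *model) < 0 ||
        pcl::io::loadPCDFile("view.pcd", *view) < 0)
        return 1;

    // Register the new view against the current model.
    pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
    icp.setInputSource(view);
    icp.setInputTarget(model);

    Cloud aligned;
    icp.align(aligned);

    if (icp.hasConverged()) {
        *model += aligned;  // grow the model with the registered view
        std::cout << "fitness: " << icp.getFitnessScore()
                  << ", model size: " << model->size() << "\n";
    }
}
</code>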

== Simulation of a robot's belief state to support perception (MA) ==

Create a simulation environment that represents the robot's current belief state and can be updated frequently. Use off-screen rendering to investigate the affordances these objects possess, in order to support segmentation.

Requirements:
  * Good programming skills in C/C++
  * strong background in computer vision
  * Gazebo, OpenCV, PCL

Contact: [[team:

== Multi-expert segmentation of cluttered and occluded scenes ==

Objects in a human environment are usually found in challenging scenes. They can be stacked upon each other, touching or occluding one another, and they can be found in drawers, cupboards, refrigerators and so on. In order to execute a task, a personal robot assistant needs to detect these objects and recognize them. In this thesis a multi-modal approach to interpreting cluttered scenes is going to be investigated.

Requirements:
  * Good programming skills in C/C++
  * strong background in 3D vision
  * basic knowledge of ROS, OpenCV, PCL

Contact: [[team:
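
As an example of one such segmentation "expert", the sketch below runs a plain Euclidean clustering step with PCL to split a scene cloud into object hypotheses. The thresholds and the input file are placeholders; the thesis itself would combine several such cues instead of relying on a single one.

<code cpp>
#include <iostream>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr scene(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile("scene.pcd", *scene) < 0)  // placeholder input
        return 1;

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(scene);

    // One simple segmentation "expert": distance-based clustering.
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);  // 2 cm neighbour distance (placeholder)
    ec.setMinClusterSize(100);
    ec.setMaxClusterSize(25000);
    ec.setSearchMethod(tree);
    ec.setInputCloud(scene);

    std::vector<pcl::PointIndices> clusters;
    ec.extract(clusters);

    std::cout << "found " << clusters.size() << " object hypotheses\n";
}
</code>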