If you are looking for a bachelor/master thesis or a student (HiWi) job, have a look at the open topics below.
+ | |||
+ | |||
+ | == HiWi-Position: | ||
+ | |||
+ | When dealing with real-world robot tasks, simulation that is close to reality is key to test behavior-driven, | ||
+ | |||
+ | Requirements: | ||
+ | * Experience in ROS | ||
+ | * Passion for Robotics | ||
+ | * Ideally programming skills in Lisp, Prolog, and Java | ||
+ | |||
+ | Contact: [[team: | ||
== Kitchen Activity Games in a Realistic Robotic Simulator ==

Developing …

Requirements:
  * Good programming skills in C/C++
  * Basic physics/…
  * Gazebo simulator basic tutorials

Contact: [[team:…]]
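Since the activity-games framework runs in Gazebo, a natural first exercise is a world plugin hooked into the simulation loop. The sketch below assumes the classic Gazebo C++ plugin API (pre-Gazebo 8 naming); the plugin class and the commented pose query are illustrative, not part of the actual framework.

<code cpp>
#include <functional>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo
{
  // Hypothetical world plugin: called once per physics iteration, where
  // model poses could be inspected to record kitchen activities.
  class ActivityLoggerPlugin : public WorldPlugin
  {
    public: void Load(physics::WorldPtr _world, sdf::ElementPtr /*_sdf*/)
    {
      this->world = _world;
      // Attach OnUpdate to the beginning of every world update step.
      this->updateConn = event::Events::ConnectWorldUpdateBegin(
          std::bind(&ActivityLoggerPlugin::OnUpdate, this));
    }

    private: void OnUpdate()
    {
      // Example query (classic API):
      // this->world->GetModel("mug")->GetWorldPose();
    }

    private: physics::WorldPtr world;
    private: event::ConnectionPtr updateConn;
  };

  GZ_REGISTER_WORLD_PLUGIN(ActivityLoggerPlugin)
}
</code>

Compiled as a shared library and referenced from a world SDF file, this is the usual entry point the Gazebo plugin tutorials start from.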
+ | |||
+ | == Integrating Eye Tracking in the Kitchen Activity Games (BA/MA)== | ||
+ | {{ : | ||
+ | |||
+ | Integrating the eye tracker | ||
Requirements: | Requirements: | ||
- | | + | * Good programming skills in C/C++ |
- | | + | * Gazebo simulator basic tutorials |
- | * Experience with CUDA is helpful | + | |
- | Contact: [[team:jan-hendrik_worch|Jan-Hendrik Worch]] | + | Contact: [[team:andrei_haidu|Andrei Haidu]] |
== Hand Skeleton Tracking Using Two Leap Motion Devices (BA/MA) ==

Improving … The tracked hand can then be used as input for the Kitchen Activity Games framework.

Requirements:
  * Good programming skills in C/C++

Contact: [[team:…]]
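The Leap Motion C++ SDK delivers per-frame hand data through a listener callback. Below is a minimal sketch of one such listener, assuming SDK v2 (the version with skeletal tracking); calibrating and fusing the streams of two devices into a single hand model is the actual thesis work and is only hinted at in the comments.

<code cpp>
#include <iostream>
#include "Leap.h"

// Minimal Leap Motion listener printing the palm position of each hand.
// With two devices, two such streams would have to be registered to a
// common coordinate frame and fused into one skeleton estimate.
class HandListener : public Leap::Listener {
 public:
  void onFrame(const Leap::Controller& controller) override {
    const Leap::Frame frame = controller.frame();
    for (const Leap::Hand& hand : frame.hands()) {
      const Leap::Vector p = hand.palmPosition();  // millimeters
      std::cout << (hand.isLeft() ? "left" : "right")
                << " palm at " << p.x << ", " << p.y << ", " << p.z << "\n";
    }
  }
};

int main() {
  HandListener listener;
  Leap::Controller controller;
  controller.addListener(listener);  // callbacks arrive on an SDK thread
  std::cin.get();                    // run until Enter is pressed
  controller.removeListener(listener);
}
</code>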
+ | |||
+ | == Fluid Simulation in Gazebo | ||
+ | {{ : | ||
+ | |||
+ | [[http:// | ||
+ | |||
+ | Currently there is an [[http:// | ||
+ | |||
+ | The computational method for the fluid simulation is SPH (Smoothed-particle Dynamics), however newer and better methods based on SPH are currently present | ||
+ | and should be implemented (PCISPH/ | ||
+ | |||
+ | The interaction between the fluid and the rigid objects | ||
+ | |||
+ | Another topic would be the visualization of the fluid, currently is done by rendering every particle. For the rendering engine [[http:// | ||
+ | |||
+ | Here is a [[https:// | ||
+ | |||
+ | Requirements: | ||
+ | * Good programming skills in C/C++ | ||
+ | * Interest in Fluid simulation | ||
+ | * Basic physics/ | ||
+ | * Gazebo simulator and Fluidix basic tutorials | ||
+ | |||
+ | Contact: [[team: | ||
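As referenced above, the core of SPH is estimating fluid density by summing a smoothing kernel over nearby particles. The sketch below uses the standard poly6 kernel (Müller et al. 2003) with a naive O(n²) neighbor loop; a real Fluidix/Gazebo implementation would use a GPU spatial grid, and the parameter values are assumptions.

<code cpp>
#include <cmath>
#include <vector>

struct Particle { double x, y, z, density; };

// SPH density estimation with the poly6 kernel:
// W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3  for r < h.
// h is the smoothing radius, mass the per-particle mass.
void computeDensities(std::vector<Particle>& ps, double h, double mass)
{
  const double h2 = h * h;
  const double poly6 = 315.0 / (64.0 * M_PI * std::pow(h, 9));
  for (auto& pi : ps) {
    pi.density = 0.0;
    for (const auto& pj : ps) {   // naive O(n^2); use a grid in practice
      const double dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
      const double r2 = dx * dx + dy * dy + dz * dz;
      if (r2 < h2) {
        const double diff = h2 - r2;
        pi.density += mass * poly6 * diff * diff * diff;
      }
    }
  }
}
</code>

PCISPH-style variants keep this density step but iteratively correct pressure so the fluid stays nearly incompressible.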
+ | |||
+ | |||
+ | == Automated sensor calibration toolkit (BA/MA)== | ||
+ | |||
+ | Computer vision is an important part of autonomous robots. For robots the image sensors are the main source of information of the surrounding world. Each camera is different, even if they are from the same production line. For computer vision, especially for robots manipulating their environment, | ||
+ | |||
+ | The topic for this thesis is to develop an automated system for calibrating cameras, especially RGB-D cameras like the Kinect v2. | ||
+ | |||
+ | {{ : | ||
+ | The system should: | ||
+ | * be independent of the camera type | ||
+ | * estimate intrinsic and extrinsic parameters | ||
+ | * calibrate depth images (case of RGB-D) | ||
+ | * integrate capabilities from Halcon [1] | ||
+ | * operate autonomously | ||
+ | |||
+ | Requirements: | ||
+ | * Good programming skills in Python and C/C++ | ||
+ | * ROS, OpenCV | ||
+ | |||
+ | [1] http:// | ||
+ | |||
+ | Contact: [[team: | ||
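For the intrinsic stage, OpenCV's chessboard calibration is the obvious building block. The sketch below shows that core in C++; the filenames (view_00.png, …) and board geometry are hypothetical. The thesis toolkit would wrap this in autonomous capture and add the depth, extrinsic, and Halcon stages.

<code cpp>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
  const cv::Size boardSize(9, 6);   // inner chessboard corners (assumption)
  const float squareSize = 0.025f;  // 25 mm squares (assumption)

  // Model points of one board view on the z = 0 plane.
  std::vector<cv::Point3f> board;
  for (int y = 0; y < boardSize.height; ++y)
    for (int x = 0; x < boardSize.width; ++x)
      board.emplace_back(x * squareSize, y * squareSize, 0.f);

  std::vector<std::vector<cv::Point3f>> objectPoints;
  std::vector<std::vector<cv::Point2f>> imagePoints;
  cv::Size imageSize;

  for (int i = 0; i < 20; ++i) {    // e.g. 20 captured views
    cv::Mat img = cv::imread(cv::format("view_%02d.png", i),
                             cv::IMREAD_GRAYSCALE);
    if (img.empty()) continue;
    imageSize = img.size();
    std::vector<cv::Point2f> corners;
    if (cv::findChessboardCorners(img, boardSize, corners)) {
      cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
          cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                           30, 0.01));
      imagePoints.push_back(corners);
      objectPoints.push_back(board);
    }
  }

  cv::Mat K, dist;                   // intrinsics and distortion
  std::vector<cv::Mat> rvecs, tvecs; // per-view extrinsics
  double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                   K, dist, rvecs, tvecs);
  std::cout << "RMS reprojection error: " << rms << "\nK = " << K << "\n";
}
</code>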
+ | |||
+ | == On-the-fly 3D CAD model creation (MA)== | ||
+ | |||
+ | Create models during runtime for unknown textured objets based on depth and color information. Track the object and update the model with more detailed information, completing it's 3D model from multiple views improving redetection. Using the robots manipulator pick up the object and complete the model by viewing it from multiple viewpoints. | ||
+ | |||
+ | Requirements: | ||
+ | * Good programming skills in C/C++ | ||
+ | * strong background in computer vision | ||
+ | * ROS, OpenCV, PCL | ||
+ | |||
+ | Contact: [[team: | ||
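One plausible building block for incremental model completion is registering each new partial view against the model built so far. The sketch below does this with PCL's ICP and merges the aligned points; the function name and the 5 cm correspondence gate are assumptions, not a prescribed design.

<code cpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

typedef pcl::PointXYZRGB PointT;

// Align a new partial view to the current model with ICP and, if the
// registration converged, grow the model with the aligned points.
// A full pipeline would add filtering, loop closure, and meshing.
void integrateView(pcl::PointCloud<PointT>::Ptr model,
                   pcl::PointCloud<PointT>::Ptr view)
{
  pcl::IterativeClosestPoint<PointT, PointT> icp;
  icp.setInputSource(view);
  icp.setInputTarget(model);
  icp.setMaxCorrespondenceDistance(0.05);  // 5 cm gating (assumption)

  pcl::PointCloud<PointT> aligned;
  icp.align(aligned);
  if (icp.hasConverged())
    *model += aligned;  // merge the registered view into the model
}
</code>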
+ | |||
+ | == Simulation of a robots belief state to support perception(MA) == | ||
+ | |||
+ | Create a simulation environment that represents the robots current belief state and can be updated frequently. Use off-screen rendering to investigate the affordances these objects possess, in order to support segmentation, | ||
+ | |||
+ | Requirements: | ||
+ | * Good programming skills in C/C++ | ||
+ | * strong background in computer vision | ||
+ | * Gazebo, OpenCV, PCL | ||
+ | |||
+ | Contact: [[team: | ||
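One way such a belief-state simulation can support perception is to render the believed scene off-screen and diff it against live sensor data. The helper below sketches that comparison for depth images; the tolerance value is an arbitrary assumption.

<code cpp>
#include <opencv2/opencv.hpp>

// Compare a depth image rendered from the robot's belief state with the
// live sensor depth; large disagreement marks regions where the belief
// is stale and perception should re-examine the scene.
cv::Mat beliefMismatch(const cv::Mat& renderedDepth,  // CV_32F, meters
                       const cv::Mat& sensorDepth,    // CV_32F, meters
                       float tolerance = 0.02f)       // 2 cm (assumption)
{
  cv::Mat diff;
  cv::absdiff(renderedDepth, sensorDepth, diff);
  cv::Mat mask = diff > tolerance;  // 255 where belief and world disagree
  return mask;
}
</code>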
+ | |||
+ | == Multi-expert segmentation of cluttered and occluded scenes == | ||
+ | |||
+ | Objects in a human environment are usually found in challenging scenes. They can be stacked upon eachother, touching or occluding, can be found in drawers, cupboards, refrigerators and so on. A personal robot assistant in order to execute a task, needs to detect these objects and recognize them. In this thesis a multi-modal approach to interpreting cluttered scenes is going to be investigated, | ||
- | Contact: [[team: | + | Requirements: |
+ | * Good programming skills in C/C++ | ||
+ | * strong background in 3D vision | ||
+ | * basic knowledge of ROS, OpenCV, PCL | ||
+ | Contact: [[team: |
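As an illustration of a single expert, the sketch below runs the common PCL tabletop pipeline: RANSAC plane removal followed by Euclidean clustering. The thresholds are placeholders, and combining such experts into a multi-modal interpretation is the open part of the thesis.

<code cpp>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>

typedef pcl::PointXYZ PointT;

std::vector<pcl::PointIndices> clusterObjects(pcl::PointCloud<PointT>::Ptr cloud)
{
  // 1. Find the dominant support plane (e.g. a table top) with RANSAC.
  pcl::SACSegmentation<PointT> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // 1 cm (placeholder)
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coeffs);

  // 2. Remove the plane, keep candidate object points.
  pcl::PointCloud<PointT>::Ptr objects(new pcl::PointCloud<PointT>);
  pcl::ExtractIndices<PointT> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);
  extract.filter(*objects);

  // 3. Cluster the remainder into object hypotheses.
  pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<PointT> ec;
  ec.setClusterTolerance(0.02);  // 2 cm (placeholder)
  ec.setMinClusterSize(100);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);
  return clusters;
}
</code>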