== Depth-Adaptive Superpixels (BA/MA) ==

We are currently investigating a new set of sensors (RGB-D-T), a combination of a Kinect with a thermal camera. Within this project we want to enhance Depth-Adaptive Superpixels (DASP) to make use of the thermal sensor data. Depth-Adaptive Superpixels oversegment an image while taking the depth value of each pixel into account.

Since the current implementation of DASP does not perform well on high-resolution images, there are several options for a project in this field, such as reimplementing DASP using CUDA or investigating how the thermal data can be integrated.
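
For illustration only, the following sketch shows where the thermal channel could enter a DASP-style clustering: the usual color, spatial and depth terms of the pixel-to-seed distance are extended by a thermal term. The ''Pixel'' structure and the weights are our own assumptions, not part of the existing DASP code.

<code cpp>
#include <cmath>

// Hypothetical per-pixel measurement from the RGB-D-T setup.
struct Pixel {
    float l, a, b;   // color in CIELAB
    float depth;     // metres, from the Kinect
    float thermal;   // temperature reading from the thermal camera
    float x, y;      // image coordinates
};

// Distance between a pixel and a superpixel seed, in the spirit of
// depth-adaptive superpixels: color + spatial + depth terms, plus an
// additional thermal term.  The weights w_* are free parameters that
// a thesis would have to tune or learn.
float daspThermalDistance(const Pixel& p, const Pixel& seed,
                          float w_color, float w_spatial,
                          float w_depth, float w_thermal)
{
    const float dc = (p.l - seed.l) * (p.l - seed.l)
                   + (p.a - seed.a) * (p.a - seed.a)
                   + (p.b - seed.b) * (p.b - seed.b);
    const float ds = (p.x - seed.x) * (p.x - seed.x)
                   + (p.y - seed.y) * (p.y - seed.y);
    const float dd = (p.depth - seed.depth) * (p.depth - seed.depth);
    const float dt = (p.thermal - seed.thermal) * (p.thermal - seed.thermal);
    return std::sqrt(w_color * dc + w_spatial * ds + w_depth * dd + w_thermal * dt);
}
</code>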

Requirements:
  * Basic knowledge of image processing
  * Good programming skills in C/C++
  * Experience with CUDA is helpful

Contact: [[team:

== Physical Simulation of Humans (BA/MA) ==

For tracking people, the use of particle filters is a common approach. However, the quality of those filters heavily depends on the way particles are spread. In this thesis, a library for the physical simulation of a human model is to be implemented, so that particles can be propagated in a physically plausible way.
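
A minimal sketch of how such a library could be used from the tracking side is given below; the ''HumanModel'' and ''HumanState'' types are hypothetical placeholders for what the thesis would actually provide (e.g. on top of Bullet), and only the particle propagation step is shown.

<code cpp>
#include <random>
#include <vector>

// Hypothetical state of a tracked human (e.g. joint angles).
struct HumanState {
    std::vector<double> jointAngles;
};

struct Particle {
    HumanState state;
    double weight;
};

// Hypothetical interface to the physical simulation library.
class HumanModel {
public:
    HumanState step(const HumanState& in, double dt) {
        // Placeholder: a real implementation would advance a simulated
        // human body (e.g. in Bullet) by dt, enforcing joint limits,
        // gravity and collision constraints.
        (void)dt;
        return in;
    }
};

// Propagation step of a particle filter: instead of spreading particles
// with plain Gaussian noise only, each particle is pushed through the
// physical simulation so that implausible states are avoided.
void propagate(std::vector<Particle>& particles, HumanModel& model,
               double dt, double noiseStdDev, std::mt19937& rng)
{
    std::normal_distribution<double> noise(0.0, noiseStdDev);
    for (Particle& p : particles) {
        for (double& q : p.state.jointAngles)
            q += noise(rng);               // exploration noise
        p.state = model.step(p.state, dt); // physics keeps the state plausible
    }
}
</code>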

Requirements:
  * Good programming skills in C/C++
  * Optional: Experience in working with physics libraries such as Bullet

Contact: [[team:

== Kitchen Activity Games in a Realistic Robotic Simulator (BA/MA) ==

Contact: [[team:

== Automated sensor calibration toolkit (MA) ==

Computer vision is an important part of autonomous robots: for a robot, the image sensors are the main source of information about the surrounding world. Each camera is different, even if they come from the same production line. For computer vision, and especially for robots manipulating their environment, an accurate calibration of these sensors is therefore essential.

The topic of this master thesis is to develop an automated system for calibrating cameras, especially RGB-D cameras such as the Kinect v2.

The system should:
  * be independent of the camera type
  * estimate intrinsic and extrinsic parameters
  * support depth calibration (in the case of RGB-D cameras)
  * integrate capabilities from Halcon [1]
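
As a rough sketch of one building block, the snippet below estimates the intrinsics of a single color camera from chessboard views with OpenCV; the function name and parameters are our own, and the cross-sensor extrinsics, depth calibration and automation aspects of the thesis are not covered by it.

<code cpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Estimate the intrinsic matrix and distortion coefficients of a single
// color camera from views of a chessboard.  'images' is assumed to be a
// non-empty set of views in which the board is visible.
cv::Mat calibrateIntrinsics(const std::vector<cv::Mat>& images,
                            cv::Size boardSize, float squareSize,
                            cv::Mat& distCoeffs, double& rmsError)
{
    // 3D coordinates of the chessboard corners in the board frame (z = 0).
    std::vector<cv::Point3f> boardPoints;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            boardPoints.emplace_back(c * squareSize, r * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    for (const cv::Mat& img : images) {
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, boardSize, corners)) {
            imagePoints.push_back(corners);
            objectPoints.push_back(boardPoints);
        }
    }

    cv::Mat cameraMatrix;
    std::vector<cv::Mat> rvecs, tvecs;  // per-view poses w.r.t. the board
    rmsError = cv::calibrateCamera(objectPoints, imagePoints,
                                   images.front().size(),
                                   cameraMatrix, distCoeffs, rvecs, tvecs);
    return cameraMatrix;
}
</code>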

Requirements:
  * Good programming skills in Python and C/C++
  * ROS, OpenCV

[1] http://

Contact: [[team:

== On-the-fly 3D CAD model creation (MA) ==

Create models of unknown textured objects at runtime, based on depth and color information. Track the object and update the model with more detailed information as new views of it are observed.
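
A minimal sketch of the model-update idea is shown below, using pairwise ICP from PCL to register each new view against the current model and merge it in; a real system would likely need a more robust integration scheme, and all names and thresholds here are placeholders.

<code cpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>
#include <pcl/filters/voxel_grid.h>

using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

// Align a newly captured RGB-D view to the current object model with ICP,
// merge it in and thin the result, so that the model is refined while the
// object is being tracked.
void integrateView(Cloud::Ptr model, Cloud::ConstPtr newView)
{
    pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
    icp.setInputSource(newView);
    icp.setInputTarget(model);

    Cloud aligned;
    icp.align(aligned);                 // register the new view to the model
    if (!icp.hasConverged())
        return;                         // keep the old model if alignment fails

    *model += aligned;                  // merge the aligned points

    // Downsample so the model does not grow without bound.
    pcl::VoxelGrid<pcl::PointXYZRGB> grid;
    grid.setInputCloud(model);
    grid.setLeafSize(0.003f, 0.003f, 0.003f);
    Cloud filtered;
    grid.filter(filtered);
    *model = filtered;
}
</code>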

Requirements:
  * Good programming skills in C/C++
  * strong background in computer vision
  * ROS, OpenCV, PCL

Contact: [[team:

== Simulation of a robot's belief state to support perception (MA) ==

Create a simulation environment that represents the robot's current belief state and can be updated frequently. Use off-screen rendering to investigate the affordances of the objects in the belief state, in order to support segmentation and other perception tasks.
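
The off-screen rendering itself (e.g. in Gazebo or with OpenGL) is not shown here; the sketch below only illustrates, with names of our own choosing, how a depth image rendered from the belief state could be compared against the observed depth image to point segmentation at unexpected regions.

<code cpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Compare a depth image rendered off-screen from the belief state with the
// depth image actually observed by the robot.  Pixels whose depth deviates
// strongly from the expectation are candidates for new or moved objects and
// can be handed to segmentation.  Depth images are assumed to be CV_32F in
// metres; the threshold is a placeholder.
cv::Mat unexpectedRegionMask(const cv::Mat& renderedDepth,
                             const cv::Mat& observedDepth,
                             float maxDeviation = 0.02f)
{
    CV_Assert(renderedDepth.size() == observedDepth.size());
    cv::Mat diff;
    cv::absdiff(renderedDepth, observedDepth, diff);

    cv::Mat mask;
    cv::threshold(diff, mask, maxDeviation, 255.0, cv::THRESH_BINARY);
    mask.convertTo(mask, CV_8U);

    // Remove isolated pixels caused by sensor noise.
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));
    return mask;
}
</code>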

Requirements:
  * Good programming skills in C/C++
  * strong background in computer vision
  * Gazebo, OpenCV, PCL

Contact: [[team:

== Multi-expert segmentation of cluttered and occluded scenes ==

Objects in a human environment are usually found in challenging scenes: they can be stacked on top of each other, touching or occluding one another, and may be located in drawers, cupboards, refrigerators and so on. In order to execute a task, a personal robot assistant needs to detect and recognize these objects. In this thesis, a multi-modal approach to interpreting cluttered scenes will be investigated, combining multiple segmentation experts.
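
For reference, the sketch below shows what one simple expert could look like: a standard PCL pipeline that removes the supporting plane and clusters the remaining points into object hypotheses. It only works for free-standing objects, which is exactly why it would have to be combined with other experts for the cluttered cases described above; names and thresholds are our own.

<code cpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// One simple "expert": remove the supporting plane (table, shelf) with
// RANSAC and split the remaining points into Euclidean clusters, each of
// which is an object hypothesis.
std::vector<pcl::PointIndices> euclideanExpert(Cloud::ConstPtr scene)
{
    // Fit and remove the dominant plane.
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);
    seg.setInputCloud(scene);

    pcl::PointIndices::Ptr planeInliers(new pcl::PointIndices);
    pcl::ModelCoefficients coefficients;
    seg.segment(*planeInliers, coefficients);

    Cloud::Ptr objects(new Cloud);
    pcl::ExtractIndices<pcl::PointXYZ> extract;
    extract.setInputCloud(scene);
    extract.setIndices(planeInliers);
    extract.setNegative(true);          // keep everything that is not the plane
    extract.filter(*objects);

    // Group the remaining points into object candidates.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(objects);

    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);       // 2 cm
    ec.setMinClusterSize(100);
    ec.setSearchMethod(tree);
    ec.setInputCloud(objects);
    ec.extract(clusters);
    return clusters;
}
</code>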

Requirements:
  * Good programming skills in C/C++
  * strong background in 3D vision
  * basic knowledge of ROS, OpenCV, PCL

Contact: [[team: