~~NOTOC~~

=====Open researcher positions=====

=====Theses and Student Jobs=====
If you are looking for a bachelor or master thesis, or for a job as a student research assistant, you may find an interesting offer on this page.
== Lisp / CRAM support assistant (HiWi) ==

Technical support for the group for Lisp and the CRAM framework. \\
8+ hours per week for up to 1 year (paid).

Requirements:
  * Good programming skills in Lisp
  * Basic ROS knowledge

The student will be introduced to the CRAM framework, a robot programming framework written in Lisp, at the beginning of the job. The student will then be responsible for assisting people who are not familiar with the framework, explaining the parts they don't understand and pointing them to the relevant documentation.

Contact: [[team:

== GPU-based Parallelization of Numerical Optimization Techniques (BA/MA/HiWi) ==

In the field of Machine Learning, numerical optimization techniques play a focal role. However, as models grow larger, traditional implementations on single-core CPUs suffer from sequential execution, causing a severe slow-down. In this thesis, state-of-the-art GPU frameworks are to be investigated for parallelizing such optimization techniques (see the sketch below).

Requirements:
  * Skills in GPU programming
  * Good programming skills in Python and C/C++

Contact: [[team:
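As a hedged illustration (my sketch, not part of the posting): the loop below is batch gradient descent for least squares, written so that every step is a dense array operation. This data-parallel structure is what maps well onto a GPU; for instance, CuPy is a drop-in replacement for numpy that runs the same array code on CUDA hardware.

<code python>
# Vectorized gradient descent on ||Xw - y||^2. Every step is one
# matrix-vector pass, i.e. an operation a GPU backend can parallelize.
import numpy as np   # swapping in CuPy here moves the loop to the GPU

def fit_least_squares(X, y, lr=1e-5, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y)   # gradient of the squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))
w_true = rng.normal(size=50)
y = X @ w_true + 0.01 * rng.normal(size=10_000)
print(np.allclose(fit_least_squares(X, y), w_true, atol=0.1))  # True
</code>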
+ | --></ | ||
- | Markov Logic Networks (MLNs) combine the expressive power of first-order logic and probabilistic graphical models. In the past, they have been successfully applied to the problem of semantically interpreting and completing natural-language instructions from the web. State-of-the-art learning techniques mostly operate in batch mode, i.e. all training instances need to be known in the beginning of the learning process. In context of this thesis, online learning methods for MLNs are to be investigated, | ||
- | Requirements: | + | == 3D Model / Material / Lightning Developer |
- | * Experience in Machine Learning. | + | {{ : |
- | * Experience with statistical relational learning | + | |
- | * Good programming skills in Python. | + | |
- | Contact: [[team: | + | Developing and improving existing 3D models in Blender / Maya (or other). Importing the models in Unreal Engine, where the Materials and Lightning should be improved to be close as possible to realism. |
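For orientation (my sketch, not the group's code): the gradient of an MLN's log-likelihood with respect to a formula weight is the number of true groundings of that formula observed in the training world minus its expected number under the current model, so an online learner can apply exactly this difference per incoming instance. The count vectors below are made up; in a real system the expected counts come from approximate inference (e.g. MC-SAT, or the counts in the MAP state for voted-perceptron-style learning).

<code python>
# One stochastic-gradient step on MLN formula weights per training instance:
#   w_i += lr * (observed true groundings of formula i - expected groundings)
import numpy as np

def online_mln_step(weights, observed_counts, expected_counts, lr=0.05):
    return weights + lr * (observed_counts - expected_counts)

w = np.zeros(3)                                  # three formulas, weights at 0
stream = [(np.array([2.0, 0.0, 1.0]), np.array([1.2, 0.4, 1.0])),
          (np.array([1.0, 1.0, 0.0]), np.array([0.8, 0.9, 0.3]))]
for n_obs, n_exp in stream:                      # instances arriving online
    w = online_mln_step(w, n_obs, n_exp)
print(w)
</code>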
== 3D Model / Material / Lighting Developer (HiWi) ==

Developing and improving existing 3D models in Blender / Maya (or other tools). Importing the models into Unreal Engine, where the materials and lighting should be improved to be as close to realism as possible. \\
Bonus: working with state-of-the-art 3D scanners.

Requirements:
  * Experience with Blender / Maya (or other tools)
  * Knowledge of Unreal Engine material / lighting development

Contact:

== HiWi-Position: Knowledge Representation & Language Understanding for Intelligent Robots ==

In the context of the research project RoboHow [1], we are investigating methods for combining multimodal sources of knowledge (e.g. video, natural-language recipes or computer games), in order to enable mobile robots to autonomously acquire new high-level skills like cooking meals or straightening up rooms (see also the video [2]).

The Institute for Artificial Intelligence is hiring a student researcher for the development and the integration of probabilistic methods in AI, which enable intelligent robots to understand, interpret and execute natural-language instructions from recipes from the World Wide Web.

This HiWi-Position can serve as a starting point for future Bachelor's or Master's theses.

Tasks:
  * Implementation of an interface to the Robot Operating System (ROS); a minimal sketch follows after this posting.
  * Linkage of the knowledge base to the executive of the robot.
  * Support

Requirements:
  * Studies in Computer Science
  * Basic skills in Artificial Intelligence
  * Optional: basic skills in Probability Theory
  * Optional: basic skills in Machine Learning
  * Good programming skills in Python and Java

Hours: 10-20 h/week

Contact: [[team:

[1] www.robohow.eu\\
[2] http://www.youtube.com/watch?v=0eIryyzlRwA
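To make the ROS task concrete, here is a minimal, hedged sketch; the topic names and message type are hypothetical placeholders, only the rospy calls themselves are standard:

<code python>
#!/usr/bin/env python
# Hypothetical bridge node: relays instruction strings coming from a
# knowledge base topic to the robot executive's topic.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node('kb_to_executive_bridge')
    pub = rospy.Publisher('/executive/instruction', String, queue_size=10)

    def on_kb_result(msg):
        rospy.loginfo('forwarding instruction: %s', msg.data)
        pub.publish(msg)

    rospy.Subscriber('/knowledge_base/result', String, on_kb_result)
    rospy.spin()   # process callbacks until the node is shut down

if __name__ == '__main__':
    main()
</code>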
== Integrating PR2 in the Unreal Game Engine Framework (BA/MA/HiWi) ==

Integrating the [[https://www.willowgarage.com/|PR2]] robot in the Unreal game engine framework.

Requirements:
  * Good programming skills in C/C++
  * Basic physics/rendering engine knowledge
  * Basic ROS knowledge
  * UE4 basic tutorials

Contact: [[team:
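One common way to connect a ROS robot to an external engine (an assumption on my part, not stated in the posting) is the rosbridge JSON protocol. The sketch below subscribes to the PR2's joint states this way; it assumes a rosbridge server on localhost:9090 and the websocket-client package.

<code python>
# Hypothetical sketch: receive PR2 joint states in an external process
# (e.g. the game) via rosbridge. Requires `pip install websocket-client`.
import json
import websocket

ws = websocket.create_connection('ws://localhost:9090')
ws.send(json.dumps({'op': 'subscribe',
                    'topic': '/joint_states',
                    'type': 'sensor_msgs/JointState'}))

for _ in range(10):                        # read a few messages, then stop
    msg = json.loads(ws.recv())['msg']
    # here the joint positions would drive the simulated robot's skeleton
    print(dict(zip(msg['name'][:3], msg['position'][:3])))

ws.close()
</code>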
== Kitchen Activity Games in a Realistic Robotic Simulator (BA/MA/HiWi) ==

Developing new activities and improving the current simulation framework built on top of the [[http://gazebosim.org/|Gazebo]] robotic simulator. Creating a custom GUI for the game, in order to launch new scenarios, save logs etc. (a GUI sketch follows below).

Requirements:
  * Good programming skills in C/C++
  * Basic physics/rendering engine knowledge
  * Gazebo simulator basic tutorials

Contact: [[team:
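Purely as an illustration of the GUI part (hypothetical, not the project's code): a minimal Tkinter panel that starts a scenario as a subprocess and appends timestamped entries to a log file. The launch command is a placeholder for the real scenario launcher.

<code python>
# Hypothetical mini-GUI for launching scenarios and saving logs.
import subprocess
import time
import tkinter as tk

SCENARIO_CMD = ['echo', 'starting kitchen scenario']   # placeholder command

def save_log(event):
    with open('game_events.log', 'a') as f:
        f.write('%s %s\n' % (time.strftime('%Y-%m-%d %H:%M:%S'), event))

def launch_scenario():
    subprocess.Popen(SCENARIO_CMD)    # non-blocking start of the scenario
    save_log('scenario launched')

root = tk.Tk()
root.title('Kitchen Activity Games')
tk.Button(root, text='Launch scenario', command=launch_scenario).pack(padx=20, pady=5)
tk.Button(root, text='Save log entry',
          command=lambda: save_log('manual mark')).pack(padx=20, pady=5)
root.mainloop()
</code>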
== Integrating Eye Tracking in the Kitchen Activity Games (BA/MA) ==

Integrating an eye tracker in the [[http://gazebosim.org/|Gazebo]]-based Kitchen Activity Games framework.

Requirements:
  * Good programming skills in C/C++
  * Gazebo simulator basic tutorials

Contact: [[team:
== Realistic Grasping using Unreal Engine (BA/MA/HiWi) ==

The objective of the project is to implement various human-like grasping approaches in a game developed using [[https://www.unrealengine.com/|Unreal Engine]].

The game consists of a household environment where a user has to execute various given tasks, such as cooking a dish, setting the table, cleaning the dishes etc. The interaction is done using various sensors to map the user's hands onto the virtual hands in the game.

In order to improve the interaction, the user should be able to switch at runtime between the grasp types (e.g. power grasp, precision grip etc.) he or she would like to use.

Requirements:
  * Good programming skills in C++
  * Good knowledge of Unreal Engine
  * Experience with skeletal control / animations / 3D models in Unreal Engine.

Contact: [[team/

== Hand Skeleton Tracking Using Two Leap Motion Devices (BA/MA) ==

Improving the skeletal tracking of the hand by combining the data from two Leap Motion devices (a fusion sketch follows below).

The tracked hand can then be used as input for the Kitchen Activity Games framework.

Requirements:
  * Good programming skills in C/C++

Contact: [[team:
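A hedged sketch of the fusion idea (my illustration; the vendor SDK is stubbed out, since its exact calls depend on the SDK version): combine corresponding joint positions from the two devices, weighting each observation by the device's tracking confidence.

<code python>
# Confidence-weighted fusion of hand joints from two trackers.
# Each frame is a dict: joint name -> (xyz position, confidence in [0, 1]).
import numpy as np

def fuse(frames):
    fused = {}
    for joint in set().union(*(f.keys() for f in frames)):
        obs = [f[joint] for f in frames if joint in f]
        pts = np.stack([p for p, _ in obs])
        w = np.array([c for _, c in obs])
        if w.sum() > 0:                     # skip joints lost by all devices
            fused[joint] = (w[:, None] * pts).sum(axis=0) / w.sum()
    return fused

# Toy usage: device A barely sees the thumb tip, device B sees it well.
frame_a = {'index_tip': (np.array([0.10, 0.02, 0.30]), 0.9),
           'thumb_tip': (np.array([0.05, 0.00, 0.28]), 0.1)}
frame_b = {'index_tip': (np.array([0.11, 0.02, 0.31]), 0.8),
           'thumb_tip': (np.array([0.07, 0.01, 0.29]), 0.9)}
print(fuse([frame_a, frame_b]))
</code>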
== Unreal Engine Editor Developer ==
{{ :research:unreal_editor.png?150|}}

Creating new user interfaces (panel customization) for various internal plugins using the Unreal C++ framework.

Requirements:
  * Good C++ programming skills
  * Familiar with the Slate UI framework (see the [[https://docs.unrealengine.com/|Unreal Engine documentation]])
  * Familiar with the Unreal Engine API

Contact: [[team:

== Fluid Simulation in Gazebo ==
{{ :research:fluid.png?200|}}

[[http://gazebosim.org/|Gazebo]] currently only supports rigid-body physics engines (ODE, Bullet etc.); however, in some cases fluids are preferred in order to simulate the given environment as realistically as possible.

Currently there is an interface between Gazebo and an SPH-based fluid simulation implemented with the Fluidix library.

The computational method for the fluid simulation is SPH (smoothed-particle hydrodynamics); however, newer and better methods based on SPH are currently available and should be implemented (e.g. PCISPH/IISPH). A toy SPH example follows below.

The interaction between the fluid and the rigid objects is currently a naive one: forces and torques are applied only from particle collisions, not taking into account pressure and other forces.

Another topic would be the visualization of the fluid, which is currently done by rendering every individual particle; more realistic surface rendering could be investigated in the rendering engine.

Requirements:
  * Good programming skills in C/C++
  * Interest in fluid simulation
  * Basic physics/rendering engine knowledge
  * Gazebo simulator and Fluidix basic tutorials

Contact: [[team:
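To give a feel for the method (an illustrative toy, not the project's code): in SPH, each particle's density is a kernel-weighted sum over its neighbours, and pressure forces are then derived from the densities via an equation of state. The snippet computes densities with the standard poly6 kernel in 2D.

<code python>
# Toy SPH density computation (2D, poly6 kernel). Real solvers add neighbour
# search, pressure/viscosity forces and time integration (SPH -> PCISPH etc.).
import numpy as np

def poly6_2d(r2, h):
    """2D poly6 smoothing kernel, evaluated at squared distances r2."""
    w = np.zeros_like(r2)
    mask = r2 < h * h
    w[mask] = (4.0 / (np.pi * h**8)) * (h * h - r2[mask]) ** 3
    return w

def densities(positions, mass=1.0, h=0.3):
    """rho_i = sum_j m * W(|x_i - x_j|, h); brute force, O(n^2)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = (diff ** 2).sum(-1)
    return mass * poly6_2d(r2, h).sum(axis=1)

xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
rho = densities(pts)
print(rho.min(), rho.max())   # boundary particles end up with lower density
</code>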
== Automated Sensor Calibration Toolkit (MA) ==

Computer vision is an important part of autonomous robots. For robots, the image sensors are the main source of information about the surrounding world. Each camera is different, even if they come from the same production line. For computer vision, especially for robots manipulating their environment, precise camera calibration is essential.

The topic of this master's thesis is to develop an automated system for calibrating cameras, especially RGB-D cameras like the Kinect v2 (a minimal intrinsic-calibration sketch follows below).

The system should:
  * be independent of the camera type
  * estimate intrinsics and extrinsics
  * support depth calibration (in the RGB-D case)
  * integrate capabilities from Halcon [1]

Requirements:
  * Good programming skills in Python and C/C++
  * ROS, OpenCV

[1] http://

Contact: [[team:
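For the intrinsic part, here is a minimal sketch using OpenCV's standard chessboard pipeline; the board size and image paths are placeholders:

<code python>
# Minimal intrinsic calibration: detect chessboard corners in a set of
# images, then solve for camera matrix K and distortion coefficients.
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob('calib/*.png'):            # placeholder image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print('reprojection error: %.3f px' % rms)
print('camera matrix:\n', K)
</code>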
== On-the-fly 3D CAD Model Creation (MA) ==

Create models at runtime for unknown textured objects based on depth and color information. Track the object and update the model with more detailed information as more views become available (a registration sketch follows below).

Requirements:
  * Good programming skills in C/C++
  * Strong background in computer vision
  * ROS, OpenCV, PCL

Contact:
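One way to grow such a model (a sketch under my own assumptions; the posting names OpenCV/PCL, while this uses the equivalent Open3D calls) is to register each incoming point cloud against the accumulated model with ICP and merge it:

<code python>
# Accumulate a point-cloud model from a stream of frames via ICP.
# Frame loading is stubbed with random clouds so the script runs standalone.
import numpy as np
import open3d as o3d

def to_cloud(points):
    pc = o3d.geometry.PointCloud()
    pc.points = o3d.utility.Vector3dVector(points)
    return pc

model = to_cloud(np.random.rand(500, 3))           # stand-in for first frame
for _ in range(3):                                 # stand-in for a frame stream
    frame = to_cloud(np.random.rand(500, 3) + 0.01)
    reg = o3d.pipelines.registration.registration_icp(
        frame, model, 0.05,                        # max correspondence distance
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    frame.transform(reg.transformation)            # align frame to the model
    model += frame                                 # merge the aligned points
    model = model.voxel_down_sample(voxel_size=0.01)  # keep the model compact

print(model)
</code>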
== Simulation of a Robot's Belief State to Support Perception (MA) ==

Create a simulation environment that represents the robot's current belief state and can be updated frequently. Use off-screen rendering to investigate the affordances these objects possess, in order to support segmentation and related perception tasks.

Requirements:
  * Good programming skills in C/C++
  * Strong background in computer vision
  * Gazebo, OpenCV, PCL

Contact: [[team:
== Multi-Expert Segmentation of Cluttered and Occluded Scenes ==

Objects in a human environment are usually found in challenging scenes. They can be stacked upon each other, touching or occluding one another, and can be found in drawers, cupboards, refrigerators and so on. In order to execute a task, a personal robot assistant needs to detect and recognize these objects. In this thesis, a multi-modal approach to interpreting cluttered scenes is to be investigated, combining the outputs of several segmentation experts (a toy fusion sketch follows below).

Requirements:
  * Good programming skills in C/C++
  * Strong background in 3D vision
  * Basic knowledge of ROS, OpenCV, PCL

Contact: [[team:
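As a toy illustration of combining experts (my sketch; the "expert" masks are random stand-ins for e.g. color-, geometry- and learning-based segmenters): per-pixel majority voting over binary object masks.

<code python>
# Fuse binary masks from several segmentation experts by majority vote.
import numpy as np

def fuse_masks(masks, min_votes=None):
    """masks: (n_experts, H, W) boolean array -> fused boolean mask."""
    masks = np.asarray(masks, dtype=bool)
    if min_votes is None:
        min_votes = masks.shape[0] // 2 + 1        # strict majority
    return masks.sum(axis=0) >= min_votes

rng = np.random.default_rng(1)
experts = rng.random((3, 4, 4)) > 0.5              # stand-in expert outputs
print(fuse_masks(experts).astype(int))
</code>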
== OpenEASE Rendering ==

Implementing rendering functionality for the openEASE web-based knowledge service.

Requirements:
  * Good C++ programming skills
  * Familiar with HTML5 and JavaScript
  * Familiar with the [[https://docs.unrealengine.com/|Unreal Engine documentation]]
  * Familiar with basic ROS communication

Contact: [[team: