~~NOTOC~~
=====Open researcher positions=====

=====Theses and Student Jobs=====
If you are looking for a bachelor/master thesis or a job as a student research assistant, you may find some interesting opportunities on this page.

== Knowledge-enabled PID Controller for 3D Hand Movements in Virtual Environments (BA/MA/HiWi) ==

Implementing a PID controller for 3D hand movements in virtual environments. The real hand movements of the human user will be mapped onto the hands of a virtual character. The controller should ensure that the virtual hands follow the user's movements smoothly and accurately.
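
As a rough illustration of the kind of control loop this topic starts from, here is a minimal per-axis PID sketch in plain C++. The gains, the ''Vec3'' type, and the direct velocity integration are illustrative assumptions, not part of the project framework.

<code cpp>
#include <algorithm>

// Minimal single-axis PID controller (illustrative sketch).
struct Pid {
    double kp, ki, kd;      // gains: placeholder values, must be tuned
    double integral = 0.0;
    double prevError = 0.0;

    double step(double target, double current, double dt) {
        const double error = target - current;
        integral = std::clamp(integral + error * dt, -1.0, 1.0);  // anti-windup
        const double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

struct Vec3 { double x, y, z; };

// One controller per axis: turns the tracked user hand pose into a
// velocity command for the virtual hand.
struct HandFollower {
    Pid px{8.0, 0.1, 0.4}, py{8.0, 0.1, 0.4}, pz{8.0, 0.1, 0.4};

    Vec3 step(const Vec3& target, const Vec3& current, double dt) {
        return { px.step(target.x, current.x, dt),
                 py.step(target.y, current.y, dt),
                 pz.step(target.z, current.z, dt) };
    }
};

int main() {
    HandFollower follower;
    Vec3 hand{0.0, 0.0, 0.0};
    const Vec3 tracked{0.3, 0.2, 0.1};  // pose from a tracking device (assumed)
    for (int i = 0; i < 100; ++i) {
        const Vec3 cmd = follower.step(tracked, hand, 0.01);
        // A real setup would feed cmd into the physics engine; here we
        // integrate it directly so the example is self-contained.
        hand = { hand.x + cmd.x * 0.01, hand.y + cmd.y * 0.01, hand.z + cmd.z * 0.01 };
    }
}
</code>
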
Requirements:
  * Good C++ programming skills
  * Familiar with PID controllers
  * Experience with simulators/game engines is recommended
  * Experience with Unreal Engine
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:
<html><!--
== Lisp / CRAM support assistant (HiWi) ==

Technical support for the group for Lisp and the CRAM framework. \\
8+ hours per week for up to 1 year (paid).

Requirements:
  * Good programming skills
  * Basic ROS knowledge

The student will be introduced to the CRAM framework, a robot programming framework written in Lisp, at the beginning of the job. Afterwards, the student will be responsible for assisting people who are not familiar with the framework, explaining the parts they don't understand, and pointing them to the relevant documentation.

Contact: [[team:
--></html>
== Mesh Editing / Mesh Segmentation / Cutting (Student Job / HiWi) ==

Editing and cutting meshes of 3D models in Blender / Maya (or other modeling tools).

Requirements:
  * Good knowledge of 3D modeling
  * Familiar with Blender / Maya (or other)

Contact: [[team:mona_abdel-keream|Mona Abdel-Keream]]
== 3D Model / Material / Lighting Developer (Student Job / HiWi) ==

Developing new and improving existing 3D models in Blender (or other modeling tools) and preparing them, together with their materials and lighting, for use in Unreal Engine.

Bonus: working with state-of-the-art 3D scanners.

Requirements:
  * Experience with Blender / Maya (or other)
  * Knowledge of the Unreal Engine material editor
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:
<html><!--
== Integrating the PR2 Robot in Unreal Engine (BA/MA) ==
{{ :research:unreal_ros_pr2.png?100|}}

Integrating the PR2 robot into the Unreal Engine framework and connecting it to ROS.

Requirements:
  * Good programming skills in C/C++
  * Basic physics/rendering engine knowledge
  * Basic ROS knowledge
  * UE4 basic tutorials

Contact: [[team:
== Realistic Grasping using Unreal Engine (BA/MA/HiWi) ==

The objective of the project is to implement various human-like grasping approaches in a game environment developed with Unreal Engine.
The game consists of a household environment where a user has to execute various given tasks, such as cooking a dish, setting the table, or cleaning the dishes. The interaction is done using various sensors that map the user's hands onto the virtual hands in the game.

In order to make manipulating objects easier, the user should be able to switch the type of grasp (pinch, power grasp, precision grip, etc.) at runtime.
Requirements:
  * Good programming skills in C++
  * Good knowledge of the Unreal Engine API
  * Experience with skeletal control / animations / 3D models in Unreal Engine
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:
--></html>

== Unreal Engine Editor Developer (Student Job / HiWi) ==

Creating new user interfaces (panel customization) for various internal plugins using the Unreal C++ framework.
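
For a flavor of what editor panel customization involves, here is a minimal sketch of registering a custom editor tab with Unreal's Slate UI framework. The module and tab names are placeholders, and include paths may vary between engine versions.

<code cpp>
#include "Modules/ModuleManager.h"
#include "Framework/Docking/TabManager.h"
#include "Widgets/Docking/SDockTab.h"
#include "Widgets/Text/STextBlock.h"

static const FName ExamplePanelTabName("ExamplePanel");  // placeholder name

class FExampleEditorModule : public IModuleInterface
{
public:
    virtual void StartupModule() override
    {
        // Make the tab spawnable from the editor's Window menu.
        FGlobalTabmanager::Get()->RegisterNomadTabSpawner(
            ExamplePanelTabName,
            FOnSpawnTab::CreateStatic(&FExampleEditorModule::SpawnPanel))
            .SetDisplayName(NSLOCTEXT("ExamplePanel", "TabTitle", "Example Panel"));
    }

    virtual void ShutdownModule() override
    {
        FGlobalTabmanager::Get()->UnregisterNomadTabSpawner(ExamplePanelTabName);
    }

private:
    static TSharedRef<SDockTab> SpawnPanel(const FSpawnTabArgs& Args)
    {
        // The tab content would normally be a custom Slate widget tree;
        // a text block stands in for it here.
        return SNew(SDockTab)
            .TabRole(ETabRole::NomadTab)
            [
                SNew(STextBlock)
                .Text(NSLOCTEXT("ExamplePanel", "Hello", "Hello from a custom panel"))
            ];
    }
};

IMPLEMENT_MODULE(FExampleEditorModule, ExampleEditorModule)
</code>
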
Requirements:
  * Good C++ programming skills
  * Familiar with the Slate UI framework
  * Familiar with the Unreal Engine API
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:
== OpenEASE rendering (Student Job / HiWi) ==

Implementing rendering functionality for the openEASE web interface using Unreal Engine.

Requirements:
  * Good C++ programming skills
  * Familiar with the Unreal Engine API
  * Familiar with HTML5 and JavaScript
  * Familiar with the openEASE platform
  * Familiar with basic ROS communication
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:andrei_haidu|Andrei Haidu]]