Differences
This shows you the differences between two versions of the page.
Previous revision: jobs [2015/03/19 13:27] – [Theses and Jobs] raider
Current revision: jobs [2019/06/13 09:51] – [Theses and Student Jobs] abdelker

Line 1:
~~NOTOC~~
- =====Theses and Jobs=====
- If you are looking for a bachelor/master thesis or a student job, you may find an interesting topic below.
+ =====Open researcher positions=====
+ =====Theses and Student Jobs=====
+ If you are looking for a bachelor/master thesis or a student job, you may find an interesting topic below.
- == GPU-based Parallelization of Numerical Optimization Techniques (BA/MA) ==
- In the field of Machine Learning, numerical optimization techniques play a focal role. However, as models grow larger, traditional implementations on single-core CPUs suffer from sequential execution, causing a severe slow-down. In this thesis, state-of-the-art GPU frameworks (e.g. CUDA) are to be investigated in order to implement numerical optimizers that substantially profit from parallel execution.
- Requirements:
-   * Skills in numerical optimization algorithms
-   * Good programming skills in Python and C/C++
- Contact: [[team:
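A minimal illustration of the data-parallel structure such optimizers exploit (added here for clarity, not part of the original posting): each per-sample gradient term is independent, so the map step can run in parallel. The sketch uses C++17 parallel algorithms on made-up toy data; in the thesis itself this inner loop would typically become a CUDA kernel.

<code cpp>
// Illustrative only: one data-parallel gradient-descent loop for a toy
// 1-D least-squares problem. The per-sample gradient terms are independent,
// which is what a GPU implementation would exploit; here the same structure
// is expressed with C++17 parallel algorithms instead of CUDA.
#include <algorithm>
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

int main() {
  std::vector<double> x{1, 2, 3, 4}, y{2, 4, 6, 8};  // toy data, y = 2*x
  std::vector<std::size_t> idx(x.size());
  std::iota(idx.begin(), idx.end(), 0);

  double w = 0.0;          // single model parameter
  const double lr = 0.01;  // learning rate
  for (int step = 0; step < 1000; ++step) {
    // parallel map over the samples, then reduce to a single gradient value
    const double grad = std::transform_reduce(
        std::execution::par_unseq, idx.begin(), idx.end(), 0.0, std::plus<>{},
        [&](std::size_t i) { return 2.0 * (w * x[i] - y[i]) * x[i]; });
    w -= lr * grad / static_cast<double>(x.size());
  }
  std::printf("w = %f\n", w);  // converges towards 2.0
  return 0;
}
</code>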
- == Online Learning of Markov Logic Networks for Natural-Language Understanding ==
- Markov Logic Networks (MLNs) combine the expressive power of first-order logic and probabilistic graphical models. In the past, they have been successfully applied to natural-language understanding tasks.
- Requirements:
-   * Experience
-   * Experience with statistical relational learning
-   * Good programming skills in Python
- Contact: [[team:daniel_nyga|Daniel Nyga]]

+ == Knowledge-enabled PID Controller for 3D Hand Movements in Virtual Environments ==
+ Implementing a force-, velocity- and impulse-based PID controller for precise and responsive hand movements in a virtual environment. The virtual environment runs in Unreal Engine in combination with Virtual Reality devices. The movements of the human user will be mapped to the virtual hands.
+ The controller should be able to dynamically tune itself depending on the executed actions (opening/closing, grasping, etc.).
+ Requirements:
+   * Familiar with PID controllers and control theory
+   * Experience with Unreal Engine
+   * Familiar with version-control systems (git)
+   * Able to work independently with minimal supervision
+ Contact: [[team:andrei_haidu|Andrei Haidu]]
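For the PID-controller topic above, a rough sketch of the basic control step (illustrative only, not code from the project): the error between the target and the current hand position is turned into a force command, and the gains can be retuned at runtime, which is where the dynamic tuning described above would hook in. Vec3 and Pid are placeholder names.

<code cpp>
// Illustrative PID step: error between target and current position -> force.
// All names here are placeholders, not the project's actual classes.
struct Vec3 { double x{}, y{}, z{}; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

class Pid {
 public:
  Pid(double kp, double ki, double kd) : kp_(kp), ki_(ki), kd_(kd) {}

  // One control step per physics tick of length dt (seconds).
  Vec3 Update(Vec3 target, Vec3 current, double dt) {
    const Vec3 error = target - current;
    integral_ = integral_ + dt * error;
    const Vec3 derivative = (1.0 / dt) * (error - prev_error_);
    prev_error_ = error;
    return kp_ * error + ki_ * integral_ + kd_ * derivative;  // force command
  }

  // Runtime re-tuning, e.g. when switching between gross arm motion
  // and fine finger motion.
  void SetGains(double kp, double ki, double kd) { kp_ = kp; ki_ = ki; kd_ = kd; }

 private:
  double kp_, ki_, kd_;
  Vec3 integral_{}, prev_error_{};
};
</code>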
- ==HiWi-Position:
- In the context of the European research project RoboHow.Cog [1,2] we are investigating methods for combining multimodal sources of knowledge (e.g. video, natural-language recipes or computer games) in order to enable mobile robots to autonomously acquire new high-level skills like cooking meals or straightening up rooms.
- The Institute for Artificial Intelligence is hiring a student researcher for the development and integration of probabilistic methods in AI, which enable intelligent robots to understand, interpret and execute natural-language instructions from recipes from the World Wide Web.
- This HiWi position can serve as a starting point for future Bachelor's or Master's theses.
- Tasks:
-   * Implementation of an interface to the Robot Operating System (ROS).
-   * Linkage of the knowledge base to the executive of the robot.
-   * Support
- Requirements:
-   * Studies
-   * Basic skills in Artificial Intelligence
-   * Optional: basic skills in Probability Theory
-   * Optional: basic skills in Machine Learning
-   * Good programming skills in Python and Java
- Hours: 10-20 h/week
- Contact: [[team:daniel_nyga|Daniel Nyga]]

+ <html><!--
+ == Lisp / CRAM support assistant (HiWi) ==
+ Technical support for the group for Lisp and the CRAM framework. \\
+ 8+ hours per week for up to 1 year (paid).
+ Requirements:
+   * Good programming skills
+   * Basic ROS knowledge
+ The student will be introduced to the CRAM framework, a robot programming framework written in Lisp, at the beginning of the job. The student will then be responsible for assisting people who are not familiar with the framework, explaining the parts they don't understand and pointing them to the relevant documentation.
+ Contact: [[team:gayane_kazhoyan|Gayane Kazhoyan]]
+ --></html>
- [1] www.robohow.eu\\
- [2] http://www.youtube.com/watch?v=0eIryyzlRwA

- == Kitchen Activity Games in a Realistic Robotic Simulator ==
- Developing new activities and improving the current simulation framework done under the [[http://gazebosim.org/|Gazebo]] robotic simulator.
- Requirements:
-   * Good programming skills in C/C++
-   * Basic physics/rendering engine knowledge
-   * Gazebo simulator basic tutorials
- Contact: [[team:

- == Integrating Eye Tracking in the Kitchen Activity Games (BA/MA) ==
- Integrating the eye tracker in the [[http://gazebosim.org/|Gazebo]] based Kitchen Activity Games framework and logging the gaze of the user during the gameplay. From this information typical activities should be inferred.
- Requirements:
-   * Good programming skills in C/C++
-   * Gazebo simulator basic tutorials
- Contact: [[team:

- == Hand Skeleton Tracking Using Two Leap Motion Devices (BA/MA) ==
- Improving the skeletal tracking offered by the [[https://www.leapmotion.com/|Leap Motion]] SDK by using two devices.
- The tracked hand can then be used as input for the Kitchen Activity Games framework.
- Requirements:
-   * Good programming skills in C/C++
- Contact: [[team:

+ <html><!--
+ == Mesh Editing ==
+ Requirements:
+   * Good knowledge
+   * Familiar with Blender / Maya (or other)
+ Contact:
+ --></html>

+ <html><!--
+ == 3D Model / Material ==
+ Developing and improving existing 3D models in Blender / Maya (or other). Importing the models in Unreal Engine, where the materials and lighting should be improved to be as close as possible to realism.
+ Bonus: working with state-of-the-art 3D scanners.
+ Requirements:
+   * Experience with Blender / Maya (or other)
+   * Knowledge of Unreal Engine material / lighting development
+   * Familiar with version-control systems (git)
+   * Able to work independently with minimal supervision
+ Contact: [[team:
+ --></html>
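For the Gazebo-based topics above (Kitchen Activity Games, eye tracking), a minimal skeleton of a Gazebo world plugin, which is the usual extension point for adding new activities and logging during gameplay. The plugin name and the logging behaviour are placeholders, not part of the actual framework.

<code cpp>
// Illustrative skeleton of a classic-Gazebo world plugin. The class name and
// what OnUpdate() would log are placeholders for this sketch.
#include <functional>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>

namespace gazebo {
class ActivityLoggerPlugin : public WorldPlugin {
 public:
  void Load(physics::WorldPtr world, sdf::ElementPtr /*sdf*/) override {
    world_ = world;
    // Run OnUpdate() at every physics iteration.
    update_conn_ = event::Events::ConnectWorldUpdateBegin(
        std::bind(&ActivityLoggerPlugin::OnUpdate, this));
  }

 private:
  void OnUpdate() {
    // e.g. read model poses from world_ here and log them
    // for later activity inference.
  }

  physics::WorldPtr world_;
  event::ConnectionPtr update_conn_;
};
GZ_REGISTER_WORLD_PLUGIN(ActivityLoggerPlugin)
}  // namespace gazebo
</code>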
- == Fluid Simulation ==
- {{ :research:fluid.png?200|}}
- [[http://gazebosim.org/|Gazebo]]
- Currently there is an [[http://gazebosim.org/tutorials?
- The computational method for the fluid simulation is SPH (Smoothed Particle Hydrodynamics); however, newer and better methods based on SPH are currently present and should be implemented (PCISPH/IISPH).
- The interaction between the fluid and the rigid objects is a naive one: the forces and torques are applied only from the particle collisions (not taking into account pressure and other forces).
- Another topic would be the visualization of the fluid; currently this is done by rendering every particle. For the rendering engine
- Here is a [[https://vimeo.com/104629835|video]] example of the current state of the fluid in Gazebo.
- Requirements:
-   * Good programming skills in C/C++
-   * Interest in fluid simulation
-   * Basic physics/rendering engine knowledge
-   * Gazebo simulator and Fluidix
- Contact: [[team:

+ <html><!--
+ == Integrating PR2 in the Unreal Game Engine Framework ==
+ {{ :research:unreal_ros_pr2.png?100|}}
+ Integrating the [[https://www.willowgarage.com/pages/pr2/overview|PR2]] robot with [[http://www.ros.org/|ROS]] support in the Unreal Engine framework.
+ Requirements:
+   * Good programming skills in C/C++
+   * Basic physics/rendering engine knowledge
+   * Basic ROS knowledge
+   * UE4 basic tutorials
+ Contact: [[team:
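For the fluid-simulation topic above, an illustrative SPH fragment (not the project's code): density estimation with a poly6 kernel followed by a simple equation of state for pressure. PCISPH/IISPH replace this weakly compressible pressure step with an iterative solve that enforces incompressibility.

<code cpp>
// Illustrative SPH fragment: density from a poly6 kernel, pressure from a
// simple equation of state. Parameter values and names are placeholders.
#include <cmath>
#include <vector>

struct Particle {
  double x, y, z;          // position
  double density = 0.0;
  double pressure = 0.0;
};

void ComputeDensityPressure(std::vector<Particle>& ps, double h, double mass,
                            double rest_density, double stiffness) {
  const double h2 = h * h;
  const double poly6 = 315.0 / (64.0 * M_PI * std::pow(h, 9));  // kernel constant
  for (auto& pi : ps) {
    pi.density = 0.0;
    for (const auto& pj : ps) {  // naive O(n^2); a real solver uses a spatial grid
      const double dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
      const double r2 = dx * dx + dy * dy + dz * dz;
      if (r2 < h2) {
        pi.density += mass * poly6 * std::pow(h2 - r2, 3);
      }
    }
    // weakly compressible equation of state (simplified)
    pi.pressure = stiffness * (pi.density - rest_density);
  }
}
</code>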
- == Automated sensor calibration toolkit ==
- Computer vision is an important part of autonomous robots. For robots, the image sensors are the main source of information about the surrounding world. Each camera is different, even if they are from the same production line. For computer vision, especially for robots manipulating their environment, precise calibration is essential.
- The topic for this thesis is the development of an automated calibration toolkit.
- The system should:
-   * be independent of the camera type
-   * estimate intrinsic and extrinsic parameters
-   * calibrate depth images (case of RGB-D)
-   * integrate capabilities from Halcon [1]
-   * operate autonomously
- Requirements:
-   * Good programming skills in Python and C/C++
-   * ROS, OpenCV
- [1] http://
- Contact: [[team:

+ == Realistic Grasping using Unreal Engine ==
+ The objective of the project is to implement various human-like grasping approaches in a game developed using [[https://www.unrealengine.com/|Unreal Engine]].
+ The game consists of
+ In order to improve the ease of manipulating objects, the user should be able to switch at runtime the type of grasp (pinch, power grasp, precision grip, etc.) he/she would like to use.
+ Requirements:
+   * Good programming skills in C++
+   * Good knowledge of the Unreal Engine API
+   * Experience with skeletal control / animations / 3D models in Unreal Engine
+   * Familiar with version-control systems (git)
+   * Able to work independently with minimal supervision
+ Contact: [[team:
+ --></html>
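For the automated sensor-calibration topic above, a hedged sketch of the intrinsic-calibration core using OpenCV. Board dimensions and image names are placeholders; a full toolkit would add extrinsic and depth (RGB-D) calibration on top of this step.

<code cpp>
// Illustrative intrinsic calibration from checkerboard views with OpenCV.
// Board size, square size and image files are placeholders for this sketch.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
  const cv::Size board(9, 6);   // inner corners of the checkerboard
  const float square = 0.025f;  // square edge length in meters
  std::vector<std::vector<cv::Point3f>> object_points;
  std::vector<std::vector<cv::Point2f>> image_points;
  cv::Size image_size;

  // one reference 3-D corner grid, reused for every view
  std::vector<cv::Point3f> grid;
  for (int y = 0; y < board.height; ++y)
    for (int x = 0; x < board.width; ++x)
      grid.emplace_back(x * square, y * square, 0.0f);

  for (const char* file : {"view1.png", "view2.png", "view3.png"}) {
    cv::Mat gray = cv::imread(file, cv::IMREAD_GRAYSCALE);
    if (gray.empty()) continue;
    image_size = gray.size();
    std::vector<cv::Point2f> corners;
    if (cv::findChessboardCorners(gray, board, corners)) {
      cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                       cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.001));
      image_points.push_back(corners);
      object_points.push_back(grid);
    }
  }

  cv::Mat K, dist;                    // camera matrix and distortion coefficients
  std::vector<cv::Mat> rvecs, tvecs;  // per-view extrinsics
  double rms = cv::calibrateCamera(object_points, image_points, image_size,
                                   K, dist, rvecs, tvecs);
  (void)rms;  // reprojection error; a toolkit would report and verify this
  return 0;
}
</code>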
- == On-the-fly 3D CAD model creation ==
- Create models during runtime.
- Requirements:
-   * Good programming skills in C/C++
-   * Strong background in computer vision
-   * ROS, OpenCV, PCL
- Contact: [[team:thiemo_wiedemeyer|Thiemo Wiedemeyer]]

- == Simulation of a robot's belief state to support perception (MA) ==
- Create a simulation environment that represents the robot's current belief state and can be updated frequently. Use off-screen rendering to investigate the affordances these objects possess, in order to support segmentation.
- Requirements:
-   * Good programming skills in C/C++
-   * Strong background in computer vision
-   * Gazebo, OpenCV, PCL
- Contact: [[team:

- == Multi-expert segmentation of cluttered and occluded scenes ==
- Objects in a human environment are usually found in challenging scenes. They can be stacked upon each other, touching or occluding, and can be found in drawers, cupboards, refrigerators and so on. A personal robot assistant, in order to execute its tasks, needs to be able to detect and segment such objects.
- Requirements:
-   * Good programming skills in C/C++
-   * Strong background in 3D vision
-   * Basic knowledge of ROS, OpenCV, PCL
- Contact: [[team:ferenc_balint-benczedi|Ferenc Balint-Benczedi]]

+ <html><!--
+ == Unreal Engine Editor Developer ==
+ Creating new user interfaces (panel customization) for the Unreal Engine editor.
+ Requirements:
+   * Good C++ programming skills
+   * Familiar with the [[https://
+   * Familiar with Unreal Engine API
+   * Familiar with version-control systems (git)
+   * Able to work independently with minimal supervision
+ Contact: [[team:

+ == OpenEASE rendering in Unreal Engine (BA/MA Thesis, Student Job / HiWi) ==
+ Implementing the rendering of [[http://www.open-ease.org/|openEASE]] data in Unreal Engine.
+ Requirements:
+   * Good C++ programming skills
+   * Familiar with Unreal Engine API
+   * Familiar with HTML5 and JavaScript
+   * Familiar with the [[https://
+   * Familiar with basic ROS communication
+   * Familiar with version-control systems (git)
+   * Able to work independently with minimal supervision
+ Contact: [[team:andrei_haidu|Andrei Haidu]]
+ --></html>