jobs [2023/01/28 08:17] – [Theses and Student Jobs] kording
~~NOTOC~~

=====Open researcher positions=====

We currently don't have any open positions for researchers.

=====Theses and Student Jobs=====

If you are looking for a bachelor/master thesis or a student job, you may find suitable offers on this page.
== Physics-based grasping in VR with finger tracking (Student Job / HiWi) ==

Implementing physics-based grasping models using Manus VR.

Requirements:
  * Good C++ programming skills
  * Familiar with skeletal animations
  * Experience with simulators / physics and game engines
  * Familiar with the Unreal Engine API
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:
== Lisp / CRAM support assistant (HiWi) ==

Technical support for the group for Lisp and the CRAM framework.
8+ hours per week for up to 1 year (paid).

Requirements:
  * Good programming skills in Common Lisp
  * Basic ROS knowledge

The student will be introduced to the CRAM framework, a robot programming framework written in Lisp, at the beginning of the job. The student will then be responsible for assisting people who are not familiar with the framework, explaining the parts they don't understand and pointing them to the relevant documentation.

Contact: [[team:gayane_kazhoyan|Gayane Kazhoyan]]
== Mesh Editing ==

Requirements:
  * Familiar with Blender

Contact: [[team/
== 3D Animation and Modeling (Student Job / HiWi) ==

Developing and improving existing or new 3D (static/skeletal) models and testing the models against Unreal Engine.

Bonus: Working with state-of-the-art 3D scanners.

Requirements:
  * Experience with Blender
  * Knowledge of Unreal Engine material / lighting development
  * Familiar with version-control systems (git)
  * Able to work independently with minimal supervision

Contact: [[team:
== Representing knowledge in a robot-agnostic ontology system for assistive robotics (MA Thesis) ==

The thesis will be jointly supervised by the German Aerospace Center (DLR) and the University of Bremen.

Summary:
  * Investigate existing ontologies, like the Socio-physical Model of Activities (SOMA) from the University of Bremen.
  * Refactor an existing knowledge database (known as the Object DataBase, ODB) used by the DLR into an ontology.
  * Use the ontology to design tasks in the assistive robotics domain using assistive robots provided by the DLR.

Contact: [[team:
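To illustrate what refactoring a flat object database into an ontology can mean in practice, here is a minimal, purely illustrative Python sketch. All class and property names are hypothetical; the actual thesis would work with a proper ontology language such as OWL together with SOMA, not hand-rolled Python.

```python
# Illustrative only: a tiny class hierarchy with inherited properties,
# standing in for an ontology. All names below are hypothetical.

class Ontology:
    def __init__(self):
        self.parents = {}      # class name -> parent class name (or None)
        self.properties = {}   # class name -> {property: value}

    def add_class(self, name, parent=None, **props):
        self.parents[name] = parent
        self.properties[name] = props

    def is_a(self, name, ancestor):
        # Walk the parent chain to answer subclass queries.
        while name is not None:
            if name == ancestor:
                return True
            name = self.parents.get(name)
        return False

    def lookup(self, name, prop):
        # Subclasses inherit properties from their ancestors.
        while name is not None:
            if prop in self.properties.get(name, {}):
                return self.properties[name][prop]
            name = self.parents.get(name)
        return None

onto = Ontology()
onto.add_class("PhysicalObject")
onto.add_class("Drinkware", parent="PhysicalObject", graspable=True)
onto.add_class("Mug", parent="Drinkware", typical_location="kitchen")

print(onto.is_a("Mug", "PhysicalObject"))   # True
print(onto.lookup("Mug", "graspable"))      # True (inherited from Drinkware)
```

The point of the exercise is that a flat record store can only answer exact lookups, while the ontology version also answers questions via the class hierarchy, which is what makes it usable for task design across different robots.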
== Integration of novel objects into Digital Twin Knowledge Bases (MA Thesis) ==

In this thesis, the goal is to make a robotic system learn new objects automatically.
The system should be able to generate the necessary models required for re-detecting the object and also consult online information sources to automatically acquire knowledge about it.

The focus of the thesis is two-fold:
  * Develop methods to automatically infer the object class of new objects. This includes perceiving the object with state-of-the-art sensors, constructing a 3D model of it, and then inferring the object class from online information sources.
  * In a second step, the system should infer factual knowledge about the object from the internet and assert it into a robotic knowledge base. Such knowledge could, for example, include the category of the product, typical object properties such as its weight or typical location, and much more.

Requirements:
  * Knowledge about sensor data processing
  * Interest in working with the KnowRob knowledge processing framework

Contact: [[team:
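The two steps above can be pictured with a small, purely illustrative sketch: a toy triple store stands in for the robotic knowledge base (the thesis itself would target KnowRob), and all object identifiers and facts below are hypothetical.

```python
# Illustrative only: a toy triple store as a stand-in for a robotic
# knowledge base. Step 1 asserts the inferred object class; step 2
# asserts facts gathered from online sources.

class KnowledgeBase:
    def __init__(self):
        self.triples = set()   # (subject, predicate, object) facts

    def assert_fact(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, as in a simple pattern query.
        return [(ts, tp, to) for (ts, tp, to) in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

def integrate_novel_object(kb, object_id, inferred_class, web_facts):
    """Assert the inferred class, then the web-derived facts."""
    kb.assert_fact(object_id, "is_a", inferred_class)
    for prop, value in web_facts.items():
        kb.assert_fact(object_id, prop, value)

kb = KnowledgeBase()
integrate_novel_object(kb, "obj_42", "CerealBox",
                       {"weight_g": "375", "typical_location": "pantry"})
print(kb.query(s="obj_42", p="is_a"))  # [('obj_42', 'is_a', 'CerealBox')]
```

Once the facts are asserted, any later task (for example, fetching the object from its typical location) can be answered by pattern queries against the same store, which is the payoff of integrating novel objects into the knowledge base rather than only into the perception models.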
== Development of Modules for Robot Perception (Student Job / HiWi) ==

In our research group, we focus on the development of modern robot perception systems.
In this context, we are currently offering multiple HiWi positions / student jobs for the following task:
  * Software development for our Robot Perception framework

Requirements:
  * Basic understanding of the ROS middleware and Linux

Contact: [[team:patrick_mania|Patrick Mania]]
== Game Engine Developer (Student Job / HiWi) ==

A recent development in the field of AI is the usage of photorealistic simulations.
In our research group, we focus on the development of modern robots that can make use of the potential of game engines. This requires a number of specialized game-engine plugins that can simulate certain aspects of our research. Another important task is the creation of 3D models.

Therefore, we are currently offering multiple HiWi positions / student jobs for the following tasks:
  * Modelling of objects for use in Unreal Engine 4
  * Creation of specific simulation aspects

Requirements:
  * Knowledge of 3D modelling tools; Blender would be highly preferred

Contact: [[team:patrick_mania|Patrick Mania]]
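One reason game engines matter for this research is that every simulated scene comes with exact ground-truth annotations for free, which can serve as training data for perception models. The following toy sketch illustrates only that idea: the object classes are hypothetical and no rendering happens; a real pipeline would render the scenes in Unreal Engine 4.

```python
# Toy sketch: generating labelled synthetic scenes. In simulation, the
# ground-truth class and bounding box of every object are known exactly,
# so labelled training data comes at no annotation cost.
import random

OBJECT_CLASSES = ["mug", "plate", "cereal_box"]  # hypothetical classes

def generate_scene(num_objects, seed=None):
    rng = random.Random(seed)  # seeded for reproducible scenes
    annotations = []
    for _ in range(num_objects):
        cls = rng.choice(OBJECT_CLASSES)
        x, y = rng.uniform(0, 1), rng.uniform(0, 1)     # normalized position
        w, h = rng.uniform(0.05, 0.2), rng.uniform(0.05, 0.2)
        annotations.append({"class": cls, "bbox": (x, y, w, h)})
    # A real system would return the rendered image alongside the labels.
    return annotations

dataset = [generate_scene(num_objects=5, seed=i) for i in range(100)]
print(len(dataset), len(dataset[0]))  # 100 5
```

Because each scene is generated from a seed, the same labelled data can be re-created exactly, which is handy when debugging a perception model against its training set.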
Prof. Dr. hc. Michael Beetz PhD
Head of Institute
Contact via
Andrea Cowley
assistant to Prof. Beetz
ai-office@cs.uni-bremen.de