SUTURO VaB RoboCup@Home 2024

The SUTURO VaB team video showing our results and scientific expertise.

Welcome to the SUTURO VaB team! It is a joint team between the Institute for Artificial Intelligence (IAI) at the University of Bremen and TU Vienna (TUW). By joining forces, our RoboCup@Home team combines state-of-the-art robotic perception (TUW) with state-of-the-art robot control architectures (IAI). Both parties in the SUTURO VaB team are strongly interested in autonomous mobile manipulation on robot platforms.

The robot control architecture developed by the IAI is also integrated within the CRC 1320 Everyday Activity Science and Engineering (EASE), an interdisciplinary research center at the University of Bremen. Its core purpose is to advance the understanding of how human-scale manipulation tasks can be mastered by robotic agents. Within EASE, general software frameworks for robotic agents (CRAM, KnowRob, Giskard, and RoboKudo) are created and further developed. These frameworks transfer well to the RoboCup@Home context, since the capabilities required to pass the RoboCup@Home challenges overlap substantially with those the robots within EASE already provide. SUTURO VaB participants benefit from the expertise EASE researchers have acquired through years of working with autonomous robots performing everyday activities.

TUW delivers a comprehensive suite of perception modules that provide semantic information such as object identities, locations, and grasp configurations. The research focus of these perception approaches is the target domain of mobile robot manipulation. We look forward to employing and extending our approaches on the exciting challenges of the RoboCup@Home domain.
GIF: Reactive placing using the force-torque sensor.

GIF: Failure detection and recovery (accelerated to approximately 200% speed).

GIF: Opening doors and detecting whether the gripper slips off.

Motivation and Goal

We are participating in RoboCup@Home at the 27th edition of RoboCup, which will take place in Eindhoven in 2024.
RoboCup@Home provides a stage for service robots and the problems these robots can tackle, such as interacting with humans and helping them in their everyday life. In the competition, participants have to complete different challenges in two stages. We chose the following challenges:

Stage 1
  • Carry my Luggage
  • GPSR
  • Receptionist
  • Serve Breakfast
  • Storing Groceries
Stage 2
  • Clean the Table
  • EGPSR
  • Restaurant
RoboCup@Home presents a series of increasingly challenging tasks designed to advance the development of autonomous domestic service robots. These tasks, set within a home-like environment, require robots to demonstrate a variety of skills that are crucial for real-world applications.
Common key challenges include object recognition and manipulation, where robots must accurately identify and interact with various household items. This demands sophisticated vision systems capable of processing complex visual information in dynamic settings. Navigation and obstacle avoidance are also critical, necessitating advanced perception-action loops for smooth and safe movement within a household space. These robots must also possess a robust knowledge base to understand and execute complex, underspecified commands.
RoboCup@Home tasks are very broad in nature and require the robot to act competently in areas like human-robot interaction, spatial reasoning, natural language understanding, and object manipulation. Effective planning, reasoning, and decision-making algorithms are essential for such robots to autonomously perform tasks like assisting humans, cleaning, or fetching objects.
Solving such challenges requires a complex architecture of specialized software frameworks developed for robotic applications. Our system is split into integrated subsystems of Perception, Manipulation, Knowledge, Natural Language Processing and Planning. The methodology of these subsystems is explained below.

Our team has participated in previous RoboCup@Home events (German Open 2019, World 2021, World 2023). In 2023, we placed 2nd in the Carry my Luggage task in the DSPL league.

Team

We are happy to present our whole team:

Team Lead
Vanessa Hassouna
hassouna[at]uni-bremen[.]de

Team Co-Lead
Alina Hawkin
hawkin[at]uni-bremen[.]de

Knowledge
Alina Konschin
alina6[at]uni-bremen[.]de
Felix Schmidt
felix18[at]uni-bremen[.]de

Manipulation
Lukas Hauschild
lukhau[at]uni-bremen[.]de
Yannis Bülter
ybuelter[at]uni-bremen[.]de

Perception
Leonie Gollner
lgollner[at]uni-bremen[.]de
Lukas Bollhorst
lubo1[at]uni-bremen[.]de
Alexander Haberl
Weibel[at]acin.tuwien.ac.at
Jean-Baptiste Weibel
Weibel[at]acin.tuwien.ac.at
Peter Hoenig
Weibel[at]acin.tuwien.ac.at

Planning
Celina Röll
celina5[at]uni-bremen[.]de
Juliane Schulz
juschulz[at]uni-bremen[.]de
Mohammad Aswad
aswad[at]uni-bremen[.]de

NLP
Andreas Benischke
anbe[at]uni-bremen[.]de
Christian Lukanowski
ch_lu[at]uni-bremen[.]de

Methodology and implementation

Knowledge:
To fulfill complex tasks, a robot needs knowledge and memory of its environment. While the robot acts in its world, it recognizes objects and manipulates them through pick-and-place tasks. With KnowRob, a belief state provides episodic memory of the robot's experience, recording each of its cognitive activities. Via ontologies, objects can be classified and put into context, which enables logical reasoning over the environment and intelligent decision making.
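
For illustration, the sketch below asks the belief state for objects of a given class and their poses through KnowRob's Prolog interface. It assumes a running KnowRob instance and the rosprolog_client Python package from KnowRob's rosprolog; the predicate and ontology names are illustrative rather than guaranteed.

    # Sketch: query KnowRob's belief state via rosprolog (assumptions: a
    # running KnowRob instance; illustrative predicate and ontology names).
    import rospy
    from rosprolog_client import Prolog

    rospy.init_node('knowledge_query_example')
    prolog = Prolog()

    # Ask for all objects classified as cups in the ontology and their poses.
    query = prolog.query("instance_of(Obj, soma:'Cup'), is_at(Obj, Pose).")
    for solution in query.solutions():
        print('Cup %s at pose %s' % (solution['Obj'], solution['Pose']))
    query.finish()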

Manipulation:
For manipulation tasks in the environment, we use the open-source motion planning framework Giskard. It uses constraint- and optimization-based task space control to generate trajectories for the whole body of mobile manipulators. Giskard offers interfaces to plan and execute motion goals and to modify its world model. A selection of predefined basic motion goals can be arbitrarily combined to describe a motion. If an environment model is present, such goals can also be defined on the environment, e.g., to open a door.

GIF: Simulation of the HSR opening a human-sized door, accelerated to 200% speed; with a real-time factor of approximately 0.46, the effective playback speed is about 92%.
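
As an illustration, a minimal sketch of commanding Giskard through its Python wrapper follows. It assumes a running Giskard instance and giskardpy's GiskardWrapper interface (method names can differ between Giskard releases); the pose values and frame names are illustrative.

    # Sketch: send a Cartesian motion goal to Giskard (assumptions: running
    # Giskard instance; wrapper API of the giskardpy version we use;
    # illustrative frame names for the HSR).
    import rospy
    from geometry_msgs.msg import PoseStamped
    from giskardpy.python_interface import GiskardWrapper

    rospy.init_node('giskard_example')
    giskard = GiskardWrapper()

    # Describe the motion as a Cartesian goal for the gripper in the map frame.
    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.pose.position.x = 1.0
    goal.pose.position.y = 0.5
    goal.pose.position.z = 0.8
    goal.pose.orientation.w = 1.0

    giskard.set_cart_goal(goal_pose=goal, tip_link='hand_palm_link', root_link='map')
    giskard.plan_and_execute()  # Giskard generates and executes the trajectory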

Planning:
Planning is responsible for the high-level control and failure handling of the autonomous robot system, utilizing generic cognitive strategies. It combines the perception, knowledge, and manipulation frameworks within high-level plans written in the CRAM (Cognitive Robot Abstract Machine) system. CRAM enables the implementation of various recovery strategies for failures and provides a lightweight simulation tool for prospection, which simulates the potential outcome of the current plans and their respective parameters before executing them on the real robot, increasing the success of the performed action by discarding faulty parameters in advance. CRAM also allows high-level plans to be written generically, so that the plans are robot-platform independent. Plan execution results can furthermore be recorded and reasoned about later, adapting and improving the success of upcoming plan executions.
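
The following Python sketch illustrates the retry-with-recovery pattern such plans express. It is not CRAM code; all names and the placeholder bodies are hypothetical.

    # Sketch of a CRAM-style recovery strategy: try an action, catch a
    # domain-specific failure, recover (re-perceive), and retry differently.
    class ObjectNotGraspedError(Exception):
        """Signals that the gripper closed without holding the target object."""

    def perceive(obj_name):
        # Placeholder for a perception call returning an updated object pose.
        return {'name': obj_name, 'pose': (0.5, 0.1, 0.8)}

    def pick_up(obj, grasp):
        # Placeholder for the real manipulation call; it would raise
        # ObjectNotGraspedError if the force-torque sensor reports an empty grip.
        print('picking up %s with a %s grasp at %s' % (obj['name'], grasp, obj['pose']))

    def pick_up_with_recovery(obj_name, grasps=('front', 'top', 'side')):
        obj = perceive(obj_name)
        for grasp in grasps:
            try:
                return pick_up(obj, grasp)
            except ObjectNotGraspedError:
                obj = perceive(obj_name)  # recovery: refresh the belief state
        raise ObjectNotGraspedError('all grasp orientations failed for ' + obj_name)

    pick_up_with_recovery('milk_box')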

Perception:
The perception framework processes the visual data received from the robot's camera sensors and establishes the communication between high-level planning and visual perception. RoboKudo is an open-source robotic perception framework based on the principles of unstructured information management. The framework allows for the creation of perception systems that employ an ensemble-of-experts approach and treat perception as a question-answering problem. Based on the queries issued to the system, a perception plan is created, consisting of a list of experts to be executed. The perception experts generate object hypotheses, annotate these hypotheses, and test and rank them in order to come up with the best possible interpretation of the data and the answer to the query. The methods provided by TUW are integrated as experts into the RoboKudo perception framework, allowing reasoning about the perception results and communication with the high-level planning system CRAM to close the perception-action loop.
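
The sketch below shows the ensemble-of-experts idea schematically: a query selects experts, each expert annotates shared object hypotheses, and the best-scored matching hypothesis answers the query. This is not the actual RoboKudo API; all names and values are illustrative.

    # Schematic sketch of an ensemble-of-experts perception pipeline
    # (illustrative names; not the RoboKudo API).
    def cluster_expert(scene):
        # Segment the input into object hypotheses (stubbed here).
        return [{'id': 0, 'annotations': {}}, {'id': 1, 'annotations': {}}]

    def color_expert(hypotheses):
        for h in hypotheses:
            h['annotations']['color'] = 'red' if h['id'] == 0 else 'blue'
        return hypotheses

    def classification_expert(hypotheses):
        for h in hypotheses:
            h['annotations']['class'] = 'cup'
            h['annotations']['score'] = 0.9 - 0.2 * h['id']
        return hypotheses

    def answer_query(query, scene):
        # Run the expert ensemble, then rank hypotheses that match the query.
        hypotheses = cluster_expert(scene)
        for expert in (color_expert, classification_expert):
            hypotheses = expert(hypotheses)
        matches = [h for h in hypotheses
                   if all(h['annotations'].get(k) == v for k, v in query.items())]
        return max(matches, key=lambda h: h['annotations']['score'], default=None)

    print(answer_query({'class': 'cup', 'color': 'red'}, scene=None))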

NLP:
The NLP (Natural Language Processing) team focuses on transforming spoken language into text using tools like Whisper and Rasa. Audio is captured by the robot and filtered for keywords essential to the Knowledge and Planning subsystems. Achieving accurate text recognition under diverse conditions, such as different accents or loud ambient noise, can be challenging. The central goal is extracting semantics from the given input, recognizing elements such as locations, persons, and objects. NLP faces the challenge of understanding user intent, a crucial aspect of precise task execution, ensuring effective human-robot communication in dynamic environments.
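
A minimal sketch of this speech-to-intent pipeline follows, assuming the openai-whisper package and a locally running Rasa server exposing its standard REST API; the audio file name and server URL are assumptions.

    # Sketch: speech -> text with Whisper, then text -> intent/entities with
    # a local Rasa server (assumptions: openai-whisper installed; Rasa
    # running at localhost:5005; 'command.wav' is a recorded utterance).
    import whisper
    import requests

    model = whisper.load_model('base')        # pretrained Whisper model
    result = model.transcribe('command.wav')  # speech recognition
    text = result['text']

    # Rasa's /model/parse endpoint returns the intent and entities
    # (e.g. locations, persons, objects) for Knowledge and Planning.
    response = requests.post('http://localhost:5005/model/parse',
                             json={'text': text})
    parsed = response.json()
    print(parsed['intent']['name'], [e['entity'] for e in parsed['entities']])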

Relevant Publications

Michael Beetz, Ferenc Balint-Benczedi, Nico Blodow, Daniel Nyga, Thiemo Wiedemeyer, and Zoltan-Csaba Marton. RoboSherlock: Unstructured Information Processing for Robot Perception. In Proc. of IEEE ICRA, pages 1549–1556, 2015.

Michael Beetz, Daniel Beßler, Andrei Haidu, Mihai Pomarlan, Asil Kaan Bozcuoglu, and Georg Bartels. Knowrob 2.0 – a 2nd generation knowledge processing framework for cognition-enabled robotic agents. In Proc. of IEEE ICRA, pages 512–519, 2018.

Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski, and Michael Beetz. The robot household marathon experiment. In Proc. of IEEE ICRA, pages 9382-9388, 2021.

Michael Beetz, Lorenz Mösenlechner, and Moritz Tenorth. CRAM – A Cognitive Robot Abstract Machine for Everyday Manipulation in Human Environments. In Proc. of IEEE/RSJ IROS, pages 1012–1017, 2010.

Simon Stelter, Georg Bartels, and Michael Beetz. An open-source motion planning framework for mobile manipulators using constraint-based task space control with linear MPC. In Proc. of IEEE/RSJ IROS, 2022.

Marco Costanzo, Simon Stelter, Ciro Natale, Salvatore Pirozzi, Georg Bartels, Alexis Maldonado, and Michael Beetz. Manipulation Planning and Control for Shelf Replenishment. IEEE Robotics and Automation Letters, 5(2):1595–1601, 2020.

Edith Langer, Timothy Patten, and Markus Vincze. Where does it belong? Autonomous object mapping in open-world settings. Frontiers in Robotics and AI, 9, 2022.

Dominik Bauer, Timothy Patten, and Markus Vincze. VeREFINE: Integrating object pose verification with physics-guided iterative refinement. IEEE Robotics and Automation Letters, 5(3):4289–4296, 2020.




