===== Dr.-Ing. Daniel Beßler ===== ~~NOTOC~~

^ {{:wiki:daniel.jpg?0x180}} ||||
|::: ||Research Staff\\ \\ ||
|:::|Tel: |+49 -421 218 64016|
|:::|Fax: |+49 -421 218 64047|
|:::|Room: |TAB 1.56|
|:::|Mail: |danielb@cs.uni-bremen.de|
|:::| ||

==== About ====

My work aims to bridge the gap between theoretical AI research and real-world robotic applications, **enabling more autonomous and semantically aware robotic systems**. With expertise in knowledge-based reasoning, simulation, and interactive AI, I am passionate about developing intelligent systems that can understand, learn, and adapt in dynamic environments, while also leveraging game technology and real-time graphics to enhance simulation and interaction capabilities.

Beyond research and teaching, I have actively **contributed to international standardization efforts**, particularly as a co-author of the //IEEE 1872.2// standard ontology for autonomous robotics. I have also **organized multiple international workshops** ([[https://robontics.github.io|RobOntics]], [[https://wosra.github.io/wosra|WOSRA]]) focused on knowledge representation, decision-making in social settings, and cognitive robotics.

My research focuses on **knowledge representation**, **cognitive robotics**, and **hybrid reasoning**, integrating symbolic and sub-symbolic approaches to enhance robotic decision-making. I lead the development of [[https://github.com/knowrob/knowrob|KnowRob 2.0]], a widely used knowledge processing framework for service robots, and designed its hybrid reasoning architecture, which enables the flexible integration of multiple reasoning methods. I was also the lead developer of [[https://github.com/ease-crc/openease|openEASE]], a cloud-based platform that provides access to robot experience data, supporting data-driven research and machine learning applications.
My research contributions also extend to semantic scene representation and activity recognition, particularly in virtual reality environments, where I have worked on human modeling, action recognition, and augmenting USD scene graphs with semantic information. I am also one of the lead authors of the [[https://github.com/ease-crc/soma|SOMA]] ontology, which provides a formal semantic model for robotic activities and environments, supporting reasoning and interoperability in autonomous systems.

My research has been **published in leading AI and robotics conferences**, including ICRA, ECAI, and AAMAS. Notably, my work on KnowRob 2.0, published at ICRA, has been highly cited and was selected as one of the most important publications by the //IEEE Technical Committee on Cognitive Robotics//. I also co-led the writing of a journal article in KER with significant citation impact, and was the lead author of a paper nominated for Best Paper at AAMAS.

Throughout my career, I have played a key role in the **CRC EASE** initiative, contributing to research funding efforts and shaping its knowledge-driven robotics agenda. I have also been involved in multiple other research projects, collaborating on the integration of knowledge-based AI with perception, action, planning, manipulation, and human-robot interaction. My work on semantic modeling has helped bridge robotics with cognitive science and human activity understanding. Beyond that, I have a strong interest in game technology and real-time graphics, where I explore GPU programming and interactive AI applications, particularly in relation to AI and robotics.

In addition, I have over four years of **teaching experience**, having taught courses on artificial intelligence, robotics, and logic programming, and six years of experience supervising student groups in hands-on software projects, such as the //RoboCup@Home// competition and VR-based human modeling using Unreal Engine.
These experiences have allowed me to mentor students in both theoretical foundations and practical implementations.

==== Dissertation ====

[[https://media.suub.uni-bremen.de/handle/elib/6248|{{:team:danielb-diss-cover.png?200 |}}]] //Abstract// -- It has been demonstrated many times that modern robotic platforms can generate competent bodily behavior comparable to the level of humans. However, the implementation of such behavior requires considerable programming effort, and is often not feasible for the general case, i.e., regardless of the situational context in which the activity is performed. Furthermore, research and industry have an enormous need for intuitive robot programming. This is due to the high complexity of realizing an integrated robot control system and of adapting it to other robots, tasks, and environments. The challenge is how to realize a robot control program that can generate competent behavior depending on the characteristics of the robot, the task it executes, and the environment in which it operates. One way to approach this problem is to specialize the control program through the context-specific application of abstract knowledge. In this work, it will be investigated how abstract knowledge, required for flexible and competent robot task execution, can be represented using a formal ontology. To this end, a domain ontology of robot activity context will be proposed. Using this ontology, robots can infer how tasks can be accomplished through movements and interactions with the environment, and how they can improvise to a certain extent to take advantage of action possibilities that objects in their environment provide. Accordingly, it will be shown that parts of the context-specific information required for flexible task execution can be derived from broadly applicable knowledge represented in an ontology.
Furthermore, it will be shown that the domain vocabulary yields additional benefits for the representation of knowledge gained through experimentation and simulation. Such knowledge can be leveraged for learning, or be used to inspect the robot's behavior; the latter will be demonstrated in this work by means of a case study.

==== Publications ====

bibfiles/allpublications.bib Beßler