Learning to Execute Robot Navigation Plans
by Thorsten Belker and Michael Beetz
Abstract:
Most state-of-the-art navigation systems for autonomous service robots decompose navigation into global navigation planning and local reactive navigation. While the methods for navigation planning and local navigation are well understood, the plan execution problem, that is, how to generate and parameterize local navigation tasks from a given navigation plan, is largely unsolved. This article describes how a robot can autonomously learn to execute navigation plans. We formalize the problem as a Markov Decision Problem (MDP), discuss how it can be simplified to make its solution feasible, and describe how the robot can acquire the necessary action models. We show, both in simulation and on an RWI B21 mobile robot, that the learned models are able to produce competent navigation behavior.
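The abstract names the MDP formalization without spelling it out. As a rough sketch in standard textbook notation (the concrete state, action, and reward choices below are illustrative assumptions, not taken from the abstract), plan execution amounts to selecting, in each state, the local navigation action with the highest expected long-term value:

\[
V^*(s) = \max_{a \in A}\Big[ R(s,a) + \gamma \sum_{s' \in S} T(s' \mid s, a)\, V^*(s') \Big],
\qquad
\pi^*(s) = \operatorname*{arg\,max}_{a \in A}\Big[ R(s,a) + \gamma \sum_{s' \in S} T(s' \mid s, a)\, V^*(s') \Big]
\]

Here $S$ could contain the robot's poses along the navigation plan, $A$ the parameterized local navigation tasks, $T$ the learned action models mentioned in the abstract, and $R$ a cost such as execution time.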
Reference:
Thorsten Belker and Michael Beetz, "Learning to Execute Robot Navigation Plans", In Proceedings of the 25th German Conference on Artificial Intelligence (KI 01), Springer-Verlag, Wien, Austria, 2001.
Bibtex Entry:
@InProceedings{Bel01Lea,
  author    = {Thorsten Belker and Michael Beetz},
  title     = {Learning to Execute Robot Navigation Plans},
  booktitle = {Proceedings of the 25th German Conference on Artificial Intelligence (KI 01)},
  year      = {2001},
  address   = {Wien, Austria},
  publisher = {Springer-Verlag},
  bib2html_pubtype = {Refereed Conference Paper},
  bib2html_rescat = {Robot Learning, Plan-based Robot Control},
  bib2html_groups   = {IAS},
  bib2html_funding  = {ignore},
  bib2html_keywords = {Robot, Learning},
  abstract = {Most state-of-the-art navigation systems for autonomous service robots decompose navigation into
              global navigation planning and local reactive navigation. While the methods for navigation planning
              and local navigation are well understood, the plan execution problem, that is, how to generate
              and parameterize local navigation tasks from a given navigation plan, is largely unsolved.
              This article describes how a robot can autonomously learn to execute navigation plans. We formalize
              the problem as a Markov Decision Problem (MDP), discuss how it can be simplified to make its
              solution feasible, and describe how the robot can acquire the necessary action models. We show,
              both in simulation and on an RWI B21 mobile robot, that the learned models are able to produce
              competent navigation behavior.}
}