Accurate Human Motion Capture Using an Ergonomics-Based Anthropometric Human Model
by Jan Bandouch, Florian Engstler and Michael Beetz
Abstract:
In this paper we present our work on markerless model-based 3D human motion capture using multiple cameras. We use an industry-proven anthropometric human model that was designed with ergonomic considerations in mind. The outer surface consists of a precise yet compact 3D surface mesh that is mostly rigid at the body-part level, apart from some small but important torsion deformations. The benefits are the ability to capture a wide range of possible human appearances with high accuracy while keeping the model simple to use and computationally efficient. We have introduced optimizations such as caching into the model to improve its performance in tracking applications. Force and comfort measures available within the model provide further opportunities for future research. 3D articulated pose estimation is performed in a Bayesian framework, using a set of hierarchically coupled local particle filters for tracking. This makes it possible to sample efficiently from the high-dimensional space of articulated human poses without constraining the allowed movements. Sequences of upper-body as well as full-body motions tracked with three cameras show promising results. Despite the high dimensionality of our model (51 DOF), we succeed at tracking using only silhouette overlap as the weighting function, thanks to the precise outer appearance of our model and the hierarchical decomposition.
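To make the weighting scheme mentioned in the abstract more concrete, the following is a minimal sketch of one predict/weight/resample cycle of a particle filter that scores pose hypotheses by silhouette overlap across multiple cameras. It is an illustrative assumption, not the authors' implementation: all names (render_silhouette, NUM_PARTICLES, NOISE_STD, the IoU-style overlap score, the Gaussian diffusion model) are hypothetical stand-ins.

import numpy as np

NUM_PARTICLES = 200        # number of pose hypotheses per filter (assumed)
NOISE_STD = 0.05           # assumed diffusion noise for joint angles (radians)

def silhouette_overlap(pred_mask: np.ndarray, obs_mask: np.ndarray) -> float:
    """Overlap between a rendered model silhouette and the observed foreground
    silhouette; intersection-over-union is used here as a simple stand-in."""
    inter = np.logical_and(pred_mask, obs_mask).sum()
    union = np.logical_or(pred_mask, obs_mask).sum()
    return inter / union if union > 0 else 0.0

def particle_filter_step(particles, obs_masks, render_silhouette, rng):
    """One predict/weight/resample cycle for a single kinematic chain.
    particles: (N, D) array of joint-angle hypotheses for this chain.
    obs_masks: list of observed binary silhouettes, one per camera.
    render_silhouette: callable(pose, cam_idx) -> binary mask (assumed)."""
    # Predict: diffuse particles with Gaussian noise (simple motion model).
    particles = particles + rng.normal(0.0, NOISE_STD, particles.shape)

    # Weight: product of silhouette-overlap scores over all camera views.
    weights = np.ones(len(particles))
    for i, pose in enumerate(particles):
        for cam_idx, obs in enumerate(obs_masks):
            weights[i] *= silhouette_overlap(render_silhouette(pose, cam_idx), obs)
    weights /= weights.sum()

    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

In the hierarchical scheme described in the abstract, one such local filter would track a root chain (e.g. the torso) first, and the filters for dependent chains would then be conditioned on that estimate; the sketch above covers only the single-chain update.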
Reference:
Jan Bandouch, Florian Engstler and Michael Beetz, "Accurate Human Motion Capture Using an Ergonomics-Based Anthropometric Human Model", In Proceedings of the Fifth International Conference on Articulated Motion and Deformable Objects (AMDO), 2008.
Bibtex Entry:
@InProceedings{bandouch08amdo,
  author = {Jan Bandouch and Florian Engstler and Michael Beetz},
  title =	 {Accurate Human Motion Capture Using an Ergonomics-Based Anthropometric Human Model},
  year =	 {2008},
  booktitle =	 {Proceedings of the Fifth International Conference on Articulated Motion and Deformable Objects (AMDO)},
  bib2html_pubtype ={Conference Paper},
  bib2html_rescat  = {Perception},
  bib2html_groups = {Memoman},
  bib2html_funding = {CoTeSys},
  bib2html_domain  = {Assistive Household},
  abstract =     {In this paper we present our work on markerless model-based 3D
                  human motion capture using multiple cameras. We use an industry
                  proven anthropometric human model that was modeled taking ergonomic
                  considerations into account. The outer surface consists of a precise
                  yet compact 3D surface mesh that is mostly rigid on body part level
                  apart from some small but important torsion deformations. Benefits
                  are the ability to capture a great amount of possible human
                  appearances with high accuracy while still having a simple to use
                  and computationally efficient model. We have introduced special
                  optimizations such as caching into the model to improve its
                  performance in tracking applications. Available force and comfort
                  measures within the model provide further opportunities for future
                  research.
                  3D articulated pose estimation is performed in a Bayesian framework,
                  using a set of hierarchically coupled local particle filters for
                  tracking. This makes it possible to sample efficiently from the high
                  dimensional space of articulated human poses without constraining
                  the allowed movements. Sequences of tracked upper-body as well as
                  full-body motions captured by three cameras show promising results.
                  Despite the high dimensionality of our model (51 DOF) we succeed
                  at tracking using only silhouette overlap as weighting function
                  due to the precise outer appearance of our model and the
                  hierarchical decomposition.}
}