Human Action Recognition using Global Point Feature Histograms and Action Shapes
by Radu Bogdan Rusu, Jan Bandouch, Franziska Meier, Irfan Essa and Michael Beetz
Abstract:
This article investigates the recognition of human actions from 3D point clouds that encode the motions of people acting in sensor-distributed indoor environments. The data streams are time sequences of silhouettes extracted from cameras in the environment. From the 2D silhouette contours we generate space-time streams by continuously aligning and stacking the contours along the time axis as a third spatial dimension. The space-time stream of an observation sequence is segmented into parts corresponding to subactions using a pattern-matching technique based on suffix trees and interval scheduling. The segmented space-time shapes are then processed by treating them as 3D point clouds and estimating global point feature histograms for them. The resulting models are clustered using statistical analysis, and our experimental results indicate that the presented methods robustly distinguish different action classes. This holds despite the large intra-class variance in the recorded datasets, caused by different persons performing the actions at different time intervals.
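The core stacking step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: each frame contributes its 2D silhouette contour points, and the frame index serves as the third spatial coordinate, producing the 3D "action shape" point cloud. The contour data and function name are hypothetical.

```python
def stack_contours(contour_sequence):
    """Stack a time sequence of 2D contours into one 3D point cloud.

    contour_sequence: list of frames; each frame is a list of (x, y) points.
    Returns a list of (x, y, t) points, where t is the frame index used
    as a third spatial dimension, as described in the abstract.
    """
    cloud = []
    for t, contour in enumerate(contour_sequence):
        for (x, y) in contour:
            cloud.append((x, y, float(t)))
    return cloud

# Toy example: a square silhouette contour observed over three frames.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
cloud = stack_contours([square, square, square])
```

In the paper, the resulting point clouds are then summarized with global point feature histograms before clustering; that descriptor step is omitted here.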
Reference:
Radu Bogdan Rusu, Jan Bandouch, Franziska Meier, Irfan Essa and Michael Beetz, "Human Action Recognition using Global Point Feature Histograms and Action Shapes", In Advanced Robotics, Robotics Society of Japan (RSJ), 2009.
Bibtex Entry:
@Article{Rusu09RSJ-AR,
  author   = {Radu Bogdan Rusu and Jan Bandouch and Franziska Meier and Irfan Essa and Michael Beetz},
  title    = {{Human Action Recognition using Global Point Feature Histograms and Action Shapes}},
  journal  = {Advanced Robotics},
  publisher = {Robotics Society of Japan (RSJ)},
  year     = {2009},
  bib2html_pubtype = {Journal},
  bib2html_rescat  = {Perception},
  bib2html_groups  = {Memoman, EnvMod},
  bib2html_funding = {CoTeSys},
  bib2html_domain  = {Assistive Household},
  abstract  = { This article investigates the recognition of human actions from 3D point clouds
  that encode the motions of people acting in sensor-distributed indoor environments.
  The data streams are time sequences of silhouettes extracted from cameras in the environment.
  From the 2D silhouette contours we generate space-time streams by continuously aligning and
  stacking the contours along the time axis as a third spatial dimension.
  The space-time stream of an observation sequence is segmented into parts
  corresponding to subactions using a pattern-matching technique based
  on suffix trees and interval scheduling.  The segmented space-time shapes
  are then processed by treating them as 3D point clouds and estimating global
  point feature histograms for them. The resulting models are clustered using
  statistical analysis, and our experimental results indicate that the presented
  methods robustly distinguish different action classes. This holds despite the large
  intra-class variance in the recorded datasets, caused by different persons
  performing the actions at different time intervals.
  }
}