Qi, T.; Xiao, J.; Zhuang, Y.; Zhang, H.; Yang, Xiaosong; Zhang, Jian J.; Feng, Yinfu (2014)
Languages: English
Types: Article
Identifiers: doi:10.1002/cav.1590
Even though there has been explosive growth in motion capture data, efficient and reliable methods for automatically annotating all the motions in a database are still lacking. Moreover, with the growing popularity of mocap devices in home entertainment systems, real-time human motion annotation or recognition is becoming increasingly important. This paper presents a new motion annotation method that achieves both goals at the same time. It uses a probabilistic pose feature based on the Gaussian Mixture Model to represent each pose. After training a clustered pose feature model, a motion clip can be represented as an action string. A dynamic programming-based string matching method is then introduced to compare the differences between action strings. Finally, to meet the real-time requirement, we construct a hierarchical action string structure to quickly label each given action string. The experimental results demonstrate the efficacy and efficiency of our method.
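As a rough illustration of the pipeline described in the abstract, the sketch below (hypothetical code, not the authors' implementation) fits a Gaussian Mixture Model over pose vectors, converts a motion clip into an action string by labelling each frame with its most likely mixture component, and compares two action strings with a standard dynamic-programming edit distance. The component count, the diagonal covariance, and the plain edit distance are all simplifying assumptions made for brevity.

```python
# Hypothetical sketch of a GMM-based pose feature and action-string matching,
# loosely following the pipeline described in the abstract (not the authors' code).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pose_model(poses, n_clusters=8):
    """Fit a GMM over pose vectors; each component acts as a pose 'letter'."""
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag", random_state=0)
    gmm.fit(poses)
    return gmm

def pose_feature(gmm, pose):
    """Probabilistic pose feature: posterior responsibilities over the GMM components."""
    return gmm.predict_proba(pose.reshape(1, -1)).ravel()

def to_action_string(gmm, clip):
    """Represent a motion clip as an action string (most likely component per frame)."""
    labels = gmm.predict(clip)
    # Collapse consecutive repeats so the string encodes pose transitions, not frame counts.
    return [l for i, l in enumerate(labels) if i == 0 or l != labels[i - 1]]

def string_distance(a, b):
    """Classic DP edit distance between two action strings (stand-in for the paper's matcher)."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    dp[:, 0] = np.arange(len(a) + 1)
    dp[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i, j] = min(dp[i - 1, j] + 1, dp[i, j - 1] + 1, dp[i - 1, j - 1] + cost)
    return dp[len(a), len(b)]

if __name__ == "__main__":
    # Synthetic pose data only, to show how the pieces fit together.
    rng = np.random.default_rng(0)
    db = rng.normal(size=(500, 30))          # 500 frames, 30 degrees of freedom
    gmm = fit_pose_model(db)
    clip_a, clip_b = db[:100], db[100:220]
    print(string_distance(to_action_string(gmm, clip_a), to_action_string(gmm, clip_b)))
```

Note that the paper's method additionally builds a hierarchical action-string structure to label queries in real time; the plain edit distance above only stands in for that matcher.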
Yinfu Feng received his B.S. in Information Security from the University of Electronic Science and Technology of China, Chengdu, China, in July 2009. He is now a fourth-year Ph.D. student in Computer Science and Technology at Zhejiang University (ZJU). His current research interests include multimedia analysis and retrieval, computer vision, and machine learning.