Rodriguez, Mario; Orrite, Carlos; Medrano, Carlos; Makris, Dimitrios (2016)
Publisher: Institute of Electrical and Electronics Engineers
Languages: English
Types: Article
Subjects: computer
  • The references below were discovered through OpenAIRE's pilot algorithms.

    • [1] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, “HMDB: a large video database for human motion recognition,” in IEEE International Conference on Computer Vision (ICCV), 2011.
    • [2] J. C. Niebles, C.-W. Chen, and L. Fei-Fei, “Modeling temporal structure of decomposable motion segments for activity classification,” in European Conference on Computer Vision (ECCV), 2010, pp. 392-405.
    • [3] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri, “Actions as space-time shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 12, pp. 2247-2253, December 2007.
    • [4] C. Schuldt, I. Laptev, and B. Caputo, “Recognizing human actions: a local svm approach,” in International Conference on Pattern Recognition, 2004. [Online]. Available: http://www.nada.kth.se/cvap/actions/
    • [5] D. Weinland, R. Ronfard, and E. Boyer, “Free viewpoint action recognition using motion history volumes,” Computer Vision and Image Understanding, vol. 104, no. 2-3, pp. 249-257, 2006. [Online]. Available: http://4drepository.inrialpes.fr/public/viewgroup/6
    • [6] H. J. Seo and P. Milanfar, “Action recognition from one example,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 5, pp. 867-882, 2011. [Online]. Available: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2010.156
    • [7] Y. Yang, I. Saleemi, and M. Shah, “Discovering motion primitives for unsupervised grouping and one-shot learning of human actions, gestures, and expressions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 7, pp. 1635-1648, Jul. 2013.
    • [8] C. Orrite, M. Rodriguez, and M. Montañés, “One-sequence learning of human actions,” in Human Behavior Understanding, A. Salah and B. Lepri, Eds., vol. 7065. Springer Berlin / Heidelberg, 2011, pp. 40-51.
    • [9] M. Rodriguez, C. Medrano, E. Herrero, and C. Orrite, “Transfer learning of human poses for action recognition,” in Human Behavior Understanding, 2013.
    • [10] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, “Speaker verification using adapted Gaussian mixture models,” Digital Signal Processing, vol. 10, no. 1-3, pp. 19-41, 2000.
    • [11] H. Wang and C. Schmid, “Action recognition with improved trajectories,” in IEEE International Conference on Computer Vision (ICCV), 2013.
    • [12] I. Laptev, M. Marszałek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2008.
    • [13] N. Dalal, B. Triggs, and C. Schmid, “Human detection using oriented histograms of flow and appearance,” in European Conference on Computer Vision (ECCV), 2006. [Online]. Available: http://lear.inrialpes.fr/pubs/2006/DTS06
    • [14] T. Shinozaki and M. Ostendorf, “Cross-validation and aggregated EM training for robust parameter estimation,” Computer Speech & Language, vol. 22, no. 2, pp. 185-195, 2008. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0885230807000472
    • [15] L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, Feb. 1989.
    • [16] P. Turaga, R. Chellappa, V. Subrahmanian, and O. Udrea, “Machine recognition of human activities: A survey,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 18, no. 11, pp. 1473-1488, Nov 2008.
    • [17] R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976-990, Jun. 2010. [Online]. Available: http://dx.doi.org/10.1016/j.imavis.2009.11.014
    • [18] D. Weinland, R. Ronfard, and E. Boyer, “A survey of vision-based methods for action representation, segmentation and recognition,” Computer Vision and Image Understanding, vol. 115, no. 2, pp. 224-241, Feb. 2011. [Online]. Available: http://dx.doi.org/10.1016/j.cviu.2010.10.002
    • [19] A. Bobick and J. Davis, “Real-time recognition of activity using temporal templates,” in Applications of Computer Vision, 1996. WACV '96., Proceedings 3rd IEEE Workshop on, Dec 1996, pp. 39-42.
    • [20] A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 3, pp. 257-267, Mar. 2001. [Online]. Available: http://dx.doi.org/10.1109/34.910878
    • [21] A. Yilmaz and M. Shah, “Actions sketch: a novel action representation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, June 2005, pp. 984-989.
    • [22] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004. [Online]. Available: http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94
    • [23] I. Laptev, “On space-time interest points,” International Journal of Computer Vision (IJCV), vol. 64, pp. 107-123, 2005.
    • [24] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie, “Behavior recognition via sparse spatio-temporal features,” in Proceedings of the 14th International Conference on Computer Communications and Networks, ser. ICCCN '05. Washington, DC, USA: IEEE Computer Society, 2005, pp. 65-72. [Online]. Available: http://dl.acm.org/citation.cfm?id=1259587.1259830
    • [25] O. Kliper-Gross, Y. Gurovich, T. Hassner, and L. Wolf, “Motion interchange patterns for action recognition in unconstrained videos,” in European Conference on Computer Vision (ECCV), 2012.
    • [26] D. Oneata, J. Verbeek, and C. Schmid, “Action and event recognition with fisher vectors on a compact feature set,” in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1817-1824.
    • [27] H. Wang, A. Kläser, C. Schmid, and C.-L. Liu, “Dense trajectories and motion boundary descriptors for action recognition,” International Journal of Computer Vision (IJCV), vol. 103, pp. 60-79, 2013.
    • [28] D. Batra, T. Chen, and R. Sukthankar, “Space-time shapelets for action recognition,” in Motion and video Computing, 2008. WMVC 2008. IEEE Workshop on, Jan 2008, pp. 1-6.
    • [29] A. Veeraraghavan, A. Roy-Chowdhury, and R. Chellappa, “Matching shape sequences in video with applications in human movement analysis,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 12, pp. 1896-1909, Dec 2005.
    • [30] B. Yao and S.-C. Zhu, “Learning deformable action templates from cluttered videos,” in Computer Vision, 2009 IEEE 12th International Conference on, Sept 2009, pp. 1507-1514.
    • [31] C. Sminchisescu, A. Kanaujia, and D. Metaxas, “Conditional models for contextual human motion recognition,” Comput. Vis. Image Underst., vol. 104, no. 2, pp. 210-220, Nov. 2006. [Online]. Available: http://dx.doi.org/10.1016/j.cviu.2006.07.014
    • [32] J. D. Lafferty, A. McCallum, and F. C. N. Pereira, “Conditional random fields: Probabilistic models for segmenting and labeling sequence data,” in Proceedings of the Eighteenth International Conference on Machine Learning, ser. ICML '01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2001, pp. 282-289. [Online]. Available: http://dl.acm.org/citation.cfm?id=645530.655813
    • [33] A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell, “Hidden conditional random fields,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, pp. 1848-1852, 2007.
    • [34] X. Feng and P. Perona, “Human action recognition by sequence of movelet codewords,” in 3D Data Processing Visualization and Transmission, 2002. Proceedings. First International Symposium on, 2002, pp. 717-721.
    • [35] W.-L. Lu and J. Little, “Simultaneous tracking and action recognition using the PCA-HOG descriptor,” in Computer and Robot Vision, 2006. The 3rd Canadian Conference on, June 2006.
    • [36] H. Uğuz, A. Öztürk, R. Saraçoğlu, and A. Arslan, “A biomedical system based on fuzzy discrete hidden Markov model for the diagnosis of the brain diseases,” Expert Systems with Applications, vol. 35, no. 3, pp. 1104-1114, Oct. 2008. [Online]. Available: http://dx.doi.org/10.1016/j.eswa.2007.08.006
    • [37] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, October 2010.
    • [38] D. Cook, K. Feuz, and N. Krishnan, “Transfer learning for activity recognition: a survey,” Knowledge and Information Systems, pp. 1-20, 2013. [Online]. Available: http://dx.doi.org/10.1007/s10115-013-0665-3
    • [39] J. Liu, M. Shah, B. Kuipers, and S. Savarese, “Cross-view action recognition via view knowledge transfer,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 3209-3216. [Online]. Available: http://dblp.uni-trier.de/db/conf/cvpr/cvpr2011.html#LiuSKS11
    • [40] W. Bian, D. Tao, and Y. Rui, “Cross-domain human action recognition.” IEEE Transactions on Systems, Man, and Cybernetics. B Cybernetics, vol. 42, no. 2, pp. 298-307, 2012.
    • [41] Y. Zhu, X. Zhao, Y. Fu, and Y. Liu, “Sparse coding on local spatial-temporal volumes for human action recognition,” in Asian Conference on Computer Vision (ACCV), 2011.
    • [42] D. H. Hu, V. W. Zheng, and Q. Yang, “Cross-domain activity recognition via transfer learning,” Pervasive and Mobile Computing, vol. 7, pp. 344-358, June 2011. [Online]. Available: http://dx.doi.org/10.1016/j.pmcj.2010.11.005
    • [43] L. Fei-Fei, R. Fergus, and P. Perona, “One-shot learning of object categories,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 594-611, 2006.
    • [44] L. Cao, Z. Liu, and T. S. Huang, “Cross-dataset action detection,” in The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, San Francisco, CA, USA, 13-18 June 2010, 2010, pp. 1998-2005. [Online]. Available: http://dx.doi.org/10.1109/CVPR.2010.5539875
    • [45] T. P. Minka, “Estimating a Dirichlet distribution,” Tech. Rep., 2009. [Online]. Available: http://research.microsoft.com/~minka
    • [46] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics). Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.
    • [47] S. Oh, A. Hoogs, A. Perera, N. Cuntoor, C.-C. Chen, J. T. Lee, S. Mukherjee, J. K. Aggarwal, H. Lee, L. Davis, E. Swears, X. Wang, Q. Ji, K. Reddy, M. Shah, C. Vondrick, H. Pirsiavash, D. Ramanan, J. Yuen, A. Torralba, B. Song, A. Fong, A. Roy-Chowdhury, and M. Desai, “A large-scale benchmark dataset for event recognition in surveillance video,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Washington, DC, USA, 2011, pp. 3153-3160.
    • [48] J. Liu, J. Luo, and M. Shah, “Recognizing realistic actions from videos “in the wild”,” IEEE International Conference on Computer Vision and Pattern Recognition, 2009. [Online]. Available: http://server.cs.ucf.edu/~vision/projects/liujg/realistic_action_recognition.html
    • [49] L. Liu, L. Shao, X. Zhen, and X. Li, “Learning discriminative key poses for action recognition,” IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 1860-1870, Dec 2013.

