Languages: English
Types: Unknown

Classified by OpenAIRE into

ACM Ref: ComputerApplications_MISCELLANEOUS
We present the first Audio-Visual+ Emotion recognition Challenge and workshop (AV+EC 2015), aimed at comparing multimedia processing and machine learning methods for automatic audio, visual, and physiological emotion analysis. This is the fifth event in the AVEC series, but the first Challenge to bridge audio, video, and physiological data. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing; to bring together the audio, video, and physiological emotion recognition communities; to compare the relative merits of the three approaches to emotion recognition under well-defined and strictly comparable conditions; and to establish to what extent fusion of the approaches is possible and beneficial. This paper presents the challenge, the dataset, and the performance of the baseline system.
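The abstract asks to what extent fusion of the audio, video, and physiological approaches is beneficial. As a minimal sketch of the general idea (not the paper's actual baseline), decision-level fusion combines each modality's frame-level predictions with a weighted average; the modality names and weights below are hypothetical.

```python
import numpy as np

def late_fusion(predictions, weights=None):
    """Fuse per-modality continuous emotion predictions by weighted average.

    predictions: dict mapping modality name -> sequence of frame-level
                 predictions (e.g. arousal), all of equal length.
    weights:     optional dict of per-modality weights; defaults to uniform.
    """
    names = sorted(predictions)
    if weights is None:
        weights = {m: 1.0 for m in names}
    total = sum(weights[m] for m in names)
    # Stack modalities into a (modalities, frames) array, scale each row
    # by its normalized weight, then sum over modalities.
    stacked = np.stack([(weights[m] / total) * np.asarray(predictions[m], float)
                        for m in names])
    return stacked.sum(axis=0)

# Example: three hypothetical modality predictors over four frames.
preds = {
    "audio": [0.2, 0.4, 0.6, 0.8],
    "video": [0.1, 0.3, 0.5, 0.7],
    "physio": [0.3, 0.5, 0.7, 0.9],
}
fused = late_fusion(preds)  # uniform weights -> element-wise mean
```

With uniform weights this reduces to the per-frame mean of the three streams; in practice the weights would be tuned on a development set.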



Funded by projects

  • EC | SEWA
  • EC | MixedEmotions
