Zhang, Li; Gillies, Marco; Dhaliwal, Kulwant; Gower, Amanda; Robertson, Dale; Crabtree, Barry (2009)
Publisher: IOS Press
Languages: English
Types: Article
Subjects: G600, G400, G700
This paper describes a multi-user role-playing environment, e-drama, which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research, e-drama, is a 2D graphical environment in which users are represented by static cartoon figures. An application has been developed to integrate the existing e-drama tool with several new components that support avatars with emotionally expressive behaviours, rendered in a 3D environment. The functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor, EMMA (emotion, metaphor and affect), which plays a bit-part character in the improvisation; and (b) drive the animations of avatars in the user interface via the Demeanour framework, so that their body movements are consistent with the affect they are expressing. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users' sense of presence. Moreover, our system has the potential to extend conventional classroom education for young people with or without learning disabilities by providing efficient, personalised social-skill, language and career training around the clock via role-play, with automatic monitoring.