Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P. (2015)
Publisher: Frontiers Media S.A.
Journal: Frontiers in Psychology
Languages: English
Types: Article
Subjects: human-robot interaction, Psychology, human?robot interaction, social behavior, eye tracking, human–robot interaction, BF1-990, Original Research, social signals, intention recognition, interaction strategies
DDC: 410
We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. The GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was speech recognition. Interestingly, the participants used only a subset of the available information, focusing on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. They also added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.
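The selective-attention strategy the abstract describes — attend to vision-based recognizers when a customer approaches, switch to speech recognition once an order is placed, ignore the remaining sensors, and echo the request back — can be sketched as a toy decision policy. This is purely an illustrative sketch; the function names, stage labels, and signal values below are hypothetical and not taken from the paper's robot bartender system.

```python
# Hypothetical sketch of the participants' strategy: per interaction stage,
# consult only the most relevant recognizer modality and ignore the rest,
# which reduces the risk of acting on noisy sensor data.

def relevant_modality(stage):
    """Which recognizer modality matters most at a given stage (illustrative)."""
    return {"initiation": "vision", "ordering": "speech"}.get(stage, "speech")

def respond(stage, recognizers):
    """Pick a response using only the stage-relevant recognizer output.

    recognizers maps modality names (e.g. "vision", "speech", "distance")
    to their current output; all other modalities are deliberately ignored.
    """
    signal = recognizers.get(relevant_modality(stage))
    if signal is None:
        return None  # no usable evidence: wait rather than act on noise
    if stage == "initiation" and signal == "customer_at_bar":
        return ("speech", "How can I help you?")
    if stage == "ordering":
        # Respond in the customer's modality and add redundancy via an
        # echo question, as the participants did.
        return ("speech", f"One {signal}, is that right?")
    return None

# A noisy distance reading is ignored while an order is being placed:
print(respond("ordering", {"speech": "lemonade", "distance": "far"}))
# → ('speech', 'One lemonade, is that right?')
```

In this sketch the stage determines a single trusted modality up front, so conflicting or erroneous readings from the unused sensors never influence the response, mirroring the finding that participants acted on a small subset of the available recognizers.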

Funded by: EC | JAMES
