Williamson, J.; Murray-Smith, R. (2005)
Publisher: Springer
Languages: English
Types: Article
Subjects: QA75
We present a gestural interface for entering text on a mobile device via continuous movements, with control based on feedback from a probabilistic language model. Text is represented by continuous trajectories over a hexagonal tessellation, and entry becomes a manual control task. The language model is used to infer user intentions and provide predictions about future actions, and the local dynamics adapt to reduce effort in entering probable text. This leads to an interface with a stable layout, aiding user learning, but which appropriately supports the user via the probability model. Experimental results demonstrate that the application of this technique reduces variance in gesture trajectories, and is competitive in terms of throughput for mobile devices. This paper provides a practical example of a user interface making uncertainty explicit to the user, and probabilistic feedback from hypothesised goals has general application in many gestural interfaces, and is well-suited to support multimodal interaction.
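The core idea in the abstract — a language model predicts probable next characters, and the local dynamics adapt so probable text takes less effort — can be sketched in miniature. The paper uses a PPM language model over continuous trajectory dynamics; the sketch below substitutes a toy character bigram model and represents "reduced effort" simply as an enlarged effective target radius per hexagonal cell. The function names and the `base`/`gain` parameters are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter, defaultdict


def train_bigram(text):
    """Count character-to-character transitions in a training corpus.
    (A stand-in for the paper's PPM model.)"""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts


def next_char_probs(counts, context):
    """Estimate P(next char | previous char) from the transition counts.
    Returns an empty dict for an unseen context."""
    seen = counts.get(context)
    if not seen:
        return {}
    total = sum(seen.values())
    return {ch: n / total for ch, n in seen.items()}


def scaled_radii(probs, base=1.0, gain=0.5):
    """Grow each hexagonal cell's effective target radius with its
    predicted probability, so likely continuations need less precise
    movement. `base` and `gain` are hypothetical tuning parameters."""
    return {ch: base * (1.0 + gain * p) for ch, p in probs.items()}


if __name__ == "__main__":
    counts = train_bigram("hello hello")
    probs = next_char_probs(counts, "l")   # after 'l': 'l' and 'o' equally likely
    print(scaled_radii(probs))
```

Because the hexagon layout itself never moves, only the effective target sizes change, which matches the abstract's point about a stable layout that still benefits from the probability model.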
