Fields, Bob; Amaldi, Paolo; Wong, B. L. William; Gill, Satinder (2007)
Publisher: Lawrence Erlbaum Associates, Inc.
Languages: English
Types: Article
A case for evaluating in use and in-situ

Many authors have argued the need for a broader understanding of context and the situatedness of activity when approaching the evaluation of systems. However, prevailing practice often still tends towards attempting to understand the use of designed artefacts by focusing on a core set of tasks that are thought to define the system. A consequence of such focus is that other tasks are considered peripheral and outside the scope of design and evaluation activities. To illustrate the point, consider the experience, familiar to many of us, of being involved in an evaluation activity where participants provide unstructured qualitative feedback. Irrespective of whether the activity is carried out in a laboratory, in a high-fidelity simulation, or in a naturalistic setting, participants will frequently volunteer unsolicited feedback about tasks and goals that were not originally within the ambit of the design activity. This unprompted feedback, we suggest, is a cue for the evaluators to pay attention to the relationship between the tool and the practice in which it will be used. In other words, it is a cue to consider the situations in which the artefact will be used, the tasks and activities that may be affected by the new system, and so on. These are empirical questions that cannot be answered a priori by the development team, whether the evaluation takes place in an "artificial" or a "natural" setting.