Research Article

The Impact of Thinking-Aloud on Usability Inspection

Published: 18 June 2020

Abstract

This study compared the results of a usability inspection conducted under two conditions: an explicit concurrent think-aloud that required explanations, and silent working. Twelve student analysts inspected two travel websites, thinking aloud and working in silence, to produce a set of problem predictions. Overall, the silent working condition produced more initial predictions, but the think-aloud condition yielded a greater proportion of accurate predictions, as revealed by falsification testing. The analysts used a range of problem discovery methods, with system searching favoured in the silent working condition and the more active goal playing favoured in the think-aloud condition. Thinking aloud was also associated with a broader spread of knowledge resources.
