
Backtracking Events as Indicators of Usability Problems in Creation-Oriented Applications

Published: 01 July 2012

Abstract

The diversity of user goals and strategies makes creation-oriented applications such as word processors or photo editors difficult to test comprehensively. Evaluating such applications requires testing a large pool of participants to capture the diversity of experience, but traditional usability testing at that scale can be prohibitively expensive. To address this problem, this article contributes a new usability evaluation method called backtracking analysis, designed to automate the detection and characterization of usability problems in creation-oriented applications. The key insight is that interaction breakdowns in creation-oriented applications often manifest themselves in backtracking operations that can be automatically logged (e.g., undo and erase operations). Backtracking analysis synchronizes these events with contextual data such as screen-capture video, helping the evaluator characterize specific usability problems. The results from three experiments demonstrate that backtracking events can be effective indicators of usability problems in creation-oriented applications, and can yield a cost-effective alternative to traditional laboratory usability testing.



Published in

ACM Transactions on Computer-Human Interaction, Volume 19, Issue 2 (July 2012), 226 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/2240156

      Copyright © 2012 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 1 July 2012
      • Accepted: 1 March 2012
      • Revised: 1 December 2011
      • Received: 1 July 2011
Published in TOCHI Volume 19, Issue 2


      Qualifiers

      • research-article
      • Research
      • Refereed
