UiLab, a Workbench for Conducting and Reproducing Experiments in GUI Visual Design

Published: 29 May 2021

Abstract

With the ever-increasing number and variety of devices, the study of the visual design of their Graphical User Interfaces (GUIs) grows in importance and scope, particularly for newer device classes such as smartphones, tablets, and large screens. Conducting a visual design experiment typically requires defining and building a GUI dataset at different resolutions for different devices, computing visual design measures for the various configurations, and analyzing the results. This workflow is time- and resource-consuming, which limits its reproducibility. To address this problem, we present UiLab, a cloud-based workbench that parameterizes the settings for conducting an experiment on the visual design of GUIs, facilitates the design of such experiments by automating some workflow stages, and fosters their reproduction by automating their deployment. Based on requirements elicited for UiLab, we define its conceptual model to delineate the service boundaries of the software architecture supporting the new workflow. We exemplify it with a system walkthrough and assess its impact on experiment reproducibility in terms of design and development time saved with respect to a classical workflow. Finally, we discuss the potential benefits of this workbench for reproducing experiments in GUI visual design, as well as remaining shortcomings that open future avenues. We publicly release the UiLab source code in a GitHub repository.
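To make the "computing visual design measures" stage of the workflow concrete, the following is a minimal, hypothetical sketch (not the UiLab implementation) of one classic measure in the style of Ngo et al.'s interface balance: it approximates visual weight by pixel darkness in a grayscale screenshot and scores how evenly that weight is distributed left/right and top/bottom. The image, the weight model, and the metric name are illustrative assumptions.

```python
import numpy as np

def balance_metric(img: np.ndarray) -> float:
    """Balance score in [0, 1] for a grayscale screenshot (0=black, 255=white).

    Visual weight is approximated by pixel darkness (a simplifying
    assumption); 1.0 means the weight is perfectly balanced both
    horizontally and vertically.
    """
    weight = 255.0 - img.astype(float)  # darker pixels carry more visual weight
    h, w = weight.shape
    left, right = weight[:, : w // 2].sum(), weight[:, (w + 1) // 2:].sum()
    top, bottom = weight[: h // 2, :].sum(), weight[(h + 1) // 2:, :].sum()

    def side_balance(a: float, b: float) -> float:
        m = max(a, b)
        return 1.0 if m == 0 else 1.0 - abs(a - b) / m

    # Average the horizontal and vertical balance components.
    return (side_balance(left, right) + side_balance(top, bottom)) / 2.0

# A synthetic "screenshot": one dark widget centered on a white canvas.
screen = np.full((100, 100), 255, dtype=np.uint8)
screen[40:60, 40:60] = 0
print(balance_metric(screen))  # centered widget → score 1.0
```

In a workbench like UiLab, measures of this kind would be computed over every GUI in the dataset at each device resolution, with the scores exported for statistical analysis.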

