Research Article | Public Access

Image features influence reaction time: a learned probabilistic perceptual model for saccade latency

Published: 22 July 2022

Abstract

We ask and answer an essential question: how quickly do we react after observing a displayed visual target? To this end, we present psychophysical studies that characterize the remarkable disconnect between human saccadic behaviors and spatial visual acuity. Building on the results of our studies, we develop a perceptual model to predict temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Specifically, we implement a neurologically-inspired probabilistic model that mimics the accumulation of confidence that leads to a perceptual decision. We validate our model with a series of objective measurements and user studies using an eye-tracked VR display. The results demonstrate that our model's predictions are in statistical alignment with real-world human behavior. Further, we establish that many sub-threshold image modifications commonly introduced in graphics pipelines may significantly alter human reaction timing, even if the differences are visually undetectable. Finally, we show that our model can serve as a metric to predict and alter the reaction latency of users in interactive computer graphics applications, and may thus improve gaze-contingent rendering, the design of virtual experiences, and player performance in e-sports. We illustrate this with two examples: estimating competition fairness in a video game with two different team colors, and tuning display viewing distance to minimize player reaction time.
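The "accumulation of confidence" mechanism the abstract describes belongs to the general family of accumulation-to-threshold models of reaction time (e.g. LATER and drift-diffusion models). As an illustration of that general idea only — the function name and all parameter values below are hypothetical and are not the paper's fitted model — a minimal sketch:

```python
import numpy as np

def simulate_saccade_latencies(n_trials, mu_rate=5.0, sigma_rate=1.0,
                               threshold=1.0, t0=0.05, seed=0):
    """Accumulation-to-threshold sketch: on each trial, confidence rises
    linearly at a rate drawn from a normal distribution, and a saccade is
    triggered when the accumulated confidence crosses the threshold.
    Latency = non-decision time + threshold / rate (in seconds).
    All parameters here are illustrative, not fitted to data."""
    rng = np.random.default_rng(seed)
    rates = rng.normal(mu_rate, sigma_rate, size=n_trials)
    rates = np.maximum(rates, 1e-3)  # guard against non-positive rates
    return t0 + threshold / rates

latencies = simulate_saccade_latencies(10_000)
print(f"median latency: {np.median(latencies) * 1000:.0f} ms")
```

Because latency is the reciprocal of a normally distributed rate, the simulated distribution is right-skewed, matching the characteristic shape of empirical saccadic-latency histograms; image statistics would enter such a model by modulating the mean accumulation rate.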


Supplemental Material

- 144-103-supp-video.mp4 (supplemental material)
- 3528223.3530055.mp4



Published in

ACM Transactions on Graphics, Volume 41, Issue 4 (July 2022), 1978 pages.
ISSN: 0730-0301 · EISSN: 1557-7368 · DOI: 10.1145/3528223

        Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States
