Abstract
We ask and answer an essential question: how quickly do we react after observing a displayed visual target? To this end, we present psychophysical studies that characterize the remarkable disconnect between human saccadic behaviors and spatial visual acuity. Building on the results of these studies, we develop a perceptual model that predicts temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Specifically, we implement a neurologically inspired probabilistic model that mimics the accumulation of confidence leading to a perceptual decision. We validate the model with a series of objective measurements and user studies using an eye-tracked VR display. The results demonstrate that our model's predictions are in statistical alignment with real-world human behavior. Further, we establish that many sub-threshold image modifications commonly introduced in graphics pipelines can significantly alter human reaction timing, even when the differences are visually undetectable. Finally, we show that our model can serve as a metric to predict and alter the reaction latency of users in interactive computer graphics applications, and thus may improve gaze-contingent rendering, the design of virtual experiences, and player performance in e-sports. We illustrate this with two examples: estimating competition fairness in a video game with two different team colors, and tuning display viewing distance to minimize player reaction time.
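The confidence-accumulation mechanism described above can be illustrated with a minimal LATER-style sketch (Carpenter's "linear approach to threshold with ergodic rate" model): on each trial a decision signal rises linearly toward a fixed threshold, with the rise rate drawn from a normal distribution, so latency is threshold divided by rate. This is a generic illustration of the model family, not the paper's fitted model; the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_saccade_latencies(n_trials=10_000, threshold=1.0,
                               mean_rate=5.0, rate_sd=1.0, seed=0):
    """Simulate saccadic latencies under a LATER-style accumulation model.

    Each trial draws a rise rate r ~ N(mean_rate, rate_sd); the decision
    signal crosses the threshold at t = threshold / r, which yields the
    characteristic right-skewed (reciprocal-normal) latency distribution.
    All parameters are illustrative, in arbitrary units with time in seconds.
    """
    rng = np.random.default_rng(seed)
    rates = rng.normal(mean_rate, rate_sd, n_trials)
    rates = rates[rates > 0]  # discard trials where the signal never rises
    return threshold / rates

# A stronger stimulus is typically modeled as a faster mean rise rate,
# which shifts the whole latency distribution earlier:
fast = simulate_saccade_latencies(mean_rate=6.0)
slow = simulate_saccade_latencies(mean_rate=4.0)
print(f"mean latency, strong stimulus: {fast.mean():.3f} s")
print(f"mean latency, weak stimulus:   {slow.mean():.3f} s")
```

Under this model, image statistics influence latency only through their effect on the mean rise rate, which is the kind of stimulus-to-latency mapping the abstract's learned model predicts.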
Image features influence reaction time: a learned probabilistic perceptual model for saccade latency