Research Article

Eye-catching crowds: saliency based selective variation

Published: 27 July 2009

Abstract

Populated virtual environments need to be simulated with as much variety as possible. By identifying the most salient parts of the scene and characters, available resources can be concentrated where they are needed most. In this paper, we investigate which body parts of virtual characters are most looked at in scenes containing duplicate characters or clones. Using an eye-tracking device, we recorded fixations on body parts while participants were asked to indicate whether clones were present or not. We found that the head and upper torso attract the majority of first fixations in a scene and are attended to most. This is true regardless of the orientation, presence or absence of motion, sex, age, size, and clothing style of the character. We developed a selective variation method to exploit this knowledge and perceptually validated our method. We found that selective colour variation is as effective at generating the illusion of variety as full colour variation. We then evaluated the effectiveness of four variation methods that varied only salient parts of the characters. We found that head accessories, top texture and face texture variation are all equally effective at creating variety, whereas facial geometry alterations are less so. Performance implications and guidelines are presented.
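
As a concrete illustration of the selective variation idea, the following is a minimal sketch in Python of saliency-guided colour variation. It is not the authors' implementation: the Character class, the body-part names, and the colour-picking scheme are illustrative assumptions. Guided by the finding that the head and upper torso attract most fixations, only those regions receive per-clone colours, while the remaining parts keep the template's appearance.

```python
# Minimal sketch of saliency-guided selective colour variation for crowd
# clones. Part names, the Character class, and the colour scheme are
# illustrative assumptions, not the published implementation.
import colorsys
import random
from dataclasses import dataclass, field

# Regions ordered by the saliency reported in the paper: the head and
# upper torso attract most first fixations, so they receive the variation.
SALIENT_PARTS = ("head", "upper_torso")
ALL_PARTS = ("head", "upper_torso", "lower_torso", "arms", "legs")

@dataclass
class Character:
    # Per-part RGB colours; clones of one template start from the same dict.
    part_colours: dict = field(default_factory=dict)

def random_colour(rng: random.Random) -> tuple:
    """Pick a reasonably saturated random colour via a random hue."""
    return colorsys.hsv_to_rgb(rng.random(), 0.6 + 0.4 * rng.random(), 0.9)

def make_clone(template: Character, rng: random.Random,
               selective: bool = True) -> Character:
    """Instantiate a clone; vary only the salient parts when `selective`
    is True, otherwise vary every part (full colour variation)."""
    clone = Character(part_colours=dict(template.part_colours))
    parts_to_vary = SALIENT_PARTS if selective else ALL_PARTS
    for part in parts_to_vary:
        clone.part_colours[part] = random_colour(rng)
    return clone

if __name__ == "__main__":
    rng = random.Random(0)
    template = Character({p: (0.5, 0.5, 0.5) for p in ALL_PARTS})
    crowd = [make_clone(template, rng, selective=True) for _ in range(10)]
    # Only head and upper-torso colours differ between clones; the other
    # parts stay shared across the whole crowd.
    print(crowd[0].part_colours["head"], crowd[0].part_colours["legs"])
```

Varying only the salient regions keeps the shared appearance data small (and with it texture memory and per-instance state changes), while still altering the parts of each clone that viewers are most likely to fixate on.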


Supplemental Material

tps014_09.mp4



• Published in

  ACM Transactions on Graphics, Volume 28, Issue 3 (August 2009), 750 pages
  ISSN: 0730-0301
  EISSN: 1557-7368
  DOI: 10.1145/1531326

  Copyright © 2009 ACM


  Publisher: Association for Computing Machinery, New York, NY, United States
