DOI: 10.1145/3173574.3174001 · CHI Conference Proceedings
Research Article · Honorable Mention

ExtVision: Augmentation of Visual Experiences with Generation of Context Images for a Peripheral Vision Using Deep Neural Network

ABSTRACT

We propose a system, called ExtVision, that augments visual experiences by generating context-images and projecting them onto the periphery of a television or computer screen. Peripheral projection of a context-image is one of the most effective techniques for enhancing visual experiences; however, it is rarely used in practice because preparing the context-image is difficult. In this paper, we propose a deep neural network-based method to generate context-images for peripheral projection. A user study was performed to investigate how the proposed system augments traditional visual experiences. In addition, we present applications and future prospects of the developed system.
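To make the idea of a context-image concrete, the following sketch shows a crude, non-learned baseline: given a center frame, it fills the periphery by mirror-padding and softening the copied content. This is purely illustrative and hypothetical; it does not reproduce the paper's deep neural network generator, only the input/output shape of the task (small frame in, larger context canvas out with the original content preserved in the center).

```python
import numpy as np

def extend_frame(frame: np.ndarray, margin: int) -> np.ndarray:
    """Naive context-image baseline: mirror-pad the frame into the
    periphery, then soften the padded copy with a cheap box blur.
    Stands in for a learned generator; illustrative only."""
    # reflect-pad each spatial axis by `margin` pixels (requires margin < frame size)
    padded = np.pad(frame, ((margin, margin), (margin, margin), (0, 0)),
                    mode="reflect").astype(float)
    # cheap separable box blur via shifted averages over the whole canvas
    for axis in (0, 1):
        padded = (np.roll(padded, 1, axis) + padded + np.roll(padded, -1, axis)) / 3.0
    # keep the original content pixel-exact in the center region
    padded[margin:-margin, margin:-margin] = frame
    return padded.astype(frame.dtype)

# a 90x160 RGB "screen" frame extended by a 30-pixel periphery on each side
frame = np.random.randint(0, 256, (90, 160, 3), dtype=np.uint8)
context = extend_frame(frame, margin=30)
print(context.shape)  # (150, 220, 3)
```

A learned method replaces the mirror-pad-and-blur step with a network that predicts plausible peripheral content conditioned on the center frame; the surrounding system (project the enlarged canvas, keep the original frame untouched in the middle) is the same.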


Supplemental Material

pn3578-file3.mp4

pn3578-file5.mp4

Index Terms

  1. ExtVision
