360Anywhere: Mobile Ad-hoc Collaboration in Any Environment using 360 Video and Augmented Reality

Published: 19 June 2018

Abstract

360-degree video is increasingly used to create immersive user experiences; however, it is typically limited to a single user and not interactive. Recent studies have explored the potential of 360 video to support multi-user collaboration in remote settings. These studies identified several challenges with respect to 360 live streams, such as the lack of gaze awareness, out-of-sync views, and missed gestures. To address these challenges, we created 360Anywhere, a framework for 360 video-based multi-user collaboration that, in addition to allowing collaborators to view and annotate a 360 live stream, also supports projection of annotations in the 360 stream back into the real-world environment in real-time. This enables a range of collaborative augmented reality applications not supported with existing tools. We present the 360Anywhere framework and tools that allow users to generate applications tailored to specific collaboration and augmentation needs with support for remote collaboration. In a series of exploratory design sessions with users, we assess 360Anywhere's power and flexibility for three mobile ad-hoc scenarios. Using 360Anywhere, participants were able to set up and use fairly complex remote collaboration systems involving projective augmented reality in less than 10 minutes.
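The abstract does not include implementation details, but the core geometric step it describes (taking an annotation drawn on the 360 live stream and turning it into a real-world direction that a projector can target) reduces to mapping equirectangular image coordinates onto the viewing sphere. The sketch below shows that standard mapping; the function name and coordinate conventions are illustrative assumptions, not code from the 360Anywhere system:

```python
import math

def equirect_to_direction(u, v):
    """Map normalized equirectangular coordinates (u, v) in [0, 1]
    to a unit direction vector (x, y, z) on the camera's viewing sphere.

    u = 0.5, v = 0.5 corresponds to the forward direction (0, 0, 1).
    A projector calibrated to the same frame could then aim an
    annotation along this direction back into the room.
    """
    lon = (u - 0.5) * 2.0 * math.pi   # yaw, in [-pi, pi]
    lat = (0.5 - v) * math.pi         # pitch, in [-pi/2, pi/2]
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))
```

In a full pipeline this direction would still have to be transformed into the projector's coordinate frame via a camera-projector calibration, which is where systems like this typically do most of their setup work.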
