Abstract
A number of consumer-grade spherical cameras have recently appeared, enabling affordable monoscopic VR content creation in the form of full 360° × 180° spherical panoramic photos and videos. While monoscopic content is certainly engaging, it fails to leverage a key capability of VR HMDs: stereoscopic display. Recent stereoscopic capture rigs place many cameras in a ring and synthesize an omni-directional stereo panorama that lets a user look around and explore the scene in stereo. In this work, we describe a method that takes images from two 360° spherical cameras and synthesizes an omni-directional stereo panorama with stereo in all directions. Our proposed method has a lower equipment cost than camera-ring alternatives, can be assembled from currently available off-the-shelf equipment, and is relatively small and lightweight compared to those alternatives. We validate our method by generating both still images and videos. We have conducted a user study to better understand what kinds of geometric processing are necessary for a pleasant viewing experience. We also discuss several algorithmic variations, each with their own time and quality trade-offs.
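As a point of reference for the omni-directional stereo (ODS) format mentioned above, the minimal NumPy sketch below illustrates the ray geometry an ODS panorama encodes, following the standard tangent-ray model rather than this paper's specific synthesis pipeline; the output resolution, interpupillary distance, and coordinate conventions here are illustrative assumptions.

```python
import numpy as np

def ods_rays(width=2048, height=1024, ipd=0.064):
    """Ray origins and directions for both eyes of an ODS panorama (sketch).

    Column index maps to viewing azimuth theta, row index to elevation phi.
    Each ray is tangent to a viewing circle of diameter `ipd`: the eye origin
    sits on the circle, offset 90 degrees to the left (left eye) or right
    (right eye) of the viewing direction, so every look direction yields a
    left/right view pair.
    """
    theta = np.linspace(-np.pi, np.pi, width, endpoint=False)  # azimuth per column
    phi = np.linspace(np.pi / 2, -np.pi / 2, height)           # elevation per row
    r = ipd / 2.0

    # Viewing direction per pixel (z-up, right-handed), shape (H, W, 3).
    directions = np.stack([
        np.cos(phi)[:, None] * np.cos(theta)[None, :],
        np.cos(phi)[:, None] * np.sin(theta)[None, :],
        np.sin(phi)[:, None] * np.ones_like(theta)[None, :],
    ], axis=-1)

    rays = {}
    for eye, sign in (("left", +1.0), ("right", -1.0)):
        # Eye origin per column: a point on the viewing circle, offset 90
        # degrees from the viewing azimuth, shape (W, 3).
        origins = np.stack([
            r * np.cos(theta + sign * np.pi / 2),
            r * np.sin(theta + sign * np.pi / 2),
            np.zeros_like(theta),
        ], axis=-1)
        rays[eye] = (origins, directions)
    return rays

# Example: per-column origins and per-pixel directions for the left eye.
left_origins, left_dirs = ods_rays()["left"]
```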
Supplemental Material
Supplemental files are available for download.