PanoSynthVR: View Synthesis From A Single Input Panorama with Multi-Cylinder Images

ABSTRACT
We introduce a method that automatically converts a single panoramic input into a multi-cylinder image representation supporting real-time, free-viewpoint view synthesis for virtual reality. We apply an existing convolutional neural network, trained on pinhole images, to a cylindrical panorama with wrap padding to ensure agreement between the panorama's left and right edges. The network outputs a stack of semi-transparent panoramas at varying depths, which can be rendered efficiently and composited with "over" blending. Initial experiments show that the method produces convincing parallax and cleaner object boundaries than a textured-mesh representation.
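The two operations the abstract relies on can be sketched concretely. The paper does not publish implementation details, so the following is a minimal illustration, not the authors' code: `wrap_pad` shows horizontal wrap padding (so the network sees consistent content across the panorama's left/right seam), and `composite_over` shows back-to-front "over" compositing of the semi-transparent layers. Function names, array shapes, and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def wrap_pad(pano: np.ndarray, pad: int) -> np.ndarray:
    """Horizontally wrap-pad a cylindrical panorama of shape (H, W, C),
    copying columns across the seam so the left and right edges agree."""
    return np.pad(pano, ((0, 0), (pad, pad), (0, 0)), mode="wrap")

def composite_over(layers: list[np.ndarray]) -> np.ndarray:
    """Composite RGBA layers (each (H, W, 4), alpha-premultiplied not assumed)
    back to front with the 'over' operator: out = rgb*a + out*(1-a)."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:  # ordered far to near
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)
    return out
```

A fully opaque far layer overlaid by a half-transparent near layer blends the two 50/50, which is the behavior the layered representation depends on at object boundaries.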



