ABSTRACT
Traditional film and broadcast cameras capture the scene from a single viewpoint or, in the case of 3D cameras, from two slightly shifted viewpoints. Important creative parameters such as the camera position and orientation, the depth of field, and the amount of 3D parallax are burned into the footage during acquisition. Realizing artistic effects such as the matrix or the vertigo effect requires complex equipment and highly skilled personnel. In the former effect the scene itself appears frozen, while a camera movement is simulated by placing dozens of cameras in a mainly horizontal arrangement. The latter requires physical movement of the camera, which is usually mounted on a dolly and translates towards or away from the scene while the zoom (and focus) are changed accordingly. Besides the demanding requirements on equipment and personnel, the resulting effects can usually not be changed in post-production. In contrast, lightfield acquisition techniques allow these parameters to be changed in post-production. Traditionally, in the absence of a geometric model of the scene, a dense sampling of the lightfield is required. This can be achieved using large camera arrays as used by [Wilburn et al. 2005] or hand-held plenoptic cameras as proposed by [Ng et al. 2005]. While the former approach is complex to calibrate and operate due to the large number of cameras, the latter suffers from a low resolution per view, as the total resolution of the imaging sensor must be shared among all sub-images captured by the individual micro-lenses.
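The resolution trade-off of a plenoptic camera mentioned above can be made concrete with a small sketch. All numbers below are illustrative assumptions, not the specifications of any particular device:

```python
def per_view_resolution(sensor_px, n_views_u, n_views_v):
    """Spatial resolution of each sub-view when a sensor with
    sensor_px = (width, height) pixels is divided by the micro-lens
    array into n_views_u x n_views_v angular samples.
    Assumes the micro-lenses tile the sensor evenly (an idealization)."""
    w, h = sensor_px
    return (w // n_views_u, h // n_views_v)

# Hypothetical example: a 4000x3000 (12 MP) sensor behind a micro-lens
# array that provides 10x10 angular samples leaves only 400x300 pixels
# of spatial resolution per view.
print(per_view_resolution((4000, 3000), 10, 10))  # (400, 300)
```

This is why dense angular sampling with a single sensor costs spatial resolution, whereas a camera array keeps full resolution per view at the price of calibrating and operating many cameras.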
REFERENCES
- Kauff, P., Atzpadin, N., Fehn, C., Müller, M., Schreer, O., Smolic, A., and Tanger, R. 2007. Depth map creation and image-based rendering for advanced 3dtv services providing interoperability and scalability. Signal Processing: Image Communication 22, 2, 217--234. Special issue on three-dimensional video and television.
- Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., and Hanrahan, P. 2005. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR 2, 11.
- Wilburn, B., Joshi, N., Vaish, V., Talvala, E.-V., Antunez, E., Barth, A., Adams, A., Horowitz, M., and Levoy, M. 2005. High performance imaging using large camera arrays. In ACM SIGGRAPH 2005 Papers, ACM, New York, NY, USA, SIGGRAPH '05, 765--776.