Abstract
Depth of field (DOF), the range of scene depths that appear sharp in a photograph, poses a fundamental tradeoff in photography---wide apertures are important to reduce imaging noise, but they also increase defocus blur. Recent advances in computational imaging modify the acquisition process to extend the DOF through deconvolution. Because deconvolution quality is a tight function of the frequency power spectrum of the defocus kernel, designs with high spectra are desirable. In this paper we study how to design effective extended-DOF systems, and show an upper bound on the maximal power spectrum that can be achieved. We analyze defocus kernels in the 4D light field space and show that in the frequency domain, only a low-dimensional 3D manifold contributes to focus. Thus, to maximize the defocus spectrum, imaging systems should concentrate their limited energy on this manifold. We review several computational imaging systems and show either that they spend energy outside the focal manifold or do not achieve a high spectrum over the DOF. Guided by this analysis we introduce the lattice-focal lens, which concentrates energy at the low-dimensional focal manifold and achieves a higher power spectrum than previous designs. We have built a prototype lattice-focal lens and present extended depth of field results.
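The abstract's central claim — that deconvolution quality is governed by the frequency power spectrum of the defocus kernel — can be illustrated numerically. The sketch below (not from the paper; kernel sizes and the 1%-of-DC threshold are illustrative choices) compares the power spectra of two pillbox defocus kernels, modeling a conventional lens at a narrow and a wide aperture. The wide-aperture kernel drives a much larger fraction of frequencies toward zero, which is exactly where Wiener-style deconvolution must amplify noise:

```python
import numpy as np

def disk_kernel(radius, size=64):
    """Pillbox defocus kernel of a conventional lens, normalized to sum to 1."""
    y, x = np.mgrid[:size, :size] - size // 2
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def power_spectrum(kernel):
    """Squared magnitude of the kernel's 2D DFT (DC bin equals 1 after normalization)."""
    return np.abs(np.fft.fft2(kernel))**2

small = power_spectrum(disk_kernel(2))  # narrow aperture: spectrum stays high
large = power_spectrum(disk_kernel(8))  # wide aperture: deep nulls appear

# Fraction of frequencies whose power falls below 1% of DC: at these
# frequencies deconvolution divides by near-zero values and amplifies noise.
print("weak frequencies, r=2:", (small < 0.01).mean())
print("weak frequencies, r=8:", (large < 0.01).mean())
```

This is why the paper argues for designs that keep the spectrum high over the whole depth range: a defocus kernel with deep spectral nulls loses information that no deconvolution can recover.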
Supplemental Material
Auxiliary material for "4D Frequency Analysis of Computational Cameras for Depth of Field Extension" is available for download.