Generalized Image Acquisition and Analysis

Computational Plenoptic Imaging

The plenoptic function is a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have evolved greatly in recent years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost except for a two-dimensional, spatially varying subset: the common photograph. In this state-of-the-art report, we review approaches that optically encode the dimensions of the plenoptic function beyond those captured by traditional photography and computationally reconstruct the recorded information.
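The integration described above can be illustrated with a minimal numerical sketch (not from the report itself; the toy data and dimension sizes are invented for illustration): a standard sensor collapses every plenoptic dimension except the two spatial ones.

```python
import numpy as np

# Toy stand-in for the plenoptic function P(x, y, lambda, t, theta):
# a small non-negative tensor of hypothetical scene radiance values.
rng = np.random.default_rng(0)
P = rng.random((32, 32, 8, 4, 6))  # x, y, wavelength, time, direction

# A conventional sensor integrates over everything except (x, y):
# wavelength via the spectral response, time via the exposure interval,
# and direction via the aperture. The result is an ordinary photograph.
photo = P.sum(axis=(2, 3, 4))

print(photo.shape)  # (32, 32): only the 2D spatial subset survives
```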


A Theory of Plenoptic Multiplexing

Ivo Ihrke, Gordon Wetzstein, Wolfgang Heidrich
In: Proceedings of CVPR 2010 (oral).


Multiplexing is a common technique for encoding high-dimensional image data into a single, two-dimensional image. Examples of spatial multiplexing include Bayer patterns to capture color channels, and integral images to encode light fields. In the Fourier domain, optical heterodyning has been used to acquire light fields. In this paper, we develop a general theory of multiplexing the dimensions of the plenoptic function onto an image sensor. Our theory enables a principled comparison of plenoptic multiplexing schemes, including noise analysis, as well as the development of a generic reconstruction algorithm. The framework also aids in the identification and optimization of novel multiplexed imaging applications.
Project Page
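The core idea of multiplexed acquisition with a generic reconstruction can be sketched as a linear system: each sensor pixel records a known combination of latent plenoptic samples, which are recovered by inverting the multiplexing matrix. This is only an illustrative sketch, not the paper's algorithm; the channel count and Hadamard-style code below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent samples: k = 4 channels per super-pixel (e.g. color channels or
# angular light-field samples; the grouping is hypothetical) at n locations.
k, n = 4, 100
x = rng.random((k, n))  # true per-super-pixel channel values

# Multiplexing matrix: each of the 4 sensor pixels in a super-pixel measures
# a known linear combination of the k channels. The identity would mimic a
# Bayer-like pattern; here a Hadamard-style code mixes all channels.
M = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)

y = M @ x  # multiplexed sensor measurements

# Generic reconstruction: invert the known multiplexing matrix; least squares
# also covers the general, possibly non-square case.
x_hat, *_ = np.linalg.lstsq(M, y, rcond=None)

print(np.allclose(x_hat, x))  # True: exact recovery in the noise-free case
```

With sensor noise added to `y`, the choice of `M` determines the reconstruction SNR, which is the kind of trade-off a principled comparison of multiplexing schemes quantifies.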


@inproceedings{Ihrke2010PlenopticMultiplexing,
  title={{A theory of plenoptic multiplexing}},
  author={Ihrke, I. and Wetzstein, G. and Heidrich, W.},
  booktitle={Proceedings of CVPR},
  year={2010}
}