In this project, I reproduce some of the effects of lightfield cameras using real lightfield data. As demonstrated in this paper by Ng et al. (Ren Ng is the founder of the Lytro camera and a Professor at Berkeley; I took his Computer Graphics class in Spring 2018), capturing multiple images over a plane orthogonal to the optical axis makes it possible to achieve complex effects with simple operations like shifting and averaging.
A lightfield is a vector function that describes the amount of light flowing in every direction through every point in space (the plenoptic function). While traditional cameras do not capture this full domain, a 17x17 array of images sampled over a plane captures enough of it to reproduce these effects. Using the Stanford Light Field Archive, we can apply post-processing techniques to lightfield data captured at known grid positions.
With the 17x17 array, we can generate multiple images that focus at different depths. We achieve this effect by: 1) obtaining the center image, 2) selecting a depth value, 3) computing the (x, y) displacement from the center location for each microlens image, 4) shifting each image by its displacement vector scaled by the depth value, and 5) averaging all the shifted images.
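The steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the lightfield has been loaded as a NumPy array of shape (17, 17, H, W) (grayscale for simplicity; an RGB version would shift each channel identically), and the function name `refocus` is my own.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lightfield, depth):
    """Shift-and-average refocusing over a grid of sub-aperture images.

    lightfield: array of shape (U, V, H, W), e.g. (17, 17, H, W).
    depth: scalar selecting the plane of focus (step 2).
    """
    grid_u, grid_v = lightfield.shape[:2]
    # Step 1: the center image sits at the middle of the grid (8, 8 for 17x17).
    cu, cv = grid_u // 2, grid_v // 2
    acc = np.zeros(lightfield.shape[2:], dtype=np.float64)
    for u in range(grid_u):
        for v in range(grid_v):
            # Step 3: displacement of this sub-image from the center,
            # scaled by the depth value (step 4's shift amount).
            du, dv = depth * (cu - u), depth * (cv - v)
            # Step 4: shift the image by the scaled displacement.
            acc += shift(lightfield[u, v], (du, dv), mode="nearest")
    # Step 5: average all the shifted images.
    return acc / (grid_u * grid_v)
```

With `depth = 0` no image is shifted, so the result is a plain average of all sub-aperture images; increasing `depth` moves the plane of focus through the scene.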
The resulting images show the scene focused at different depths (depth values of 0, 1, and 2, respectively):
A gif showing this effect:
I also mimic aperture adjustments with the same dataset. This involves averaging over images within a set radius of the center image (in this case, the x=8, y=8 position of the 17x17 array). The larger the radius, the larger the simulated aperture, and thus the blurrier the resulting image. Conversely, a smaller radius results in a smaller aperture and a sharper image.
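This averaging step can be sketched as below. As with the refocusing sketch, this is an illustration under my own assumptions: the lightfield is a NumPy array of shape (17, 17, H, W), and `adjust_aperture` is a hypothetical name.

```python
import numpy as np

def adjust_aperture(lightfield, radius):
    """Average the sub-aperture images within `radius` of the center image.

    A larger radius includes more images, simulating a larger aperture
    (blurrier away from the focal plane); radius 0 keeps only the
    center image, the sharpest case.
    """
    grid_u, grid_v = lightfield.shape[:2]
    cu, cv = grid_u // 2, grid_v // 2  # center image, e.g. (8, 8) for 17x17
    acc = np.zeros(lightfield.shape[2:], dtype=np.float64)
    count = 0
    for u in range(grid_u):
        for v in range(grid_v):
            # Include this image only if it lies within the chosen radius
            # of the center position.
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                acc += lightfield[u, v]
                count += 1
    return acc / count
```

Note that no shifting is involved here: the aperture effect comes purely from how many neighboring viewpoints are blended together.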
Below are some images with various radius values:
This was a fun project, and while I had a solid understanding of lightfield cameras from Ren Ng's Computer Graphics class, it gave me the chance to dive into real lightfield data and create something cool.
- CS 194-26 Course Staff at UC Berkeley
- Professor Alexei (Alyosha) Efros
- Professor Ren Ng