Project 4

Image Based Lighting

By Xintong Wu (xwu68) and Cong Shen (cshen19)

About this project

In this project, we focused on rendering synthetic objects into 2D photographs. To create realistic-looking composites, we built HDR images from several LDR exposures and performed panoramic transformations on the photographs. More information about this project can be found on the course website and in Debevec's paper.

PART I: Recovering HDR Radiance Maps

1. LDR Images

Here are the LDR images of the mirror ball we have taken:

Exposure: 1/6

Exposure: 1/25

Exposure: 1/100

2. Naive LDR Merging

The simplest approach to creating an HDR image is to divide every pixel in each LDR image by its exposure time and then average the three images. However, this approach produces relatively poor results, as shown below.
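The naive merge described above can be sketched as follows (the project itself was done in MATLAB; this is an illustrative NumPy version, and the toy pixel values are made up):

```python
import numpy as np

def naive_merge(images, exposures):
    """Average the exposure-normalized LDR images into one HDR estimate.

    images:    list of float arrays in [0, 1], one per exposure
    exposures: matching list of exposure times in seconds
    """
    normalized = [img / t for img, t in zip(images, exposures)]
    return np.mean(normalized, axis=0)

# Toy example: one scene point recorded at the three exposures we used.
ldr = [np.array([[0.5]]), np.array([[0.12]]), np.array([[0.03]])]
times = [1 / 6, 1 / 25, 1 / 100]
hdr = naive_merge(ldr, times)
```

Because each image is divided by its own exposure time, a correctly exposed pixel contributes the same irradiance estimate regardless of which photo it came from; the problem is that clipped or noisy pixels contribute equally too, which is what the weighting function in the next section fixes.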

Estimated Log Irradiance:

Exposure: 1/6

Exposure: 1/25

Exposure: 1/100

HDR Log Irradiance

Tone-mapped HDR

3. LDR Merging with Weighting Function

A slightly improved approach is to use a weighting function that gives small weights to pixels with very small or very large values (under- or over-exposed) during averaging. Here is the result:

Estimated Log Irradiance:

Exposure: 1/6

Exposure: 1/25

Exposure: 1/100

HDR Log Irradiance

Tone-mapped HDR

Weighting Function: $w = @(z) double(128-abs(z-128))$
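The weighted merge can be sketched like this (a NumPy version of our MATLAB hat-function weight; array names are illustrative):

```python
import numpy as np

def weight(z):
    # Hat function: peaks at mid-range (128), falls to zero at 0 and 255,
    # so under- and over-exposed pixels contribute little.
    return 128.0 - np.abs(z.astype(float) - 128.0)

def weighted_merge(images, exposures):
    """Weighted average of exposure-normalized images (uint8 inputs, 0-255)."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = weight(img)
        num += w * (img.astype(float) / 255.0) / t
        den += w
    # Guard against pixels that are clipped in every exposure (zero weight).
    return num / np.maximum(den, 1e-8)
```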

4. LDR Merging and Response Function Estimation

To get a better result, we used the film response function $g$ to recover the true irradiance of each image, since nearly all cameras apply a non-linear function to the recorded raw pixel values to better mimic human vision. Since we have three photos, we calculate the log irradiance corresponding to pixels with values from 1 to 255. We applied the gsolve function for this; to avoid running out of memory, we randomly picked 100 pixels from each image and obtained the following response function.

lambda = 60
Sample pixels = 100
Weight: $w = @(z) double(128-abs(z-128))$


After estimating the log-irradiance-versus-pixel-value function, we recover the image by solving for $E_i$ per channel. To be more specific, we compute $\ln E_i$ as the weighted average of $g(Z_{ij})-\ln\Delta t_j$ over the exposures $j$, using the weights $w(Z_{ij})$, and finally exponentiate to recover the true color.

Here is the result, which looks better than the previous ones.

Estimated Log Irradiance:

Exposure: 1/6

Exposure: 1/25

Exposure: 1/100

HDR Log Irradiance

Tone-mapped HDR

Panoramic transformations

Once the correct HDR image is obtained, we have enough data to perform relighting. However, a panoramic transformation is needed so that software such as Blender can actually use it to obtain lighting information. As suggested on the course website, we perform an equirectangular transformation on the mirror ball image.

We can calculate the reflection vector at each pixel of the mirror ball with the formula $R = V - 2(V \cdot N)N$, where $V$ is the viewing vector and $N$ is the normal vector. Note that if we assume the mirror ball has a unit radius, then the normal vector is exactly $(x, y, \sqrt{1 - x^2 - y^2})$ at every pixel $(x, y)$.
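A NumPy sketch of these two maps, assuming an orthographic camera looking down $-z$ (so $V = (0, 0, -1)$ at every pixel, the usual approximation for a distant mirror ball):

```python
import numpy as np

def mirror_ball_vectors(size):
    """Normal and reflection vector maps for a unit mirror ball."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size),
                         indexing="ij")
    inside = xs**2 + ys**2 <= 1.0          # pixels that hit the ball
    zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0.0, None))
    N = np.stack([xs, ys, zs], axis=-1)    # unit normals
    V = np.array([0.0, 0.0, -1.0])         # constant viewing direction
    dots = N @ V                           # V . N per pixel
    R = V - 2.0 * dots[..., None] * N      # R = V - 2 (V . N) N
    return N, R, inside
```

At the center of the ball the normal points straight back at the camera, so the reflection vector there is $(0, 0, 1)$: the center of the ball reflects the camera itself, which is why the photographer appears in the image (see the tripod-removal section below).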

Map for normal vectors.

Map for reflection vectors.

After the reflection vectors are calculated, we can easily convert them into $(\phi, \theta)$ form using the standard Cartesian-to-spherical conversion.
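One common convention for this conversion (the exact axis convention is an assumption here; any consistent choice works as long as the equirectangular lookup uses the same one):

```python
import numpy as np

def reflection_to_angles(R):
    """Convert unit reflection vectors to (phi, theta).

    theta: polar angle measured from +y (up), in [0, pi]
    phi:   azimuth atan2(x, -z), in (-pi, pi]
    """
    x, y, z = R[..., 0], R[..., 1], R[..., 2]
    theta = np.arccos(np.clip(y, -1.0, 1.0))
    phi = np.arctan2(x, -z)
    return phi, theta
```

Mapping $\phi$ to the horizontal axis and $\theta$ to the vertical axis of the output image then gives the equirectangular panorama directly.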

Map for phi - theta coordinates.

Here is the result of equirectangular transformation:

Rendering synthetic objects

Here we use Blender to merge the objects into the scene. We shot a background photo and created a plane to cover the table area. Then we used some objects from the example scene, and we also imported a fancy dragon from Free3D.com to make the image look better.

To merge the objects seamlessly, we also rendered a mask and an empty scene, and finally used the formula $$M.*R+(1-M).*I+(1-M).*(R-E).*p$$ to merge them, where $p$ is a parameter that controls the strength of the shading effect.

$p = 2.3$
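The compositing formula is a one-liner over the four images (a NumPy sketch; $I$ is the background photo, $R$ the render with objects, $E$ the empty-scene render, $M$ the object mask):

```python
import numpy as np

def composite(I, R, E, M, p=2.3):
    """Differential-render compositing: M.*R + (1-M).*I + (1-M).*(R-E).*p.

    Inside the mask (M = 1) we keep the rendered objects; outside it we keep
    the photo plus p times the render-minus-empty difference, which carries
    the shadows and reflections the objects cast onto the scene.
    """
    return M * R + (1 - M) * I + (1 - M) * (R - E) * p
```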

Background:

Empty Scene:

Render mask:

Rendered Objects without background:

Final Result:

Here is another rendered example:

Bells & Whistles

1. Other panoramic transformations

@TODO

2. Photographer/tripod removal

We used HDR Shop and Debevec's method to remove the tripod and photographer.

Source Images:

Warped Images: (The first one is rotated in order to match the second one)

Mask for merging: (Sorry for the quality; we created it with the Windows picture editor)

Final Result:

3. Local tonemapping operator

@TODO