16-825 Assignment 3

Hyojae Park

0.

1.3

Grid and ray visualizations:

1.4

Sampled points:

1.5

Depth:

Volume rendered:

2.2

Box center: (0.25, 0.25, 0.00)
Box side lengths: (2.01, 1.50, 1.50)

2.3

Implicit volume:

3

The learned NeRF is shown above. Because it is not view dependent, the Lego looks static, i.e., its appearance is not affected by the angle from which it is viewed. However, the model still captures important information such as the topology and color of the object.

4.1

Increased view dependence can improve realism because the model no longer assumes the material is Lambertian, so it can better capture effects such as specularities, reflections, and refractions. A downside is that this can require more computation, since the viewing direction must now be provided to the model.

Furthermore, view dependence can hurt generalization: the model may overfit to view-specific lighting and fail to extrapolate to novel views. A view-independent model, on the other hand, is encouraged to learn the "base" color and geometry of the object, which helps it generalize to new views and lighting.
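
As a rough illustration of the extra input a view-dependent model needs (a hypothetical sketch, not my exact architecture; the module name, feature size, and layer widths are assumptions), a view-dependent color head might look like:

```python
import torch
import torch.nn as nn

class ViewDependentColorHead(nn.Module):
    """Hypothetical sketch: predicts RGB from a per-point feature vector
    and the normalized viewing direction."""

    def __init__(self, feature_dim=256, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim + 3, hidden_dim),  # point feature + 3D view direction
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, features, view_dirs):
        # features: (N, feature_dim) from the spatial MLP, view_dirs: (N, 3) unit vectors
        return self.mlp(torch.cat([features, view_dirs], dim=-1))
```

A view-independent model simply drops `view_dirs` from this head, so the predicted color depends only on position.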

5

I implemented sphere tracing with a while loop that iterates until all points reach the surface (SDF ≈ 0) or the maximum number of iterations is hit. At each iteration, I call the implicit function on all points and advance each point along its ray direction by the returned signed distance. The mask is computed by checking whether a point's SDF is approximately zero.
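
A minimal sketch of that loop, assuming `implicit_fn` maps an (N, 3) tensor of points to (N, 1) signed distances (the `eps` tolerance and `max_iters` names are my own):

```python
import torch

def sphere_trace(origins, directions, implicit_fn, max_iters=64, eps=1e-5):
    """Sketch of sphere tracing. origins/directions: (N, 3), unit-length directions."""
    points = origins.clone()
    dists = implicit_fn(points)          # (N, 1) SDF at the current points
    mask = dists.abs() < eps
    iters = 0

    # iterate until every point is (approximately) on the surface or we run out of iterations
    while not mask.all() and iters < max_iters:
        points = points + dists * directions   # march each point by its signed distance
        dists = implicit_fn(points)
        mask = dists.abs() < eps
        iters += 1

    return points, mask                  # final intersection points and hit mask
```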

6

Neural surface output with default settings.

7

Intuitively, alpha controls the scale of the density field. This means that as alpha increases, the overall density increases (similar to a more densely sampled surface vs a sparsely sampled surface in point clouds). A greater alpha will lead to more opaque surfaces.

Beta controls the sharpness of the surface. If beta is low, the CDF is steep, creating a sharp surface. If beta is high, the CDF changes slowly, creating a blurrier surface (there will be less of a distinction between near the surface and on the surface).
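
Concretely, this is the VolSDF-style mapping from signed distance to density, sigma = alpha * Psi_beta(-SDF), where Psi_beta is the CDF of a zero-mean Laplace distribution with scale beta. A minimal sketch of that conversion (the function and argument names are my own):

```python
import torch

def sdf_to_density(signed_distance, alpha, beta):
    """Sketch: sigma = alpha * Psi_beta(-sdf), with Psi_beta the zero-mean Laplace CDF."""
    s = -signed_distance
    # clamp each branch's argument so the unused side of torch.where stays finite
    outside = 0.5 * torch.exp(torch.clamp(s, max=0.0) / beta)        # outside: density decays with distance
    inside = 1.0 - 0.5 * torch.exp(-torch.clamp(s, min=0.0) / beta)  # inside: density saturates toward alpha
    return alpha * torch.where(s <= 0, outside, inside)
```

Increasing alpha scales the whole field (more opaque), while decreasing beta sharpens the transition around the zero level set.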

  1. A high beta biases the learned SDF toward a density that transitions gradually from free space to the surface, so the output is fuzzier near the surface. A lower beta, on the other hand, encourages the density to be nearly zero everywhere except close to the surface, producing a sharper and more accurate surface.

  2. An SDF would be easier to train with higher beta. This is because a higher beta creates smoother gradients as the CDF changes more gradually.

  3. An SDF would be more accurate with a lower beta. This is because a lower beta enforces a sharper surface that better matches the zero level set, while a higher beta yields a more ambiguous level set that is blurrier and thicker.

The above shows the best results I obtained, achieved with alpha = 9.5 and beta = 0.04. I think that the small beta encouraged the final surface to be sharp and thin, although it took longer to converge. Raising alpha caused the output to become too noisy, while lowering it made the surface too sparse and slowed convergence. This middle ground therefore gave the best performance.

8.3

I implemented the naive density function from the NeuS paper. It requires only a single hyperparameter, s, instead of alpha and beta, where s is roughly proportional to the inverse of beta (a sketch of this density is included at the end of this section). After experimenting with different values of s, I found the best performance at s = 50, leading to the final output:

This is a slightly worse result than the more refined formulation from part 7. While the colored surface is quite similar, the geometry is not captured as well by the naive density function, especially around sharp, thin areas of the object.
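
For reference, a minimal sketch of the naive density described above: the logistic density phi_s applied to the SDF, with s playing roughly the role of 1/beta (the function name and the sigmoid-based rewrite are my own).

```python
import torch

def naive_neus_density(signed_distance, s=50.0):
    """Sketch: sigma(x) = phi_s(sdf(x)), where phi_s is the logistic density
    phi_s(d) = s * exp(-s*d) / (1 + exp(-s*d))**2.
    Written as s * sigmoid(s*d) * sigmoid(-s*d), which is numerically stable."""
    scaled = s * signed_distance
    return s * torch.sigmoid(scaled) * torch.sigmoid(-scaled)
```

Larger s concentrates the density more tightly around the zero level set, similar in effect to lowering beta in part 7.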