Assignment 3: Neural Volume Rendering and Surface Rendering

A. Neural Volume Rendering (80 points)

0. Transmittance Calculation (10 points)
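For reference, the transmittance from the near bound $t_n$ to depth $t$ along a ray $\mathbf{r}$ is the exponentially decayed accumulation of density:

$$T(t) = \exp\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, ds\right)$$

With piecewise-constant density over segments of length $\delta_j$, this discretizes to $T_i = \exp\left(-\sum_{j<i} \sigma_j \delta_j\right)$.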

1. Differentiable Volume Rendering

1.5 Volume rendering
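The renderer composites colors along each ray with the discrete volume rendering equation, using the transmittance $T_i$ defined above:

$$\hat{C}(\mathbf{r}) = \sum_{i} T_i \left(1 - \exp(-\sigma_i \delta_i)\right) \mathbf{c}_i$$

where $\sigma_i$ and $\mathbf{c}_i$ are the density and color at sample $i$.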

Below are my visualizations:

2. Optimizing a basic implicit volume

2.2 Loss and training

Center of the box: (0.25, 0.25, 0.0)

Side lengths: (2.00, 1.50, 1.50)

2.3 Visualization

Here is the optimized volume I got:

This looks very similar to the one provided in the homework instructions.

3. Optimizing a Neural Radiance Field (NeRF)

This is the result of my NeRF optimization with the default hyperparameter settings:

4. NeRF Extras

4.1 View Dependence

Result on the lego scene:

Results on the materials scene:

Adding view dependence lets the network model specular, reflective, and other non-Lambertian effects, which improves training-image reconstruction overall. The downsides are that the increased representational capacity makes it easier to overfit to the training views, and geometric consistency may weaken because the model can "fake" view-dependent appearance by changing colors rather than geometry.
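For concreteness, here is a minimal sketch of how the color branch can be conditioned on the view direction, in the spirit of NeRF (the class name, feature dimensions, and encoding size are my own assumptions, not the starter code's):

```python
import torch
import torch.nn as nn

class ViewDependentColorHead(nn.Module):
    # Sketch of a view-conditioned color branch: density comes from
    # position alone, while RGB also sees the (positionally encoded)
    # viewing direction. Names and dimensions are assumptions.
    def __init__(self, feature_dim: int = 256, dir_dim: int = 27,
                 hidden_dim: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_dim + dir_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, features: torch.Tensor, encoded_dirs: torch.Tensor):
        # features:     (N, feature_dim) per-sample features from the trunk
        # encoded_dirs: (N, dir_dim) encoded view directions
        return self.head(torch.cat([features, encoded_dirs], dim=-1))
```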

B. Neural Surface Rendering

5. Sphere Tracing

My implementation initializes points at the ray origins and then iteratively marches each point along its ray by the queried SDF value. Convergence is tracked with a distance threshold: a ray stops once it hits the surface (SDF below the threshold) or travels beyond a far bound, and a maximum iteration count caps the loop.
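A minimal sketch of this loop (the function name, signature, and default values are assumptions, not the exact starter-code interface):

```python
import torch

def sphere_trace(sdf, origins, directions, max_iters=64, eps=1e-5, far=10.0):
    """Sphere tracing sketch.

    sdf:        callable mapping (N, 3) points to (N,) signed distances
    origins:    (N, 3) ray origins
    directions: (N, 3) unit ray directions
    Returns (points, mask), where mask flags rays that hit the surface.
    """
    t = torch.zeros(origins.shape[0], device=origins.device)
    active = torch.ones_like(t, dtype=torch.bool)
    for _ in range(max_iters):
        points = origins + t.unsqueeze(-1) * directions
        dist = sdf(points)
        # March each active ray forward by its current SDF value.
        t = torch.where(active, t + dist, t)
        # Rays within eps of the surface have converged; rays past
        # `far` are treated as misses. Both are deactivated.
        active = active & (dist > eps) & (t < far)
        if not active.any():
            break
    points = origins + t.unsqueeze(-1) * directions
    converged = (sdf(points) <= eps) & (t < far)
    return points, converged
```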

6. Optimizing a Neural SDF

The eikonal loss encourages the gradient of the SDF to have unit norm:

torch.square(torch.norm(gradients, dim=-1) - 1).mean()
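where `gradients` is obtained by differentiating the predicted SDF with respect to the query points. A self-contained sketch of the full term (the `sdf_model` callable and function name are assumptions):

```python
import torch

def eikonal_loss(sdf_model, points):
    # Penalize deviation of the SDF gradient norm from 1 at sampled points.
    points = points.clone().requires_grad_(True)
    sdf_values = sdf_model(points)
    gradients, = torch.autograd.grad(
        outputs=sdf_values,
        inputs=points,
        grad_outputs=torch.ones_like(sdf_values),
        create_graph=True,  # keep the graph so the loss can be backpropagated
    )
    return torch.square(torch.norm(gradients, dim=-1) - 1).mean()
```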

My MLP is implemented with the default hyperparameters from the config file, except that I added a skip connection at the third hidden layer. In total there are six hidden layers, each with 128 neurons.
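A sketch of that architecture (layer organization only; the class name is my own, and the positional encoding of the input is omitted for brevity):

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    # Six hidden layers of width 128, with the input re-concatenated
    # (skip connection) at the third hidden layer.
    def __init__(self, in_dim: int = 3, hidden: int = 128,
                 n_layers: int = 6, skip_at: int = 3):
        super().__init__()
        self.skip_at = skip_at
        layers = []
        for i in range(n_layers):
            d_in = in_dim if i == 0 else hidden
            if i == skip_at:
                d_in += in_dim  # widen the layer that receives the skip
            layers.append(nn.Linear(d_in, hidden))
        self.layers = nn.ModuleList(layers)
        self.out = nn.Linear(hidden, 1)  # scalar signed distance

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x
        for i, layer in enumerate(self.layers):
            if i == self.skip_at:
                h = torch.cat([h, x], dim=-1)
            h = torch.relu(layer(h))
        return self.out(h)
```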

7. VolSDF

Intuitively, $\alpha$ scales the maximum possible density, controlling how opaque the surface appears, and $\beta$ controls the smoothness of the transition at the object boundary (between empty space and space occupied by the object).
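Concretely, in the VolSDF formulation the density is obtained by passing the signed distance $d(\mathbf{x})$ (positive outside the object) through the CDF $\Psi_\beta$ of a zero-mean Laplace distribution with scale $\beta$, scaled by $\alpha$:

$$\sigma(\mathbf{x}) = \alpha\,\Psi_\beta\bigl(-d(\mathbf{x})\bigr), \qquad \Psi_\beta(s) = \begin{cases} \frac{1}{2}\exp\left(\frac{s}{\beta}\right) & s \le 0 \\ 1 - \frac{1}{2}\exp\left(-\frac{s}{\beta}\right) & s > 0 \end{cases}$$

As $\beta \to 0$, $\Psi_\beta$ approaches a step function, so $\sigma$ converges to $\alpha$ times the indicator of the object interior $\Omega$, the limit referenced in the answers below.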

  1. How does high beta bias your learned SDF? What about low beta?

High $\beta$ biases the SDF to be smoother and more uncertain, "blurring" the surface, while low $\beta$ biases it towards a sharper, more precise surface.

  2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

High $\beta$ would be easier: as $\beta$ approaches zero, the density converges to a scaled indicator function of $\Omega$, at which point the boundary becomes a discontinuity. This can cause vanishing gradients and unstable training, whereas a higher $\beta$ keeps the density smooth and the gradients well behaved.

  3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?

Low $\beta$, for the same reasons as above: when $\beta$ is close to 0, the density transition at the boundary is sharper, so the learned zero level set of the SDF is more likely to coincide with the true surface.

I went with the default hyperparameter settings, as they seemed to strike a good balance between ease of training and faithfulness of the reconstruction.

8.2 Fewer Training Views

Below is the VolSDF solution trained on 20 random views:

The NeRF solution trained on the same 20 random views failed to produce a usable rendering. This suggests that surface representations support inference from fewer training views.