16-825 Assignment 3 : Neural Volume Rendering and Surface Rendering

Andrew ID: rajathc

A. Neural Volume Rendering (80 points)

0. Transmittance Calculation (10 points)
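For a ray through a medium with piecewise-constant density, the transmittance up to segment `i` is `T_i = exp(-sum_{j<i} sigma_j * delta_j)`. A minimal NumPy sketch (the assignment itself uses PyTorch, but the math is identical):

```python
import numpy as np

def transmittance(sigma, delta):
    """Transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j) for
    piecewise-constant density sigma over segment lengths delta."""
    tau = sigma * delta  # optical thickness per segment
    # exclusive cumulative sum: no attenuation before the first segment
    acc = np.concatenate([[0.0], np.cumsum(tau)[:-1]])
    return np.exp(-acc)
```

With unit density over unit-length segments this yields `[1, e^-1, e^-2, ...]`, i.e. light decays exponentially with accumulated optical thickness.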

1. Differentiable Volume Rendering

1.3. Ray sampling (5 points)

XY and Ray Visualization
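The ray-sampling step builds one ray per pixel from the camera. A minimal sketch, assuming a pinhole camera at the origin looking down -z and pixels normalized to [-1, 1] (the assignment's exact camera convention may differ):

```python
import numpy as np

def get_pixel_grid(H, W):
    """Normalized pixel coordinates in [-1, 1] (one common NDC convention)."""
    x = (np.arange(W) + 0.5) / W * 2 - 1
    y = (np.arange(H) + 0.5) / H * 2 - 1
    xs, ys = np.meshgrid(x, y)                   # each (H, W)
    return np.stack([xs, ys], axis=-1)           # (H, W, 2)

def get_rays(H, W, focal):
    """One ray per pixel: origin at the camera center, unit direction
    through the pixel on the image plane at depth `focal`."""
    xy = get_pixel_grid(H, W)
    dirs = np.concatenate([xy, -focal * np.ones((H, W, 1))], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    origins = np.zeros((H, W, 3))
    return origins, dirs
```

In the actual assignment these rays would then be transformed into world space by the camera-to-world matrix.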

1.4. Point sampling (5 points)

Points Visualization
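Point sampling places sample depths along each ray between the near and far planes. A stratified-sampling sketch (one uniform sample per evenly spaced bin, which avoids the aliasing of a fixed grid; the assignment may instead use plain uniform spacing):

```python
import numpy as np

def stratified_sample(near, far, n_samples, n_rays, rng=None):
    """Stratified depths: one uniform sample per evenly spaced depth bin."""
    rng = rng or np.random.default_rng(0)
    edges = np.linspace(near, far, n_samples + 1)
    lo, hi = edges[:-1], edges[1:]
    u = rng.random((n_rays, n_samples))
    return lo + u * (hi - lo)                    # (n_rays, n_samples)

def ray_points(origins, dirs, z):
    """3D sample points x = o + z * d along each ray."""
    return origins[:, None, :] + z[..., None] * dirs[:, None, :]
```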

1.5. Volume rendering (20 points)

Cube Render and Depth Visualization
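The renders above come from emission-absorption compositing: each sample contributes with weight `w_i = T_i * (1 - exp(-sigma_i * delta_i))`, and the same weights produce both the color and the depth map. A sketch of that step (not necessarily the assignment's exact code):

```python
import numpy as np

def composite(sigma, delta, colors, z):
    """Emission-absorption compositing along rays.
    sigma, delta, z: (n_rays, n_samples); colors: (n_rays, n_samples, 3)."""
    tau = sigma * delta
    # exclusive cumulative sum gives transmittance T_i before sample i
    T = np.exp(-np.concatenate([np.zeros((sigma.shape[0], 1)),
                                np.cumsum(tau, axis=1)[:, :-1]], axis=1))
    w = T * (1.0 - np.exp(-tau))                 # per-sample weights
    color = (w[..., None] * colors).sum(axis=1)  # (n_rays, 3)
    depth = (w * z).sum(axis=1)                  # expected ray depth
    return color, depth, w
```

The depth visualization is just the weighted mean of the sample depths, which is why it looks like a soft version of the geometry.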

2. Optimizing a Basic Implicit Volume (10 points)

2.2. Loss and training (5 points)

After optimization (rounded to the nearest hundredth):
Box center: (0.25, 0.25, -0.00)
Box side lengths: (2.00, 1.50, 1.50)

2.3. Visualization


The generated visualization closely matches the TA-provided visualization.

3. Optimizing a Neural Radiance Field (NeRF) (20 points)


4. NeRF Extras (CHOOSE ONE! More than one is extra credit)

4.1 View Dependence (10 points)


View Dependence - nerf_materials_highres.yaml

Trade-offs between increased view dependence and generalization quality:
Increasing view dependence lets NeRF render view-dependent effects such as specular highlights and reflections more realistically. However, conditioning the color too strongly on viewing direction can cause overfitting to the training views, so the model performs worse when rendering unseen viewpoints.
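The usual way to add view dependence is to keep density a function of position only while letting the color head additionally see a positional encoding of the view direction (typically with fewer frequencies, which limits how sharply color can vary with viewpoint). A sketch of the shared encoding function; `mlp_sigma` and `mlp_rgb` below are hypothetical network names:

```python
import numpy as np

def posenc(x, n_freqs):
    """Sinusoidal positional encoding gamma(x), applied to both positions
    and view directions (directions usually get fewer frequencies)."""
    freqs = 2.0 ** np.arange(n_freqs)
    ang = x[..., None] * freqs                     # (..., D, n_freqs)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)          # (..., 2*D*n_freqs)

# density from position only; color also sees the encoded view direction:
#   sigma, feat = mlp_sigma(posenc(x, 10))         # hypothetical networks
#   rgb = mlp_rgb(concat(feat, posenc(d, 4)))
```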

B. Neural Surface Rendering (50 points)

5. Sphere Tracing (10 points)


Sphere Tracing - Torus Surface

Short writeup on my implementation:
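For reference, the standard sphere-tracing loop can be sketched as follows (a generic sketch; the details of the actual implementation may differ). The key idea is that an SDF value is a safe step size: the nearest surface is at least that far away, so each ray can march forward by its current SDF value until it converges to the surface or exits the scene.

```python
import numpy as np

def sphere_trace(sdf, origins, dirs, max_iters=64, eps=1e-5, far=10.0):
    """March each ray forward by its SDF value until |sdf| < eps (hit)
    or t exceeds `far` (miss). dirs are assumed unit-length."""
    t = np.zeros(origins.shape[0])
    hit = np.zeros(origins.shape[0], dtype=bool)
    for _ in range(max_iters):
        p = origins + t[:, None] * dirs
        d = sdf(p)
        hit |= np.abs(d) < eps
        # frozen rays (hit or escaped) stop stepping
        t = np.where(hit | (t > far), t, t + d)
    return origins + t[:, None] * dirs, hit

# unit-sphere SDF as a quick sanity check (hypothetical test geometry)
unit_sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0
```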

6. Optimizing a Neural SDF (15 points)

Input Pointclouds Neural SDF
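Fitting a neural SDF to a point cloud typically combines a data term (points on the cloud should lie on the zero level set) with an eikonal regularizer (a valid SDF has unit-norm gradient everywhere). A sketch of those two terms, assuming the gradients are obtained elsewhere, e.g. by autograd; the assignment's exact loss weighting may differ:

```python
import numpy as np

def sdf_point_loss(sdf_vals):
    """Data term: SDF should vanish on points sampled from the cloud."""
    return np.mean(np.abs(sdf_vals))

def eikonal_loss(grads):
    """Eikonal regularizer: penalize (|grad d(x)| - 1)^2 at sampled points,
    pushing the network toward a true signed distance function."""
    norms = np.linalg.norm(grads, axis=-1)
    return np.mean((norms - 1.0) ** 2)
```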

Changed Hyperparameters:

7. VolSDF (15 points)

VolSDF Geometry VolSDF

Alpha controls the overall opacity scale, affecting how quickly density accumulates along a ray.
Beta adjusts the sharpness of the SDF-to-density conversion: lower values create sharper surfaces, while higher values make the transition smoother.
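Concretely, VolSDF converts the signed distance `d` to density via a scaled Laplace CDF, `sigma = alpha * Psi_beta(-d)`: inside the surface (`d < 0`) the density approaches `alpha`, and outside it decays over a width set by `beta`. A sketch of that conversion:

```python
import numpy as np

def sdf_to_density(d, alpha, beta):
    """VolSDF SDF-to-density: sigma = alpha * Psi_beta(-d), where Psi_beta
    is the CDF of a zero-mean Laplace distribution with scale beta."""
    psi = np.where(d <= 0,
                   1.0 - 0.5 * np.exp(d / beta),   # inside: approaches 1
                   0.5 * np.exp(-d / beta))        # outside: decays to 0
    return alpha * psi
```

At the surface (`d = 0`) the density is exactly `alpha / 2`, and shrinking `beta` concentrates density ever more tightly around the zero level set, which is why a small `beta` gives a sharper surface.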

  1. How does high beta bias your learned SDF? What about low beta?
    A high beta makes the SDF-to-density mapping smoother, causing the model to learn a blurrier surface. A low beta results in a sharper transition, encouraging a thinner, more precise surface.
  2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
    Training is easier with a high beta because the smoother transition provides better gradient flow, helping the network learn more stable updates during optimization.
  3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?
    A low beta is more likely to produce an accurate and well-defined surface, as it sharply localizes the density around the zero level set, aligning better with true geometry.

I reduced beta to 0.03 to achieve a sharper and more defined surface boundary. Apart from that, I used the default hyperparameters as they seemed to work well with my network :)

8. Neural Surface Extras (CHOOSE ONE! More than one is extra credit)

8.2 Fewer Training Views (10 points)

train_idx: tensor([ 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95])

VolSDF Geometry VolSDF NeRF

With only 20 training views, both VolSDF and NeRF were able to learn the overall geometry of the Lego scene.
However, VolSDF produced a blurrier reconstruction than NeRF. That said, even NeRF missed fine details, especially on the back of the truck, and lacked the sharpness achieved with the full 100-view training.