Q0: Transmittance Calculation

Image 1
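
For reference, the quantity computed here: with piecewise-constant densities $\sigma_i$ over ray segments of length $\delta_i$, transmittance decays exponentially and composes multiplicatively along the ray:

```latex
T(x_0, x_n) = \exp\!\Big(-\sum_{i=0}^{n-1} \sigma_i \delta_i\Big)
            = \prod_{i=0}^{n-1} e^{-\sigma_i \delta_i},
\qquad
T(x_0, x_n) = T(x_0, x_k)\, T(x_k, x_n).
```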

Q1: Differentiable Volume Rendering

Q1.3: Ray Sampling

Image 1 Image 2
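
A minimal sketch of per-pixel ray generation under a simple pinhole model (the intrinsics layout, pixel-center offset, and function name here are assumptions, not the assignment's exact camera API):

```python
import torch

def get_rays(H, W, K, c2w):
    """Generate one ray per pixel: origins and unit directions, each (H*W, 3).

    K: (3, 3) pinhole intrinsics; c2w: (4, 4) camera-to-world transform.
    """
    v, u = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    # Unproject pixel centers through the intrinsics to camera-space directions.
    dirs = torch.stack([
        (u + 0.5 - K[0, 2]) / K[0, 0],
        (v + 0.5 - K[1, 2]) / K[1, 1],
        torch.ones_like(u),
    ], dim=-1).reshape(-1, 3)
    # Rotate into world space and normalize; every ray starts at the camera center.
    dirs = dirs @ c2w[:3, :3].T
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)
    origins = c2w[:3, 3].expand_as(dirs)
    return origins, dirs
```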

Q1.4: Point Sampling

Image 1
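
A sketch of the stratified sampling behind these plots, assuming hypothetical near/far bounds and sample counts:

```python
import torch

def sample_points_along_rays(origins, dirs, near=0.1, far=6.0, n_samples=64, stratified=True):
    """Return sample locations (N, n_samples, 3) and their depths (N, n_samples)."""
    N = origins.shape[0]
    z = torch.linspace(near, far, n_samples).expand(N, n_samples)
    if stratified:
        # Jitter each depth uniformly within its bin so training sees fresh samples.
        z = z + torch.rand_like(z) * (far - near) / n_samples
    points = origins[:, None, :] + z[..., None] * dirs[:, None, :]
    return points, z
```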

Q1.5: Volume Rendering

Image 1 Image 2
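
The renderer combines the transmittance from Q0 with per-sample densities and colors; with $\delta_i$ the spacing between samples along a ray $\mathbf{r}$:

```latex
w_i = T_i \left(1 - e^{-\sigma_i \delta_i}\right),
\qquad
T_i = \prod_{j<i} e^{-\sigma_j \delta_j},
\qquad
\hat{C}(\mathbf{r}) = \sum_i w_i\, \mathbf{c}_i .
```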

Q2: Optimizing a Basic Implicit Volume

Q2.3

Image 1

Compare this gif to the one below, and attach it in your write-up:

The optimized volume is wider than the initial box shape.


Q3: Optimizing a Neural Radiance Field

Image 1

Q4: NeRF Extras

Q4.1: View Dependence

Image 1 Image 2 Image 3
Caption: materials | lego | materials_highres (left to right)

Discuss the trade-offs between increased view dependence and generalization quality:

Adding view dependence lets the model capture specular and reflective effects that change with viewpoint. However, the extra capacity can let the color network overfit to the training views and generalize worse to novel angles, especially when training views are sparse.
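
As a concrete illustration of this design choice, here is a hypothetical sketch of a view-dependent head in PyTorch: density is predicted from position features alone, while color additionally conditions on an embedded view direction (all names and dimensions are assumptions):

```python
import torch
import torch.nn as nn

class ViewDependentHead(nn.Module):
    """Density ignores the viewing direction; color conditions on it."""

    def __init__(self, feat_dim=256, dir_embed_dim=27):
        super().__init__()
        self.density = nn.Linear(feat_dim, 1)
        self.color = nn.Sequential(
            nn.Linear(feat_dim + dir_embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),
        )

    def forward(self, feats, dir_embed):
        sigma = torch.relu(self.density(feats))  # density stays view-independent
        rgb = self.color(torch.cat([feats, dir_embed], dim=-1))
        return sigma, rgb
```

Keeping density view-independent limits how much the extra view input can distort geometry, which is one way to trade a little view dependence back for generalization.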


Q5: Sphere Tracing

Image 1

Please include this in your submission along with a short write-up describing your implementation:

The sphere-tracing loop marches each ray forward by the signed distance predicted at its current point; because the SDF value bounds the distance to the nearest surface, this step never overshoots. After a fixed number of iterations, rays whose final |SDF| falls below a small threshold are marked as hits, giving an efficient ray–surface intersection test.
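
A minimal sketch of that loop in PyTorch; the `sdf` callable, iteration count, and thresholds are assumptions rather than the exact values used:

```python
import torch

def sphere_trace(origins, directions, sdf, max_iters=64, eps=1e-5, far=10.0):
    """origins, directions: (N, 3); sdf: (N, 3) points -> (N,) signed distances."""
    t = torch.zeros(origins.shape[0], device=origins.device)  # distance marched per ray
    points = origins.clone()
    for _ in range(max_iters):
        # The SDF value is a safe step: it cannot carry a ray past the surface.
        t = t + sdf(points)
        points = origins + t[:, None] * directions
    mask = (sdf(points).abs() < eps) & (t < far)  # hits converged within threshold
    return points, mask
```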


Q6: Optimizing a Neural SDF

Image 1 Image 2

The former visualizes the input point cloud used for training, and the latter shows your prediction, which you should include on the webpage along with brief descriptions of your MLP and eikonal loss:

The Neural Surface MLP maps 3D coordinates to signed distance values, using harmonic embeddings of the input and skip connections to capture detailed geometry. The eikonal loss penalizes deviations of the SDF's gradient norm from 1, which keeps the learned field a valid distance function and yields smooth, consistent surface reconstruction.
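
A sketch of the eikonal term, assuming a `model` that maps (N, 3) points to signed distances; `torch.autograd.grad` supplies the spatial gradient:

```python
import torch

def eikonal_loss(model, points):
    """Penalize deviation of the SDF gradient norm from 1 at sampled points."""
    points = points.clone().requires_grad_(True)
    sdf = model(points)
    grad = torch.autograd.grad(
        outputs=sdf, inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,  # keep the graph so the loss itself can be backpropagated
    )[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```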


Q7: VolSDF

SDF Beta Experiments

1. How does high beta bias your learned SDF? What about low beta?

  • High beta smooths the SDF-to-density transition, which blurs the surface boundary; low beta makes the surface sharper but can make training unstable (see the conversion sketch after this list).

2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

  • It is easier to train with high beta because the smoother SDF-to-density mapping spreads gradients over a wider band around the surface, making optimization more stable.

3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?

  • You are more likely to get an accurate surface with low beta because it produces a sharper zero-level set once the model has converged.

4. Experiment results and best hyper-parameters:

  • Best settings: α = 10.0, β = 0.05 (default)
  • Other settings tried:
    • α = 1.0, β = 0.05 → surface too soft
    • α = 10.0, β = 0.5 → surface blurry
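
For reference, the α/β conversion being tuned is VolSDF's Laplace-CDF mapping, σ(x) = α Ψ_β(−d(x)); a minimal PyTorch sketch (the clamps just keep the unused `where` branch finite):

```python
import torch

def sdf_to_density(sdf, alpha=10.0, beta=0.05):
    """VolSDF: alpha scales overall density, beta controls sharpness at the surface."""
    s = -sdf  # positive inside the surface
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s.clamp(max=0.0) / beta),
        1.0 - 0.5 * torch.exp(-s.clamp(min=0.0) / beta),
    )
    return alpha * psi
```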

alpha: 10.0 | beta: 0.05

Image 1 Image 2

alpha: 1.0 | beta: 0.05

Image 1 Image 2

alpha: 10.0 | beta: 0.5

Image 1 Image 2

Q8: Neural Surface Extras

Q8.2: Fewer Training Views

VolSDF Results (20 views)

Image 1 Image 2

NeRF Result (20 views)

Image 1

You should also compare the VolSDF solution to a NeRF solution learned using similar views:

Even with fewer views, NeRF seems to produce a better reconstruction than VolSDF.

Q8.3: Alternate SDF-to-Density Conversions

Image 1 Image 2
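
One simple alternative to the Laplace CDF, shown here as a hypothetical sketch rather than the exact conversion used for the renders above, replaces it with a scaled logistic sigmoid, which is likewise monotone in the signed distance with β controlling the transition width:

```python
import torch

def sdf_to_density_logistic(sdf, alpha=10.0, beta=0.05):
    """Alternative conversion: a logistic CDF in place of the Laplace CDF."""
    return alpha * torch.sigmoid(-sdf / beta)
```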