Assignment 3: Neural Volume Rendering and Surface Rendering

A. Neural Volume Rendering (80 points)

0. Transmittance Calculation (10 points)

Transmittance computation

General transmittance equation for a non-homogeneous medium, where ω is the unit direction from x to y:

$$T(x, y) = \exp\left(-\int_{t=0}^{\|x - y\|} \sigma(x + \omega t)\, dt\right)$$

Computation of the individual transmittance values:

$$T(y_1, y_2) = e^{-\int_0^2 1\, dt} = e^{-(2 - 0)} = e^{-2}$$

$$T(y_2, y_4) = T(y_2, y_3) \cdot T(y_3, y_4) = e^{-\int_2^3 0.5\, dt} \cdot e^{-\int_3^6 10\, dt} = e^{-(3 \cdot 0.5 - 2 \cdot 0.5)} \cdot e^{-(6 \cdot 10 - 3 \cdot 10)} = e^{-0.5} \cdot e^{-30} = e^{-30.5}$$

$$T(x, y_4) = T(x, y_1) \cdot T(y_1, y_2) \cdot T(y_2, y_4) = 1 \cdot e^{-2} \cdot e^{-30.5} = e^{-32.5}$$

$$T(x, y_3) = T(x, y_1) \cdot T(y_1, y_2) \cdot T(y_2, y_3) = 1 \cdot e^{-2} \cdot e^{-\int_2^3 0.5\, dt} = e^{-2} \cdot e^{-0.5} = e^{-2.5}$$
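As a quick sanity check, the same values can be reproduced numerically. The sketch below assumes the piecewise-constant densities read off the figure (σ = 0 from x to y1, σ = 1 over the length-2 segment y1→y2, σ = 0.5 over y2→y3, σ = 10 over y3→y4); the segment encoding is my own.

import math

# Assumed piecewise-constant densities along the ray, read off the figure:
#   x  -> y1: sigma = 0    (any length; transmittance stays 1)
#   y1 -> y2: sigma = 1    over length 2
#   y2 -> y3: sigma = 0.5  over length 1
#   y3 -> y4: sigma = 10   over length 3
segments = {
    ("x", "y1"):  (1.0, 0.0),   # (length, sigma); length is arbitrary since sigma = 0
    ("y1", "y2"): (2.0, 1.0),
    ("y2", "y3"): (1.0, 0.5),
    ("y3", "y4"): (3.0, 10.0),
}

def transmittance(path):
    # Multiply per-segment transmittances exp(-sigma * length) along the path
    optical_depth = sum(length * sigma for length, sigma in (segments[seg] for seg in path))
    return math.exp(-optical_depth)

print(transmittance([("y1", "y2")]))                                           # e^-2
print(transmittance([("y2", "y3"), ("y3", "y4")]))                             # e^-30.5
print(transmittance([("x", "y1"), ("y1", "y2"), ("y2", "y3"), ("y3", "y4")]))  # e^-32.5
print(transmittance([("x", "y1"), ("y1", "y2"), ("y2", "y3")]))                # e^-2.5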

1. Differentiable Volume Rendering

1.3. Ray sampling (5 points)

Figure 1: Left: Grid. Right: Rays.

1.4. Point sampling (5 points)

Figure 2: Points.

1.5. Volume rendering (20 points)

Figure 3: Left: Color. Right: Depth.

2. Optimizing a basic implicit volume

Box center: (0.25, 0.25, 0.00)
Box side lengths: (2.01, 1.50, 1.50)

Figure 4: Left: TA result. Right: My result.

3. Optimizing a Neural Radiance Field (NeRF) (20 points)

Figure 5: NeRF results.

4. NeRF Extras (CHOOSE ONE! More than one is extra credit)

4.1 View Dependence (10 points)

Adding view dependence yields more realistic renderings and often improves reconstruction by modeling specular and other view-dependent effects. The trade-off is reduced generalization to unseen viewpoints, as the network can overfit to training views and entangle appearance with viewing direction—capturing highlights well in-sample but degrading novel-view stability.
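As a rough illustration of how the view direction enters the model, here is a minimal sketch of a view-dependent color head. The layer sizes, feature dimension, and direction-embedding size are assumptions, not the exact architecture used: density is predicted from position features alone, while the embedded view direction is concatenated only before the color layers.

import torch
import torch.nn as nn

class ViewDependentHead(nn.Module):
    # Toy head: density depends on position features only; color also sees the
    # embedded view direction (dir_embedding is assumed to be a harmonic/positional
    # embedding of the normalized viewing direction).
    def __init__(self, feat_dim=128, dir_embed_dim=24):
        super().__init__()
        self.density = nn.Sequential(nn.Linear(feat_dim, 1), nn.ReLU())
        self.color = nn.Sequential(
            nn.Linear(feat_dim + dir_embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, features, dir_embedding):
        sigma = self.density(features)                                  # (N, 1)
        rgb = self.color(torch.cat([features, dir_embedding], dim=-1))  # (N, 3)
        return sigma, rgb

Keeping the density branch independent of the view direction is what prevents the geometry itself from becoming view-dependent.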

Figure 6: NeRF with view dependence results.

B. Neural Surface Rendering (50 points)

5. Sphere Tracing (10 points)

Figure 7: Torus Surface.

6. Optimizing a Neural SDF (15 points)

The implementation uses a network architecture similar to the one in Section 4, but with the final non-linear activation removed so the network can output the negative values a signed distance function requires.
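A minimal sketch of what that output head looks like (layer sizes and names are assumptions, not the assignment's exact module):

import torch.nn as nn

class SDFHead(nn.Module):
    # The last layer is a plain Linear with no activation, so the output
    # can go negative inside the surface.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),   # raw signed distance, no final non-linearity
        )

    def forward(self, features):
        return self.mlp(features)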

The eikonal loss implementation is defined as:

import torch

def eikonal_loss(gradients):
    # Eikonal constraint: the SDF gradient should have unit norm everywhere
    return torch.square(torch.norm(gradients, dim=-1) - 1).mean()

This loss computes the norm of the SDF gradient at each sample point and penalizes squared deviations from the target norm of 1, which is the eikonal property of a signed distance function.
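For completeness, the gradients passed to this loss can be obtained with autograd on the SDF output; the sketch below is one way to do it, with sdf_model and sample_points as placeholders rather than the assignment's actual names.

import torch

def sdf_gradients(sdf_model, points):
    # Differentiate the predicted signed distance w.r.t. the input points
    points = points.clone().requires_grad_(True)
    sdf = sdf_model(points)                      # (N, 1) signed distances
    (grads,) = torch.autograd.grad(
        outputs=sdf,
        inputs=points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,                       # keep the graph so the loss can backpropagate
    )
    return grads                                 # (N, 3)

# loss = eikonal_loss(sdf_gradients(sdf_model, sample_points))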

Figure 8: Neural SDF Prediction.

7. VolSDF (15 points)

Intuitive Explanation of Parameters

Beta (β): Controls the sharpness of the density transition near the surface. It determines how rapidly density falls off as distance from the surface increases.

Alpha (α): Scales the overall density magnitude; it is the maximum density the converted SDF can take, approached by points inside the surface.

1. How does high beta bias your learned SDF? What about low beta?

High Beta: Produces a blurry SDF with gradual density transitions, resulting in less defined surfaces and lower peak densities.

Low Beta: Creates sharp density transitions near the surface, yielding more detailed and well-defined geometry.

2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?

Low Beta: More challenging to train as only points near the surface contribute significant density, reducing the effective training signal. However, this produces superior surface quality.

High Beta: Easier to train due to broader density distribution, but yields less accurate surfaces with blurrier boundaries.

3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?

Low beta produces more accurate surfaces by concentrating density gradients near the true surface boundary, enabling precise geometry recovery. High beta spreads density over a larger region, leading to less accurate surface localization.
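To make the role of α and β concrete, here is a small sketch of the VolSDF-style Laplace-CDF conversion σ(x) = α · Ψ_β(−d(x)) that these parameters control (the function name and the default α/β values are placeholders, not tuned settings):

import torch

def sdf_to_density(signed_distance, alpha=10.0, beta=0.05):
    # sigma = alpha * Psi_beta(-d), with Psi_beta the CDF of a zero-mean Laplace
    # distribution of scale beta; alpha and beta defaults are illustrative only.
    s = -signed_distance
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s.clamp(max=0.0) / beta),
        1.0 - 0.5 * torch.exp(-s.clamp(min=0.0) / beta),
    )
    return alpha * psi

With a small β, Ψ_β jumps from 0 to 1 over a narrow band around the zero level set, which is exactly the sharp-transition behavior discussed above.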

Figure 9: Left: Neural Surface. Right: Geometry.

8. Neural Surface Extras (CHOOSE ONE! More than one is extra credit)

8.2 Fewer Training Views (10 points)

Figure 10: Left: NeRF results. Center: Neural Surface. Right: Geometry.

8.3 Alternate SDF to Density Conversions (10 points)

I implemented the naive SDF-to-density conversion from the NeuS paper, referred to as S-density. The formula is:


$$\phi_s(x) = \frac{s\, e^{-sx}}{\left(1 + e^{-sx}\right)^2}$$
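A small sketch of this conversion, using the identity that the formula equals s · sigmoid(sx) · (1 − sigmoid(sx)); the function name and the default scale s are placeholders:

import torch

def s_density(signed_distance, s=64.0):
    # phi_s(d) = s * e^{-s d} / (1 + e^{-s d})^2 = s * sigmoid(s d) * (1 - sigmoid(s d));
    # the scale s here is an illustrative value, not a tuned one.
    sig = torch.sigmoid(s * signed_distance)
    return s * sig * (1.0 - sig)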

Figure 11: VolSDF with the naive SDF-to-density conversion. Left: Neural Surface. Right: Geometry.