16-825 Assignment 3

Write-up by ylchen, with visuals and GIFs.

0 — Transmittance Calculation

transmittance calculation
depth.png
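The transmittance in this part follows T_i = exp(−Σ_{j<i} σ_j δ_j), i.e. the fraction of light surviving up to sample i. A minimal numpy sketch, assuming piecewise-constant density between samples (function and variable names are my own, not from the assignment code):

```python
import numpy as np

def transmittance(sigmas, deltas):
    """Per-sample transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    tau = sigmas * deltas  # optical thickness of each ray segment
    # Exclusive cumulative sum: T_0 = 1, since nothing is absorbed
    # before the first sample.
    accum = np.concatenate([[0.0], np.cumsum(tau)[:-1]])
    return np.exp(-accum)

# Example: uniform density sigma = 1 over four segments of length 0.5.
T = transmittance(np.full(4, 1.0), np.full(4, 0.5))
```

Each segment with σδ = 0.5 multiplies the surviving fraction by exp(−0.5), so T decays geometrically along the ray.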

1 — Depth Visualization

Depth is normalized by its maximum value for display. Also included are the grid image, ray and sample-point visualizations, and the spiral render GIF from Part 1.

depth visualization
depth.png
grid
grid.png
ray visualization
rays.png
part 1 spiral render
part_1.gif
sample points along rays
sample_points.png
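The depth normalization described above is just a division by the map's maximum; a minimal sketch (the small epsilon guard against an all-zero map is my addition):

```python
import numpy as np

def normalize_depth(depth):
    """Scale a depth map to [0, 1] by its maximum for display."""
    return depth / max(depth.max(), 1e-8)  # guard against an all-zero map
```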

2 — Optimization & Rendering

2.2 Loss & Training

Training converges with very low loss; the learned box center and side lengths are reported by the script.

Box center: (0.25022637844085693, 0.2505774199962616, -0.0004850137047469616)

Box side lengths: (2.0051112174987793, 1.503594994544983, 1.5033595561981201)

2.3 Visualization

Spiral render of the optimized volume plus snapshots before/after training.

part 2 spiral render
part_2.gif
before training 0
before 0
before training 1
before 1
before training 2
before 2
before training 3
before 3
after training 0
after 0
after training 1
after 1
after training 2
after 2
after training 3
after 3

3 — NeRF: Materials & Views

Qualitative object appearance across view sweeps.

NeRF lego highres
lego (highres)
NeRF lego lowres
lego (lowres)

4 — NeRF Extras

4.1 View Dependence

NeRF materials highres
materials (highres)
NeRF materials lowres
materials (lowres)

Examples above (materials/lego) illustrate view-dependent effects in the learned radiance field.

5 — Sphere Tracing

sphere tracing result
part_5.gif
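The tracer behind this render is the standard sphere-tracing loop: step along the ray by the current SDF value, which is the largest step guaranteed not to cross the surface, until the distance falls below a threshold. A minimal sketch (names and default constants are my own):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, far=10.0):
    """March along origin + t * direction, stepping by the SDF value,
    until |sdf| < eps (hit) or t exceeds the far bound (miss)."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if abs(d) < eps:
            return t, True   # converged onto the surface
        t += d               # SDF value is a safe step size
        if t > far:
            break
    return t, False
```

For a unit sphere, sdf(p) = ‖p‖ − 1, a ray from (0, 0, −3) along +z hits at t = 2.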

6 — Optimizing a Neural SDF

Left: input point cloud used for training. Right: predicted SDF rendering.

input point cloud
part_6_input.gif
neural sdf prediction
part_6.gif

My neural SDF uses an MLP that maps a 3D point to a scalar signed distance, with negative values indicating the inside of the surface. The final layer is linear so the output can take both signs. The Eikonal loss enforces that the gradient norm of the SDF stays close to 1:

Lₑₖ = Eₓ[(‖∇ₓ f(x)‖₂ − 1)²]

This encourages f(x) to behave like a valid signed distance function, yielding smoother and more accurate surfaces.
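The Eikonal term can be written directly from a batch of SDF gradients; a minimal numpy sketch, assuming ∇ₓf(x) has already been computed for each sampled point (e.g. via autograd in the actual training code):

```python
import numpy as np

def eikonal_loss(grads):
    """L_eik = mean over the batch of (||grad f(x)||_2 - 1)^2,
    given grads of shape (N, 3)."""
    norms = np.linalg.norm(grads, axis=-1)
    return np.mean((norms - 1.0) ** 2)
```

Unit-norm gradients incur zero penalty; a gradient of norm 2 contributes (2 − 1)² = 1.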

7 — VolSDF: SDF→Density

VolSDF rendering
part_7.gif
VolSDF geometry sweep
part_7_geometry.gif

Alpha scales the overall density, controlling how strongly the surface contributes to volume accumulation during rendering, while Beta controls how rapidly density ramps up as points approach the surface. In short: Beta governs how crisp or blurry surfaces look, while Alpha governs how opaque the surface is (whether gaps show through).

Low Beta gives a very sharp transition, concentrating density tightly near the surface; high Beta gives a smoother transition, with more diffuse density around the surface.

High Beta is easier to train, since the smoother density profile yields more stable gradients; low Beta creates a steep, step-like density that makes learning unstable early in training.

Low Beta yields sharper, more accurate surfaces once the network has learned roughly correct geometry, since the density concentrates tightly around the zero level set.
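Concretely, VolSDF converts signed distance to density with a scaled Laplace CDF, σ(x) = α · Ψ_β(−d(x)), where Ψ_β is the CDF of a zero-mean Laplace distribution with scale β. A minimal numpy sketch of that mapping (function name is mine):

```python
import numpy as np

def sdf_to_density(sdf, alpha, beta):
    """VolSDF density: sigma = alpha * Psi_beta(-sdf), where Psi_beta
    is the Laplace(0, beta) CDF. Negative sdf (inside) -> density near
    alpha; positive sdf (outside) -> density near 0."""
    s = -np.asarray(sdf)
    psi = np.where(s <= 0,
                   0.5 * np.exp(s / beta),          # outside the surface
                   1.0 - 0.5 * np.exp(-s / beta))   # inside the surface
    return alpha * psi
```

At the surface (sdf = 0) the density is exactly α/2; shrinking β steepens this transition, which is what drives the sharpness/stability trade-off discussed above.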

8 — Neural Surface Extras

VolSDF 20 views
part_7 — 20 views
VolSDF 50 views
part_7 — 50 views
geometry 20 views
geometry — 20 views
geometry 50 views
geometry — 50 views
nerf 20 views
nerf — 20 views
nerf 50 views
nerf — 50 views

VolSDF performs noticeably worse than NeRF under sparse training views, likely because surface-based representations require more stable geometry early in training, while NeRF’s volumetric model is more robust to incomplete view coverage.