
General transmittance equation for a non-homogeneous medium:

$$T(x, y) = \exp\left(-\int_{t=0}^{\|x - y\|} \sigma(x + \omega t)\, dt\right)$$

where $\omega = (y - x) / \|y - x\|$ is the unit direction from $x$ to $y$.

Computation of the different transmittance values:

$$T(y_1, y_2) = e^{-\int_0^2 1\, dt} = e^{-(2 - 0)} = e^{-2}$$

$$T(y_2, y_4) = T(y_2, y_3) \cdot T(y_3, y_4) = e^{-\int_2^3 0.5\, dt} \cdot e^{-\int_3^6 10\, dt} = e^{-(3 \cdot 0.5 - 2 \cdot 0.5)} \cdot e^{-(6 \cdot 10 - 3 \cdot 10)} = e^{-0.5} \cdot e^{-30} = e^{-30.5}$$

$$T(x, y_4) = T(x, y_1) \cdot T(y_1, y_2) \cdot T(y_2, y_4) = 1 \cdot e^{-2} \cdot e^{-30.5} = e^{-32.5}$$

$$T(x, y_3) = T(x, y_1) \cdot T(y_1, y_2) \cdot T(y_2, y_3) = 1 \cdot e^{-2} \cdot e^{-\int_2^3 0.5\, dt} = e^{-2} \cdot e^{-0.5} = e^{-2.5}$$
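As a sanity check, here is a minimal numerical sketch of these computations. The segment boundaries and densities are read off the integrals above, with t = 0 taken at y1 and σ = 0 on the leg from x to y1; the function name is mine:

import math

# Piecewise-constant density segments (t_start, t_end, sigma), inferred from
# the integrals above: sigma = 1 on [0, 2], 0.5 on [2, 3], 10 on [3, 6].
segments = [(0.0, 2.0, 1.0), (2.0, 3.0, 0.5), (3.0, 6.0, 10.0)]

def transmittance(t0, t1):
    # Optical depth: integrate sigma over the overlap of [t0, t1] with each segment.
    optical_depth = sum(
        sigma * max(0.0, min(t1, b) - max(t0, a)) for a, b, sigma in segments
    )
    return math.exp(-optical_depth)

print(transmittance(0.0, 2.0))  # T(y1, y2) = e^-2
print(transmittance(2.0, 6.0))  # T(y2, y4) = e^-30.5
print(transmittance(0.0, 6.0))  # T(x, y4) = e^-32.5 (sigma = 0 before y1 adds nothing)
print(transmittance(0.0, 3.0))  # T(x, y3) = e^-2.5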

Figure 1: Left: Grid. Right: Rays.

Figure 2: Points.

Figure 3: Left: Color. Right: Depth.
Box center: (0.25, 0.25, 0.00)
Box side lengths: (2.01, 1.50, 1.50)

Figure 4: Left: TA result. Right: My result.

Figure 5: NeRF results.
Adding view dependence yields more realistic renderings and often improves reconstruction by modeling specular and other view-dependent effects. The trade-off is reduced generalization to unseen viewpoints, as the network can overfit to training views and entangle appearance with viewing direction—capturing highlights well in-sample but degrading novel-view stability.
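To make this concrete, here is a minimal sketch of how view dependence is typically wired into the network (the class name, feature widths, and encoding size are illustrative assumptions, not the exact architecture used here): density depends only on position features, so geometry stays view-independent, while color additionally conditions on the encoded view direction.

import torch
import torch.nn as nn

class ViewDependentHead(nn.Module):
    # Hypothetical head: sigma from position features only; RGB from
    # position features concatenated with the encoded view direction.
    def __init__(self, feat_dim=256, view_dim=27):
        super().__init__()
        self.density = nn.Sequential(nn.Linear(feat_dim, 1), nn.ReLU())
        self.color = nn.Sequential(
            nn.Linear(feat_dim + view_dim, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),
        )

    def forward(self, features, view_enc):
        sigma = self.density(features)
        rgb = self.color(torch.cat([features, view_enc], dim=-1))
        return sigma, rgb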

Figure 6: NeRF with view dependence results.

Figure 7: Torus Surface.
The implementation uses a network architecture similar to Section 4, with the final non-linear activation removed so the network can output the negative values required by a signed distance function.
The eikonal loss implementation is defined as:
import torch

def eikonal_loss(gradients):
    # Penalize deviation of the SDF gradient norm from 1 (the eikonal constraint).
    return torch.square(torch.norm(gradients, dim=-1) - 1).mean()

This loss computes the norm of the gradients and penalizes deviations from the desired norm value of 1.
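A hypothetical usage example, with a stand-in linear layer in place of the actual SDF network; the gradients come from autograd with create_graph=True so the loss itself remains differentiable:

sdf_network = torch.nn.Linear(3, 1)  # placeholder for the actual SDF MLP
points = torch.rand(1024, 3, requires_grad=True)
sdf_values = sdf_network(points)
gradients, = torch.autograd.grad(
    outputs=sdf_values,
    inputs=points,
    grad_outputs=torch.ones_like(sdf_values),
    create_graph=True,  # keep the graph so the eikonal loss can backpropagate
)
loss = eikonal_loss(gradients)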

Figure 8: Neural SDF Prediction.
Beta (β): Controls the sharpness of the density transition near the surface. It determines how rapidly density falls off as distance from the surface increases.
Alpha (α): Represents the maximum density value for points on the surface.
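For reference, VolSDF converts the SDF to density as $\sigma(x) = \alpha\, \Psi_\beta(-d(x))$, where $\Psi_\beta$ is the CDF of a zero-mean Laplace distribution with scale $\beta$. A minimal sketch of that conversion (the function name is mine):

import torch

def sdf_to_density(sdf, alpha, beta):
    # Laplace CDF of the negated signed distance, scaled by alpha.
    s = -sdf
    psi = torch.where(
        s <= 0,
        0.5 * torch.exp(s / beta),
        1.0 - 0.5 * torch.exp(-s / beta),
    )
    return alpha * psi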
How does a high beta bias your learned SDF? What about a low beta?
High Beta: Produces a blurry SDF with gradual density transitions, resulting in less defined surfaces and lower peak densities.
Low Beta: Creates sharp density transitions near the surface, yielding more detailed and well-defined geometry.
On ease of training:
Low Beta: More challenging to train, since only points near the surface contribute significant density, which reduces the effective training signal; however, this produces superior surface quality.
High Beta: Easier to train due to the broader density distribution, but yields less accurate surfaces with blurrier boundaries.
Low beta produces more accurate surfaces by concentrating density gradients near the true surface boundary, enabling precise geometry recovery. High beta spreads density over a larger region, leading to less accurate surface localization.

Figure 9: Left: Neural Surface. Right: Geometry.

Figure 10: Left: NeRF results. Center: Neural Surface. Right: Geometry.
I implemented the naive SDF-to-density conversion from the NeuS paper, referred to as S-density: the density is the logistic density $\phi_s$ applied to the SDF value. The formula is:

$$\sigma(x) = \phi_s(f(x)), \qquad \phi_s(x) = \frac{s\, e^{-sx}}{(1 + e^{-sx})^2},$$

where $f(x)$ is the signed distance and $s$ controls the sharpness of the distribution (larger $s$ concentrates density near the surface).
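A minimal sketch of this conversion (the function name is mine; it uses the identity $\phi_s(x) = s\,\mathrm{sigmoid}(sx)\,(1 - \mathrm{sigmoid}(sx))$ for numerical stability):

import torch

def s_density(sdf, s):
    # Logistic density of the SDF value: phi_s(x) = s * sig(s*x) * (1 - sig(s*x)).
    sig = torch.sigmoid(s * sdf)
    return s * sig * (1.0 - sig)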

Figure 11: VolSDF using naive SDF to density function. Left: Neural Surface. Right: Geometry.