Vaibhav Parekh | Fall 2025
Center of the box after training: (0.25, 0.25, 0.00)
Side lengths of the box after training: (2.00, 1.50, 1.50)
Fig. 2.3(b). Before Training
Fig. 2.3(c). After Training
Trade-offs between View Dependence and Generalization
Sphere tracing advances rays through the scene by repeatedly stepping forward by the signed distance to the nearest surface. At each step, the implicit function estimates how far the current point is from the surface. When this distance falls below a small threshold (epsilon), the ray is considered to have hit the surface. Rays that never reach this threshold within the iteration limit are marked as misses.
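The loop described above can be sketched in a few lines. This is a minimal illustration, not the assignment's implementation; the function name `sphere_trace`, the `far` clamp, and the toy sphere SDF are all illustrative choices.

```python
import numpy as np

def sphere_trace(sdf, origins, directions, eps=1e-4, max_iters=64, far=10.0):
    """March each ray forward by the signed distance until within eps of a surface."""
    t = np.zeros(origins.shape[0])                 # distance traveled along each ray
    hit = np.zeros(origins.shape[0], dtype=bool)
    for _ in range(max_iters):
        points = origins + t[:, None] * directions
        dist = sdf(points)                         # signed distance at current points
        hit |= np.abs(dist) < eps                  # converged rays count as hits
        t = np.where(hit, t, t + dist)             # advance only unconverged rays
        t = np.minimum(t, far)                     # clamp so misses do not diverge
    return t, hit

# Toy check: a unit sphere at the origin, one ray from z = -3 toward +z.
sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0
t, hit = sphere_trace(sphere_sdf,
                      np.array([[0.0, 0.0, -3.0]]),
                      np.array([[0.0, 0.0, 1.0]]))
```

The ray starts 3 units from the center, so it should report a hit after traveling about 2 units, reaching the sphere's near side.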
The model is a single MLP with a skip connection at the fourth layer. Unlike NeRF's density head, the distance head ends with a plain linear layer, since SDF values are continuous and unbounded. The eikonal loss enforces that the gradient magnitude equals one by minimizing the mean squared error between ‖∇f(x)‖ and 1, which encourages the network to learn a valid signed distance field.
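The eikonal term can be sketched as below, assuming a PyTorch model that maps 3D points to SDF values; the helper name `eikonal_loss` is illustrative, and the exact SDF used for the sanity check is a stand-in for the trained network.

```python
import torch

def eikonal_loss(model, points):
    """MSE between the SDF gradient norm and 1, evaluated at sampled points."""
    points = points.clone().requires_grad_(True)
    sdf = model(points)
    # autograd gives the spatial gradient of the SDF at each sample
    grad, = torch.autograd.grad(sdf, points,
                                grad_outputs=torch.ones_like(sdf),
                                create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

# Sanity check: an exact sphere SDF has unit gradient everywhere off-center,
# so the eikonal loss should be essentially zero.
exact_sdf = lambda p: p.norm(dim=-1, keepdim=True) - 1.0
loss = eikonal_loss(exact_sdf, torch.randn(128, 3))
```

Passing `create_graph=True` keeps the gradient computation differentiable, so the loss can be backpropagated through during training.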
Intuitive explanation of alpha and beta
alpha: The alpha parameter governs how opaque or dense the medium is. It sets how strongly light is absorbed as it moves through the volume. A larger alpha makes regions denser and less transparent, so light is blocked or scattered more quickly, producing solid-looking surfaces. A smaller alpha keeps the medium thinner and more translucent, allowing more light to pass through. In rendering, increasing alpha therefore yields more opaque, well-defined surfaces.

beta: The beta parameter controls how sharply the signed distance is converted into density. A small beta concentrates the transition from free space to occupied space in a narrow band around the zero level set, producing sharp surface boundaries. A larger beta smooths this transition, spreading density over a wider region and yielding softer, more diffuse surfaces.
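Both roles are visible in the VolSDF-style conversion, where density is alpha times the CDF of a zero-mean Laplace distribution with scale beta, evaluated at the negated SDF. A sketch under that assumption (the function name `sdf_to_density` is illustrative):

```python
import numpy as np

def sdf_to_density(d, alpha=1.0, beta=0.1):
    """VolSDF-style density: alpha * Laplace-CDF(-d), with scale beta.

    Inside the surface (d < 0) density saturates toward alpha; outside it
    decays to zero over a band whose width is set by beta.
    """
    return alpha * np.where(d <= 0,
                            1.0 - 0.5 * np.exp(d / beta),
                            0.5 * np.exp(-d / beta))

d = np.array([-0.5, 0.0, 0.5])
print(sdf_to_density(d, alpha=2.0, beta=0.05))
# near alpha inside, exactly alpha/2 at the surface, near 0 outside
```

Shrinking beta narrows the transition band toward a step at the zero level set, which is exactly the sharpening behavior discussed in the questions below.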
Q1. How does high beta bias your learned SDF? What about low beta?
A high beta spreads density over a wide band around the surface, biasing the learned SDF toward smooth, blurry geometry: the network can place its zero level set imprecisely and still render plausibly. A low beta concentrates density in a thin shell at the zero level set, biasing the model toward a crisp, precise surface, but making the loss more sensitive to noise and prone to overfitting.
Q2. Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
Training is generally easier with a higher beta. Density is spread over a wider region, so more samples along each ray receive gradient signal and the optimization landscape is smoother, leading to more stable convergence. With a low beta, gradients concentrate in a thin shell around the surface, making training harder and less stable, especially early on.
Q3. Would you be more likely to learn an accurate surface with high beta or low beta? Why?
A lower beta produces a sharper, more accurate surface, since density concentrates tightly at the zero level set and fine geometric detail is preserved. However, it is harder to optimize and may introduce artifacts if applied too early. A higher beta gives smoother convergence but results in blurred, less precise surfaces. In practice, accuracy is often improved by starting with a high beta and gradually annealing it downward during training.
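One common way to implement such an anneal is a geometric interpolation from a soft beta to a sharp one; the function name `beta_schedule` and the endpoint values here are illustrative, not taken from the assignment.

```python
def beta_schedule(step, total_steps, beta_start=0.1, beta_end=0.005):
    """Geometrically anneal beta from a soft (large) value to a sharp (small) one."""
    t = min(step / total_steps, 1.0)       # training progress in [0, 1]
    return beta_start * (beta_end / beta_start) ** t

# Early training uses the soft beta; by the end it has shrunk to beta_end.
betas = [beta_schedule(s, 1000) for s in (0, 500, 1000)]
```

A geometric (rather than linear) schedule keeps beta changing at a roughly constant relative rate, so the surface sharpens gradually instead of collapsing in the final steps.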