Andrew ID: rajathc
XY and Ray Visualization

Points Visualization

Cube Render and Depth Visualization

After Optimization - rounded to two decimal places:
Box center: (0.25, 0.25, -0.00)
Box side lengths: (2.00, 1.50, 1.50)

The generated visualization is similar to the TA provided visualization.


View Dependence - nerf_materials_highres.yaml
Trade-offs between increased view dependence and generalization quality:
Increasing view dependence lets NeRF render more realistic view-dependent effects such as reflections and specular highlights. However, it can cause overfitting to the training views, making the model perform worse when rendering unseen angles.
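This trade-off comes from where the view direction enters the network: density is predicted from position alone, and the view direction is injected only into the color head. A minimal numpy sketch of that structure (the tiny weight matrices and `nerf_head` name are illustrative, not the assignment's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny NeRF head: density depends only on position features,
# while color additionally conditions on the viewing direction.
W_feat = 0.5 * rng.normal(size=(3, 16))       # position -> features
W_sigma = 0.5 * rng.normal(size=(16, 1))      # features -> density
W_rgb = 0.5 * rng.normal(size=(16 + 3, 3))    # features + view dir -> color

def nerf_head(x, d):
    feat = np.maximum(x @ W_feat, 0.0)          # ReLU position features
    sigma = np.log1p(np.exp(feat @ W_sigma))    # softplus density (view-independent)
    h = np.concatenate([feat, d], axis=-1)      # inject view direction late
    rgb = 1.0 / (1.0 + np.exp(-(h @ W_rgb)))    # sigmoid color (view-dependent)
    return sigma, rgb
```

Because geometry (density) never sees the view direction, the model cannot "explain away" shape errors with per-view colors as easily; giving the direction to the color head alone is what allows specular effects while limiting overfitting.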

Sphere Tracing - Torus Surface
Short writeup on my implementation:
I initialize the points at the ray origins and iteratively march each point along its ray direction by the signed distance at that point, using a mask to track which rays have hit the surface.

| Input Pointclouds | Neural SDF |
|---|---|
| ![]() | ![]() |
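The marching loop described above can be sketched in numpy as follows (the `torus_sdf`, iteration count, and convergence threshold are illustrative choices, not the assignment's exact values):

```python
import numpy as np

def sphere_trace(sdf, origins, directions, max_iters=64, eps=1e-5):
    """Sphere tracing: step each point along its ray by the SDF value,
    the largest step guaranteed not to overshoot the surface."""
    points = origins.copy()                     # start at the ray origins
    mask = np.zeros(len(origins), dtype=bool)   # which rays have hit
    for _ in range(max_iters):
        dist = sdf(points)                      # signed distance at current points
        mask |= np.abs(dist) < eps              # mark converged rays
        # advance only the rays that have not hit the surface yet
        points[~mask] += dist[~mask, None] * directions[~mask]
    return points, mask

# Hypothetical torus SDF (major radius R, minor radius r), axis along z.
def torus_sdf(p, R=1.0, r=0.25):
    q = np.stack([np.sqrt(p[:, 0]**2 + p[:, 1]**2) - R, p[:, 2]], axis=-1)
    return np.linalg.norm(q, axis=-1) - r
```

The mask keeps converged points frozen, so finished rays do not oscillate around the surface while the remaining rays continue marching.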
Changed Hyperparameters:
lr: 0.0005
lr_scheduler_step_size: 40
lr_scheduler_gamma: 0.9
eikonal_weight: 0.08
n_layers_distance: 4
n_hidden_neurons_distance: 256

| VolSDF Geometry | VolSDF |
|---|---|
| ![]() | ![]() |
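The `eikonal_weight` above scales an eikonal regularizer that pushes the learned SDF's gradient toward unit norm, a property every true signed distance function satisfies. A rough numpy sketch using finite differences (the real implementation would differentiate the network with autograd; `eikonal_loss` is an illustrative helper, not the codebase's function):

```python
import numpy as np

def eikonal_loss(sdf, points, eps=1e-4):
    """Penalize deviation of ||grad sdf|| from 1 (the eikonal property),
    estimating the gradient with central finite differences."""
    grads = []
    for i in range(points.shape[1]):
        offset = np.zeros(points.shape[1])
        offset[i] = eps
        grads.append((sdf(points + offset) - sdf(points - offset)) / (2 * eps))
    grad_norm = np.linalg.norm(np.stack(grads, axis=-1), axis=-1)
    return np.mean((grad_norm - 1.0) ** 2)

# A true SDF, e.g. the unit sphere, incurs (near) zero eikonal loss.
sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0
```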
Alpha controls the overall opacity scale, affecting how quickly density accumulates along a ray.
Beta adjusts the sharpness of the SDF-to-density conversion: lower values create sharper surfaces, while higher values make the transition smoother.
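Concretely, VolSDF converts the signed distance into density via the CDF of a Laplace distribution with scale beta, scaled by alpha. A numpy sketch of that mapping (the default alpha and beta here are illustrative, not my trained values):

```python
import numpy as np

def sdf_to_density(sdf_vals, alpha=10.0, beta=0.05):
    """VolSDF-style density: sigma = alpha * Laplace_CDF(-sdf; scale=beta).
    alpha sets the overall opacity scale; beta sets how sharply density
    rises across the surface (small beta -> near step function)."""
    s = -sdf_vals
    return alpha * np.where(
        s <= 0,
        0.5 * np.exp(s / beta),           # outside the surface: density decays
        1.0 - 0.5 * np.exp(-s / beta),    # inside the surface: density saturates at alpha
    )
```

With this mapping the density is alpha/2 exactly at the zero level set, near zero well outside the surface, and near alpha well inside it, which is why shrinking beta concentrates opacity tightly around the surface.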
How does high beta bias your learned SDF? What about low beta?
A high beta makes the SDF-to-density mapping smoother, causing the model to learn a blurrier surface. A low beta results in a sharper transition, encouraging a thinner, more precise surface.

Would an SDF be easier to train with volume rendering and low beta or high beta? Why?
High beta, because the smoother transition provides better gradient flow, helping the network learn more stable updates during optimization.

Would you be more likely to learn an accurate surface with high beta or low beta? Why?
Low beta is more likely to produce an accurate and well-defined surface, as it sharply localizes the density around the zero level set, aligning better with the true geometry.
I reduced beta to 0.03 to achieve a sharper and more defined surface boundary. Apart from that, I used the default hyperparameters as they seemed to work well with my network :)
train_idx: tensor([ 0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95])
| VolSDF Geometry | VolSDF | NeRF |
|---|---|---|
| ![]() | ![]() | ![]() |
With only 20 training views, both VolSDF and NeRF were able to learn the overall geometry of the Lego scene.
However, VolSDF produced a blurrier reconstruction compared to NeRF. That said, even NeRF missed fine details, especially on the back of the truck, and lacked the sharpness seen with full 100-view training.