Assignment 4
Name: Ayush Pandey
Andrew ID: ayushp
1. Sphere Tracing (30pts)
You can run the code for part 1 with:
```
python -m a4.main --config-name=torus
```
This should save `part_1.gif` in the `images` folder.
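The core sphere-tracing loop can be sketched as below. This is a minimal illustration, not the assignment's actual implementation; the function name `sphere_trace` and parameters such as `max_iters` and `far` are hypothetical. Each ray advances by the SDF value at its current position, which is the largest step guaranteed not to skip past the surface.

```python
import torch

def sphere_trace(origins, directions, sdf_fn, max_iters=64, eps=1e-5, far=10.0):
    # March each ray forward by the local SDF value until it is within
    # eps of the surface or has travelled beyond the far bound.
    t = torch.zeros(origins.shape[0], 1)
    for _ in range(max_iters):
        points = origins + t * directions
        dist = sdf_fn(points)
        active = (dist > eps) & (t < far)  # only advance unconverged rays
        t = t + dist * active
    points = origins + t * directions
    hit = sdf_fn(points) <= eps
    return points, hit
```

For a unit sphere at the origin, a ray starting at (0, 0, -3) pointing along +z converges to the intersection at (0, 0, -1) in two steps.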
2. Optimizing a Neural SDF (30pts)
MLP architecture: 7 fully connected layers with a ReLU activation after every layer except the last, which has no activation so that the network can model a signed distance function taking on any real value.
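The architecture above can be sketched in PyTorch as follows. The hidden width of 128 is an assumption for illustration; the class name `SDFMLP` is hypothetical and not the assignment's actual module.

```python
import torch.nn as nn

class SDFMLP(nn.Module):
    # 7 fully connected layers; ReLU after every layer except the last,
    # whose raw output can take any real value (a signed distance).
    def __init__(self, in_dim=3, hidden=128):
        super().__init__()
        dims = [in_dim] + [hidden] * 6 + [1]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:  # no activation on the final layer
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```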
Eikonal Loss: To ensure that the function learned by the neural network represents a signed distance function, we enforce the norm of its gradient to be close to 1 at each point. We do this by penalizing the deviation of the gradient norm from 1 at each sampled point and averaging over all points.
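The loss described above can be sketched as follows (a minimal version; the function name `eikonal_loss` is illustrative). The gradient of the SDF with respect to the input points is obtained via autograd, and its norm is pushed toward 1:

```python
import torch

def eikonal_loss(points, sdf_fn):
    # Encourage ||grad f(x)|| = 1 at each sampled point so that f behaves
    # like a signed distance function: penalize the squared deviation of
    # the gradient norm from 1, averaged over all points.
    points = points.clone().requires_grad_(True)
    sdf = sdf_fn(points)
    grad, = torch.autograd.grad(sdf.sum(), points, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```

For the exact distance-to-origin function the gradient norm is 1 everywhere, so the loss is zero; scaling that function by 2 gives gradient norm 2 and a loss of 1.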
After this, you should be able to train a NeuralSurface representation by:
```
python -m a4.main --config-name=points
```
This should save `part_2_input.gif` and `part_2.gif` in the `images` folder.
| NeuralSurface Rendering | Ground Truth Point Cloud |
|---|---|
| ![]() | ![]() |
3. VolSDF (30 pts)
- SDF to Density: Read section 3.1 of the VolSDF paper and implement their formula converting signed distance to density in the `sdf_to_density` function in `a4/renderer.py`. In your write-up, give an intuitive explanation of what the parameters `alpha` and `beta` are doing here. Also, answer the following questions:
  - How does high `beta` bias your learned SDF? What about low `beta`?
  - Would an SDF be easier to train with volume rendering and low `beta` or high `beta`? Why?
  - Would you be more likely to learn an accurate surface with high `beta` or low `beta`? Why?
Answer: Alpha denotes the density of the object being modelled, and beta denotes the spread of this density around the surface, i.e. beta determines how fast the density goes to zero as you move away from the surface. In the paper, alpha is described as the constant density and beta as the amount by which alpha is smoothed around the surface; beta is the mean absolute deviation of the Laplace distribution used to convert signed distance to density.
- A high `beta` value spreads the density of the object over a larger region, making it more transparent in a sense: the conversion from SDF to density becomes similar across a large range of SDF values. A low `beta` value concentrates the density close to the object surface, yielding a better model of the 3D object: the density is very high near the surface and quickly reaches zero as you move away from it.
- An SDF will be easier to train with volume rendering using a low `beta` than a high `beta`. With a high `beta`, the SDF-to-density conversion is similar across a wide range of values (in the extreme case, constant across all SDF values), preventing the model from learning an SDF that usefully models the surface. With a low `beta`, the density is high near sdf = 0 and quickly goes to zero away from the surface, helping the model learn an SDF that represents the 3D object well.
- We are more likely to learn an accurate surface with a low `beta` value than a high one, for two reasons. 1. A low `beta` captures a surface modelled through an SDF better, since the density is very high near the surface and reaches zero quickly away from it. 2. A low `beta` leads to better learning of the SDF values: at sdf = 0 you can actually render a sharp surface and use image-based losses, whereas with a high `beta` the surface behaves more like a diffuse medium, which makes the rendered image blurry and makes it difficult to learn a good SDF.
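A sketch of the VolSDF conversion described above, following Eq. (2)-(3) of the paper: density is `alpha` times the CDF of a zero-mean Laplace distribution with scale `beta`, evaluated at the negative signed distance. The function name and default values here are illustrative, not the assignment's exact code.

```python
import torch

def sdf_to_density(sdf, alpha=10.0, beta=0.05):
    # VolSDF: sigma(x) = alpha * Psi_beta(-d(x)), where Psi_beta is the CDF
    # of a zero-mean Laplace distribution with scale beta. Inside the surface
    # (sdf < 0) the density approaches alpha; outside, it decays
    # exponentially at a rate controlled by beta.
    return alpha * torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),
        0.5 * torch.exp(-sdf / beta),
    )
```

At the surface (sdf = 0) the density is exactly `alpha / 2`; deep inside it saturates at `alpha`, and far outside it vanishes, matching the intuition in the answers above.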
After implementing these, train an SDF on the lego bulldozer model with
```
python -m a4.main --config-name=volsdf
```
This will save `part_3_geometry.gif` and `part_3.gif`.
Geometry Images
| Alpha = 10 and Beta = 0.05 | Alpha = 20 and Beta = 0.025 |
|---|---|
| ![]() | ![]() |
Color Images
| Alpha = 10 and Beta = 0.05 | Alpha = 20 and Beta = 0.025 |
|---|---|
| ![]() | ![]() |
4. Neural Surface Extras (CHOOSE ONE! More than one is extra credit)
4.1. Render a Large Scene with Sphere Tracing (10 pts)
In Q1, you rendered a (lonely) torus, but the power of sphere tracing lies in the fact that it can render complex scenes efficiently. To observe this, try defining a ‘scene’ with many (> 20) primitives (e.g. Sphere, Torus, or another SDF from this website at different locations). See Lecture 2 for the equation giving the ‘composed’ SDF of multiple primitives. You can then define a new class in `implicit.py` that instantiates a complex scene with many primitives, and modify the code for Q1 to render this scene instead of a simple torus.
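The composed SDF of a union of primitives is simply the pointwise minimum of the individual SDFs. A minimal sketch (the class name `ComposedSDF` and the helper below are hypothetical, not the assignment's actual `implicit.py` class):

```python
import torch

class ComposedSDF:
    # SDF of the union of several primitives: at each query point, the
    # distance to the scene is the minimum of the distances to each primitive.
    def __init__(self, sdf_fns):
        self.sdf_fns = sdf_fns

    def __call__(self, points):
        dists = torch.stack([f(points) for f in self.sdf_fns], dim=0)
        return dists.min(dim=0).values
```

Note that one call to the composed SDF evaluates all primitives, so sphere tracing the whole scene costs roughly one primitive-SDF evaluation per primitive per step, regardless of how many objects are visible.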
4.2 Fewer Training Views (10 pts)
You can run the code as follows. It will generate the files `images/part_4_2.gif`, `images/part_4_2_geometry.gif`, and `images/assignment_4_nerf.gif`:
```
python -m a4.main --config-name=volsdf_4_2
python main.py --config-name=nerf_lego_4_4_2
```
VolSDF trained on limited views
| Color | Geometry |
|---|---|
| ![]() | ![]() |
Comparison between NeRF and VolSDF trained using a limited number of views (20 views). We can see that VolSDF captures the surface much better than NeRF, which is clearly visible at the rake of the bulldozer: NeRF has some aberrations at the surface, whereas VolSDF captures it much more accurately.
| NeRF | VolSDFs |
|---|---|
| ![]() | ![]() |
4.3 Alternate SDF to Density Conversions (10 pts)
I have implemented the naive method from the NeuS paper. You can run the code as follows:
```
python -m a4.main --config-name=volsdf_4_3
```
It will generate the files `images/part_4_3.gif` and `images/part_4_3_geometry.gif`. I have chosen the value of `s` empirically to be 1000, giving a standard deviation of 0.001, which is comparable to the mean standard deviation of the Laplace distribution with a beta of 0.025. For the given hyper-parameters, NeuS performs better than VolSDF.
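The naive NeuS conversion treats the density as the logistic density applied to the signed distance, phi_s(x) = s·e^(−sx)/(1 + e^(−sx))², which peaks exactly at sdf = 0 and sharpens as `s` grows. A sketch (the function name is illustrative; the sigmoid-product form is used for numerical stability and is algebraically identical):

```python
import torch

def neus_naive_density(sdf, s=1000.0):
    # Logistic density phi_s(x) = s * sigmoid(s*x) * sigmoid(-s*x):
    # symmetric around sdf = 0, with a peak of s/4 at the surface that
    # narrows as s increases.
    return s * torch.sigmoid(s * sdf) * torch.sigmoid(-s * sdf)
```

Unlike the VolSDF Laplace-CDF conversion, this density is a symmetric bump centered on the surface rather than a step that saturates inside the object.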
Here are the results.
VolSDF results:
| Color | Geometry |
|---|---|
| ![]() | ![]() |
NeuS SDF-to-density conversion results
| Color | Geometry |
|---|---|
| ![]() | ![]() |