Problem 3¶
The output looks similar to the example, except for some odd artifacts at the bottom of some frames.
Problem 4¶
Problem 4.2¶
I attempted to implement a coarse and fine network but ran out of time to fix the implementation. When working properly, this approach should produce better results at the cost of training time: the coarse network's weights are used to resample points along each ray, and those new samples are fed into a fine network, so two models must be trained instead of one.
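The resampling step described above can be sketched with inverse-CDF sampling over the coarse weights. This is a minimal NumPy sketch (the helper name `sample_pdf` and the bin/weight shapes are assumptions, not the actual assignment code):

```python
import numpy as np

def sample_pdf(bins, weights, n_fine, rng=None):
    """Draw fine sample depths along one ray from its coarse weights.

    bins: (n_coarse + 1,) bin edges along the ray.
    weights: (n_coarse,) rendering weights from the coarse network.
    Returns n_fine depths concentrated where the coarse weights are high.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    pdf = weights / (weights.sum() + 1e-8)
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_fine)
    # Find which bin each uniform sample falls into.
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(weights) - 1)
    # Linearly interpolate within the selected bin.
    denom = cdf[idx + 1] - cdf[idx]
    denom = np.where(denom < 1e-8, 1.0, denom)
    t = (u - cdf[idx]) / denom
    return bins[idx] + t * (bins[idx + 1] - bins[idx])
```

With all the weight in one bin, every fine sample lands inside that bin, which is why the fine network sees a denser sampling of the surface region.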
Problem 5¶
We first create distance and mask arrays of size (N, 1). In a for loop, we incrementally extend the rays by a fixed step. When a ray's absolute signed distance falls to 0.01 or below, we update the mask to indicate that the ray has hit the surface; the mask also tells us which rays to stop updating.
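The loop above can be sketched as follows. This is a hedged NumPy version of the fixed-step variant described (function name, step size, and iteration count are illustrative, not the actual submission; classic sphere tracing would instead step by the current SDF value):

```python
import numpy as np

def fixed_step_trace(sdf, origins, directions, step=0.05, n_iters=200, eps=0.01):
    """March N rays forward by a fixed step until |sdf| <= eps.

    sdf: callable mapping (N, 3) points -> (N,) signed distances.
    origins, directions: (N, 3). Returns (points, hit_mask).
    """
    t = np.zeros(origins.shape[0])               # distance traveled per ray
    mask = np.zeros(origins.shape[0], dtype=bool)
    for _ in range(n_iters):
        points = origins + t[:, None] * directions
        mask |= np.abs(sdf(points)) <= eps       # flag rays on the surface
        t = np.where(mask, t, t + step)          # freeze rays that already hit
    return origins + t[:, None] * directions, mask
```

The mask does double duty, exactly as described: it records which rays converged and stops their distances from being advanced further.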
Problem 6¶
The first layer has an input size equal to the embedding size and an output size of 128. The default hidden layers are (128, 128), and each output is fed through a ReLU activation. For the third layer, we concatenate the most recent output with the embedded input from the beginning before feeding it in. The last layer has size (128, 1) and no activation, so the raw output acts as a signed distance.
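A minimal PyTorch sketch of that architecture, assuming the layer sizes stated above (the class name and the example embedding size are illustrative, not the actual assignment code):

```python
import torch
import torch.nn as nn

class SDFMLP(nn.Module):
    """embed_dim -> 128 -> 128, re-concatenate the embedded input before the
    third layer, then 128 -> 1 with no final activation so the output can be
    read directly as a signed distance."""
    def __init__(self, embed_dim, hidden=128):
        super().__init__()
        self.fc1 = nn.Linear(embed_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden + embed_dim, hidden)  # skip connection
        self.out = nn.Linear(hidden, 1)
        self.relu = nn.ReLU()

    def forward(self, x_embed):
        h = self.relu(self.fc1(x_embed))
        h = self.relu(self.fc2(h))
        h = self.relu(self.fc3(torch.cat([h, x_embed], dim=-1)))
        return self.out(h)  # no activation: unbounded signed distance
```

Leaving the last layer linear matters: a ReLU or sigmoid there would clamp the output and prevent it from taking the negative values an SDF needs inside the surface.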
Problem 7¶
The alpha variable acts as a scaling factor that controls the magnitude of the function's output. The beta variable controls the "bandwidth" of the function: how small a signed distance must be before it is converted to a non-zero density.
- A high beta will bias the conversion to map more signed distances to non-zero densities, whereas a low beta will only map small signed distances to non-zero densities.
- An SDF would be easier to train with a high beta, because more points would render with non-zero density, giving the model more features to learn from when computing the loss.
- An SDF would be more accurate with a low beta, because only distances close to the object's surface would produce non-zero densities, giving the model more precise supervision near the surface.
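One common form of this conversion (the VolSDF-style formulation, which may differ in detail from the assignment's exact function) uses the CDF of a Laplace distribution, with alpha and beta playing exactly the roles described above:

```python
import numpy as np

def sdf_to_density(sdf, alpha, beta):
    """sigma = alpha * Psi_beta(-sdf), where Psi_beta is the Laplace CDF.

    alpha scales the overall density magnitude; beta sets how quickly the
    density decays as points move away from the surface (the "bandwidth").
    """
    s = -np.asarray(sdf)
    psi = np.where(s <= 0,
                   0.5 * np.exp(s / beta),            # outside the surface
                   1.0 - 0.5 * np.exp(-s / beta))     # inside the surface
    return alpha * psi
```

Plugging in a point 0.5 units outside the surface shows the trade-off from the bullets: a large beta still assigns it appreciable density (easier training signal), while a small beta drives it toward zero (density concentrated tightly at the surface).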
For the output below, I used the original parameters given to me.