HW1 Q1 result
1.2: Dolly Zoom
Q1.2 Dolly Zoom Result
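For reference, the geometry behind the dolly zoom: to keep a subject of fixed width w filling the frame while the field of view changes, the camera distance must satisfy w = 2 d tan(fov/2). A minimal numpy sketch of that relationship (the values are illustrative, not the ones used for the render):

```python
import numpy as np

def dolly_zoom_distance(width, fov_deg):
    # Distance that keeps an object of the given width filling the frame
    # for a camera with the given FOV: width = 2 * d * tan(fov / 2).
    return width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))

# As the FOV narrows, the camera must pull back to keep the subject the same size.
fovs = np.linspace(5, 120, 10)
distances = dolly_zoom_distance(3.0, fovs)
```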

Question 2: Practicing with Meshes
Q2.1 Construct a Tetrahedron
Q2.1 Tetrahedron
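One way to build a tetrahedron mesh (a sketch, not necessarily the vertex layout used for the render above): take four alternating corners of a cube, which yields a regular tetrahedron with all edges equal.

```python
import numpy as np

# Four alternating corners of the cube [-1, 1]^3 form a regular tetrahedron.
verts = np.array([
    [1.0, 1.0, 1.0],
    [1.0, -1.0, -1.0],
    [-1.0, 1.0, -1.0],
    [-1.0, -1.0, 1.0],
])
# Four triangular faces, one opposite each vertex.
faces = np.array([
    [0, 1, 2],
    [0, 3, 1],
    [0, 2, 3],
    [1, 3, 2],
])
```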
Q2.2 Construct a Cube
Q2.2 Cube
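A cube mesh needs 8 vertices and 12 triangles (two per square face). A sketch of one possible layout (winding order is not checked here, and this is not necessarily the layout used for the render):

```python
import itertools
import numpy as np

# Unit cube corners ordered by (x, y, z) bits: index = 4x + 2y + z.
verts = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
# Two triangles per square face, six faces.
faces = np.array([
    [0, 1, 3], [0, 3, 2],  # x = 0 face
    [4, 6, 7], [4, 7, 5],  # x = 1 face
    [0, 4, 5], [0, 5, 1],  # y = 0 face
    [2, 3, 7], [2, 7, 6],  # y = 1 face
    [0, 2, 6], [0, 6, 4],  # z = 0 face
    [1, 5, 7], [1, 7, 3],  # z = 1 face
])

# Sanity check: total surface area of a unit cube should come out to 6.
tri = verts[faces]  # (12, 3, 3)
area = 0.5 * np.linalg.norm(
    np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1
).sum()
```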
Q3. Retexturing a Mesh
Q3 Results
Q4. Camera Transformations
R_relative and T_relative transform the camera view relative to the default camera pose. For example, in transform 1, R_relative rotates the view 90 degrees clockwise about the Z axis (pointing into the camera, out of the page); in transform 3, T_relative moves the camera 2 units along the Z axis, so the total distance from the cow becomes 5 units instead of 3.
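Conventions for composing relative transforms vary; a minimal numpy sketch assuming the convention R = R_relative @ R_0 and T = R_relative @ T_0 + T_relative (an assumption, not necessarily the starter code verbatim) reproduces the two examples above:

```python
import numpy as np

def compose(R_rel, T_rel, R0, T0):
    # Apply a relative rotation/translation on top of a base camera pose,
    # assuming R = R_rel @ R0 and T = R_rel @ T0 + T_rel.
    return R_rel @ R0, R_rel @ T0 + T_rel

# Base pose: identity rotation, camera 3 units from the cow along +Z.
R0 = np.eye(3)
T0 = np.array([0.0, 0.0, 3.0])

# Transform 3: push the camera back 2 more units (total distance 5).
R3, T3 = compose(np.eye(3), np.array([0.0, 0.0, 2.0]), R0, T0)

# Transform 1: rotate the view 90 degrees about the Z axis.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
R1, T1 = compose(Rz, np.zeros(3), R0, T0)
```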
Q4 Rendered views: Identity | Transform 1 | Transform 2 | Transform 3 | Transform 4





Note that the transforms are named after the views in images/transform*.jpg, not after the order presented in the writeup (which shuffled the transform order)!
Q5.1: Rendering PointCloud from RGBD Images
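A point cloud can be lifted from an RGB-D image by unprojecting each pixel with its depth value. A minimal pinhole-camera sketch (the intrinsics fx, fy, cx, cy here are hypothetical placeholders, and the assignment's actual unprojection utility may differ):

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    # Lift every pixel (u, v) with depth d into a 3D point:
    #   x = (u - cx) * d / fx,  y = (v - cy) * d / fy,  z = d
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat depth map unprojects to a plane of points at z = 1.
points = unproject(np.ones((4, 4)), fx=1.0, fy=1.0, cx=2.0, cy=2.0)
```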
Q5.2 Parametric Functions
Q5.2 Results: Torus | Hourglass
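The torus can be sampled from its standard parametric form. A sketch (the radii R and r are illustrative, not necessarily those used for the render):

```python
import numpy as np

# Parametric torus:
#   x = (R + r cos(theta)) cos(phi)
#   y = (R + r cos(theta)) sin(phi)
#   z = r sin(theta)
R, r = 1.0, 0.3
theta, phi = np.meshgrid(
    np.linspace(0, 2 * np.pi, 100),
    np.linspace(0, 2 * np.pi, 100),
)
x = (R + r * np.cos(theta)) * np.cos(phi)
y = (R + r * np.cos(theta)) * np.sin(phi)
z = r * np.sin(theta)
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
```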
Q5.3 Implicit Surfaces
Q5.3 Implicit Surfaces: Torus | Custom Object
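For reference, a sketch of evaluating the torus's implicit function on a voxel grid; the rendered surface is its zero level set. The grid bounds and radii here are illustrative, and the actual surface extraction step (e.g. marching cubes) is omitted:

```python
import numpy as np

# Torus implicit function: F(x, y, z) = (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2.
# F < 0 inside the tube, F > 0 outside, F = 0 on the surface.
R, r = 1.0, 0.3
coords = np.linspace(-1.5, 1.5, 64)
X, Y, Z = np.meshgrid(coords, coords, coords)
F = (np.sqrt(X ** 2 + Y ** 2) - R) ** 2 + Z ** 2 - r ** 2

inside = F < 0  # voxels inside the torus tube
```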
Discussion of Tradeoffs: Meshes vs. Point Clouds
In terms of memory usage, a point cloud consumes significantly more memory than a mesh to store a high-fidelity recording of an object. A mesh can be sampled to recover point-cloud information if required, but the reverse process is not as simple, so storing a mesh rather than a dense point cloud is almost always advantageous.
In terms of rendering speed, GPUs are extremely efficient at rendering meshes, whereas a high-resolution point cloud can contain orders of magnitude more points that need to be rendered. Rendering the point clouds in this assignment on the CPU frequently required downsampling the cloud to finish in reasonable time.
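The downsampling mentioned above can be as simple as keeping a random subset of the points. A sketch with stand-in data (not the assignment's actual point clouds):

```python
import numpy as np

# Stand-in point cloud: 100k random points with per-point colors.
rng = np.random.default_rng(0)
points = rng.random((100_000, 3))
colors = rng.random((100_000, 3))

# Keep a random 10% of the points (and their matching colors) for faster CPU renders.
keep = rng.choice(len(points), size=10_000, replace=False)
points_ds, colors_ds = points[keep], colors[keep]
```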
In terms of ease of use, especially when deforming or slicing objects, meshes have a defined topology that makes modification and simulation easier, whereas the points of a point cloud must be modified individually, resulting in a higher computational load.
In terms of structure, point clouds have an edge because they can capture arbitrary points in space, which allows quick scene modification by taking unions or intersections of point sets to add or remove specific parts of the scene. It is more difficult to arbitrarily manipulate a single mesh to add, subtract, or modify features in ways that disrupt its connectivity.
So, while point clouds are less efficient to render and store, they allow for greater flexibility. It is always possible to sample a point cloud from a mesh, though the reverse process is more involved.
Q6: Do Something Fun - Retexturing the Cow Again
For this part, I wanted to color the cow mesh creatively, based on each vertex's radial distance from a chosen point. I found the tail-most vertex (the one with the largest Z-value), computed the distance from every other vertex to it, then colored each vertex by that distance (red = closer, green = farther).
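A sketch of that coloring scheme, run on random stand-in vertices rather than the cow mesh itself:

```python
import numpy as np

# Stand-in for the cow's vertex positions.
rng = np.random.default_rng(0)
verts = rng.random((500, 3))

tail = verts[np.argmax(verts[:, 2])]          # tail-most vertex: largest Z-value
d = np.linalg.norm(verts - tail, axis=1)      # distance of every vertex to it
alpha = (d - d.min()) / (d.max() - d.min())   # normalize distances to [0, 1]

# Linear blend from red (closest) to green (farthest).
red, green = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
colors = (1 - alpha[:, None]) * red + alpha[:, None] * green
```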
Result

Q7: Sampling Points on Meshes
Not attempted.