Assignment 1¶

David Russell (davidrus)¶

Please find the main code at main.py and a very brief readme at README.md. Zero late days were used.

1.1 360 Degree Renders¶

*(GIF: 360-degree render.)*

1.2 Dolly Zoom¶

*(GIF: dolly zoom.)*

2.1 Constructing a Tetrahedron¶

A tetrahedron has four vertices and four triangular faces.
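As a sketch of the construction (the exact coordinates in main.py may differ), the four vertices and four faces can be written out directly; here I use the corners of a cube that form a regular tetrahedron:

```python
import numpy as np

# Four vertices of a regular tetrahedron (a subset of cube corners).
vertices = np.array([
    [1.0, 1.0, 1.0],
    [1.0, -1.0, -1.0],
    [-1.0, 1.0, -1.0],
    [-1.0, -1.0, 1.0],
])

# Four triangular faces, each a triple of vertex indices.
faces = np.array([
    [0, 1, 2],
    [0, 3, 1],
    [0, 2, 3],
    [1, 3, 2],
])
```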

2.2 Constructing a Cube¶

A cube has eight vertices and twelve triangular faces (each of the six square faces is split into two triangles).
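A minimal sketch of the vertex and face arrays (the ordering and winding here are one choice among several, not necessarily what main.py uses):

```python
import numpy as np

# Eight corners of a cube centered at the origin; index = 4*x + 2*y + z
# where each bit selects -1 or +1 along that axis.
vertices = np.array(
    [[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
    dtype=float,
)

# Each square face split into two triangles -> 12 triangular faces.
faces = np.array([
    [0, 1, 3], [0, 3, 2],  # x = -1 face
    [4, 6, 7], [4, 7, 5],  # x = +1 face
    [0, 4, 5], [0, 5, 1],  # y = -1 face
    [2, 3, 7], [2, 7, 6],  # y = +1 face
    [0, 2, 6], [0, 6, 4],  # z = -1 face
    [1, 5, 7], [1, 7, 3],  # z = +1 face
])
```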

3 Retexturing a Mesh¶

I used pure red $(1, 0, 0)$ and pure blue $(0, 0, 1)$.
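A hedged sketch of one common way to apply the two colors: blend per-vertex from red to blue along the z axis. The choice of axis and the linear blend are assumptions, not necessarily the exact scheme in main.py.

```python
import numpy as np

def retexture(vertices, color1=(1.0, 0.0, 0.0), color2=(0.0, 0.0, 1.0)):
    """Per-vertex colors blending from color1 at min z to color2 at max z."""
    z = vertices[:, 2]
    alpha = (z - z.min()) / (z.max() - z.min())  # normalized to [0, 1]
    c1, c2 = np.array(color1), np.array(color2)
    return (1 - alpha)[:, None] * c1 + alpha[:, None] * c2
```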

4 Camera Transformations¶

In the first case we simply rotate the camera about its Z axis. Because of the transposed convention, this rotation appears opposite to what we might expect.

$$R = \begin{bmatrix}0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1\end{bmatrix}\text{, } t = \begin{bmatrix}0 \\ 0 \\ 0 \end{bmatrix}$$

In the second case we simply move the camera away from the subject along its Z axis, which creates a zoomed-out effect.

$$R = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}\text{, } t = \begin{bmatrix}0 \\ 0 \\ 3 \end{bmatrix}$$

In the third case we shift the object left and down, so we are looking at it from above and to the right.

$$R = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}\text{, } t = \begin{bmatrix}0.5 \\ -0.5 \\ 0 \end{bmatrix}$$

Finally, we rotate the camera to the right about the up axis. Since the camera is not at the origin, we need to compensate for this rotation by translating right and back.

$$R = \begin{bmatrix}0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0\end{bmatrix}\text{, } t = \begin{bmatrix}-3 \\ 0 \\ 3 \end{bmatrix}$$
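To make the convention concrete, here is a small sketch of how the first transform acts on a point. I assume the column-vector form $X_{\text{new}} = R X + t$ here for readability; the "transposed convention" mentioned above means the actual library applies the row-vector equivalent, which is why the rotation looks reversed. The point itself is a hypothetical example.

```python
import numpy as np

# First case: rotation about the camera's Z axis, no translation.
R = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
t = np.zeros(3)

X_cam = np.array([1.0, 0.0, 2.0])  # hypothetical point in the camera frame
X_new = R @ X_cam + t              # the point rotates about the Z axis
```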

5.1 Rendering Point Clouds from RGB-D Images¶

*(Images: point cloud renders from the RGB-D data.)*

5.2 Parametric Functions¶

*(GIF: parametric function render.)*

5.3 Implicit Surfaces¶

*(GIF: implicit surface render.)*

It is computationally more expensive to generate a mesh in this manner because, rather than simply generating points on the surface of the object, we need to evaluate the function on a dense grid over the whole space the object occupies. However, once a mesh has been generated, it has a number of advantages. The first is that there is an explicit notion of ray-object intersection. This can be used to texture a mesh with observations from a camera while respecting occlusions, or to check whether a point lies inside or outside the mesh. Furthermore, in regions where curvature is low, a mesh can be represented by only a few vertices, compared to the number of points it would take to densely sample the same region. This holds especially for rendering, where a point cloud must be sampled very densely to obtain a solid-looking representation. One final downside of meshes is that they cannot always be created easily. In data-driven applications, where point clouds were observed, it can be very challenging to fit a watertight mesh to the (potentially noisy) observations. This renders the benefits of meshes moot if they cannot be generated for a given application.
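The cost difference can be sketched numerically. Using a unit sphere as a stand-in surface (an illustrative choice, not the assignment's shape): the implicit route evaluates $F(x, y, z) = x^2 + y^2 + z^2 - 1$ at every cell of an $N^3$ grid before any surface extraction, while the parametric route samples only an $N^2$ grid of surface points.

```python
import numpy as np

N = 64

# Implicit route: evaluate F on a dense N^3 grid (marching cubes would
# then extract the zero level set from this volume).
xs = np.linspace(-1.5, 1.5, N)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
F = X**2 + Y**2 + Z**2 - 1.0  # N^3 = 262,144 function evaluations

# Parametric route: sample the surface directly on an N^2 grid.
phi, theta = np.meshgrid(np.linspace(0.0, np.pi, N),
                         np.linspace(0.0, 2.0 * np.pi, N), indexing="ij")
points = np.stack([np.sin(phi) * np.cos(theta),
                   np.sin(phi) * np.sin(theta),
                   np.cos(phi)], axis=-1)  # N^2 = 4,096 surface samples
```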

6 Something Fun¶

I was interested in volume rendering, so I explored volume rendering a simple scene. I took the distance to the center of a torus, inverted it, and scaled it appropriately so the structure was visible in the volumetric render. Below is a visualization of the result.

*(GIF: volumetric render of the torus density.)*
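A hedged sketch of a density of this kind, assuming "center of a torus" means the central ring of major radius $R$ in the xy-plane; the inversion ($1/(1+d)$) and the scale factor are my assumptions, not the exact values used.

```python
import numpy as np

def torus_density(points, R=1.0, scale=5.0):
    """Density that peaks on the central ring of a torus in the xy-plane."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    # Distance from each point to the ring of radius R in the xy-plane.
    d = np.sqrt((np.sqrt(x**2 + y**2) - R) ** 2 + z**2)
    # Inverted and scaled so the torus structure dominates the render.
    return scale / (1.0 + d)
```

Feeding this density (sampled on a 3D grid) to a volume renderer then makes the torus appear as the brightest region of the volume.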