Assignment 1: Rendering Basics with PyTorch3D

16-825 Learning for 3D Vision
Prof. Shubham Tulsiani

1. Practicing with Cameras (15 Points)

1.1 360-degree Renders (5 points)

This task generates a 360-degree turntable GIF showing continuous views of the provided cow mesh. The output file is turntable_renders.gif, implemented in turntable_renders.py, and can be run using python -m starter.turntable_renders.

360-degree turntable renders
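The turntable is produced by orbiting the camera around the mesh at a fixed distance and elevation while sweeping the azimuth through 360°. A minimal sketch of the orbit math (the helper name `orbit_camera_position` and the distance/elevation values are illustrative assumptions; the actual code obtains R, T pairs from PyTorch3D's `look_at_view_transform` with the same distance/elevation/azimuth parameters):

```python
import math

def orbit_camera_position(dist, elev_deg, azim_deg):
    """Camera position on a sphere around the origin (hypothetical helper;
    this is the geometry look_at_view_transform computes internally)."""
    elev = math.radians(elev_deg)
    azim = math.radians(azim_deg)
    x = dist * math.cos(elev) * math.sin(azim)
    y = dist * math.sin(elev)
    z = dist * math.cos(elev) * math.cos(azim)
    return (x, y, z)

# 36 views, 10 degrees apart, give one full turntable revolution
views = [orbit_camera_position(3.0, 30.0, a) for a in range(0, 360, 10)]
```

Each position is paired with a rotation that looks back at the origin, and one frame is rendered per view before assembling the GIF.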

1.2 Re-creating the Dolly Zoom (10 points)

This task re-creates the dolly zoom by changing the camera's focal length while moving the camera so that the subject stays the same size in the frame. The output file is dolly.gif, implemented in dolly_zoom.py, and can be run using python -m starter.dolly_zoom --duration 0.2 --num_frames 30.

Dolly zoom effect
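The key relationship is the pinhole model: a subject of width w fills the same fraction of the frame when the camera distance is d = w / (2 tan(fov/2)). A small sketch of that schedule (`dolly_distance`, the subject width, and the 5°-to-120° FOV sweep are illustrative assumptions, not necessarily the starter's exact values):

```python
import math

def dolly_distance(fov_deg, subject_width=2.0):
    """Distance that keeps a subject of the given width spanning the full
    field of view of a pinhole camera with the given FOV."""
    return subject_width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# 30 frames sweeping the FOV from narrow (5 deg) to wide (120 deg);
# the camera moves closer as the FOV widens, keeping the subject size fixed.
fovs = [5.0 + i * (115.0 / 29.0) for i in range(30)]
dists = [dolly_distance(f) for f in fovs]
```

Rendering one frame per (fov, distance) pair produces the characteristic background warp while the cow stays the same size.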

2. Practicing with Meshes (10 Points)

2.1 Constructing a Tetrahedron (5 points)

This task creates a tetrahedron mesh with 4 vertices and 4 triangular faces; every vertex is shared by three faces, making it the simplest possible closed triangle mesh in 3D. The output file is tetrahedron.gif, implemented in practicing_with_meshes.py, and can be run using python -m starter.practicing_with_meshes --shape tetrahedron --output images/tetrahedron.gif.

Tetrahedron mesh
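One convenient construction takes alternating corners of a cube, which yields a regular tetrahedron. A sketch of the vertex and face lists (the exact coordinates in practicing_with_meshes.py may differ; in the real code these lists become tensors wrapped in a PyTorch3D `Meshes` object):

```python
# Vertices of a regular tetrahedron: alternating corners of the cube [-1, 1]^3.
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

# Four triangular faces, wound counter-clockwise as seen from outside,
# so the face normals point outward.
faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]
```

Every vertex appears in exactly three faces, and all six edges have the same length, which is what makes the solid regular.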

2.2 Constructing a Cube (5 points)

This task creates a cube mesh which has 8 vertices and 12 triangle faces, where each of the 6 square faces is split into 2 triangles (6×2=12 triangle faces). The output file is cube.gif, implemented in practicing_with_meshes.py, and can be run using python -m starter.practicing_with_meshes --shape cube --output images/cube.gif.

Cube mesh
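A sketch of the cube's vertex and face lists (coordinates and winding are illustrative; the file practicing_with_meshes.py may index the corners differently):

```python
# Eight corners of the cube [-1, 1]^3.
verts = [(-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
         (-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]

# Each of the 6 square faces split into 2 triangles (12 total), wound so
# the face normals point outward.
faces = [(0, 2, 1), (0, 3, 2),   # bottom (z = -1)
         (4, 5, 6), (4, 6, 7),   # top (z = +1)
         (0, 1, 5), (0, 5, 4),   # front (y = -1)
         (2, 3, 7), (2, 7, 6),   # back (y = +1)
         (0, 4, 7), (0, 7, 3),   # left (x = -1)
         (1, 2, 6), (1, 6, 5)]   # right (x = +1)
```

A quick sanity check is Euler's formula for a closed surface of genus 0: V − E + F = 8 − 18 + 12 = 2 (the 18 edges are the cube's 12 edges plus one diagonal per square face).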

3. Re-texturing a mesh (10 points)

This task demonstrates re-texturing a mesh with a gradient from green ([0.0, 0.8, 0.2]) at the front to yellow ([1.0, 0.85, 0.0]) at the back, chosen as a natural, warm color transition that enhances the perception of depth. The output file is retextured_mesh.gif, implemented in retexture_mesh.py, and can be run using python -m starter.retexture_mesh --cow_path data/cow.obj --image_size 256.

Re-textured mesh
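The gradient is a per-vertex linear interpolation along the depth axis. A minimal sketch (`gradient_colors` is a hypothetical helper; it assumes z increases toward the back of the cow, and the resulting colors would be passed to a PyTorch3D `TexturesVertex`):

```python
def gradient_colors(z_values, front=(0.0, 0.8, 0.2), back=(1.0, 0.85, 0.0)):
    """Per-vertex colors interpolated from `front` at min z to `back` at max z."""
    z_min, z_max = min(z_values), max(z_values)
    colors = []
    for z in z_values:
        a = (z - z_min) / (z_max - z_min)   # 0 at the front, 1 at the back
        colors.append(tuple((1 - a) * f + a * b for f, b in zip(front, back)))
    return colors
```

The vertex at the minimum z gets exactly the front color, the vertex at the maximum z the back color, and everything in between a smooth blend.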

4. Camera Transformations (10 points)

This task demonstrates various camera transformations. The output files are transform1.jpg, transform2.jpg, transform3.jpg, and transform4.jpg, implemented in camera_transforms.py, and can be run using python -m starter.camera_transforms.


Transformation Details:

1. Rotating 90° in XY-plane

$$R_{relative} = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$T_{relative} = [0, 0, 0]$$

Description: Rotating the camera 90° about the Z-axis (in the XY-plane) rolls the view, so the cow appears rotated 90° within the image.

Camera transform 1

2. Moving away by 2 units

$$R_{relative} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$T_{relative} = [0, 0, 2]$$

Description: No rotation; the camera is moving further away from the object along the Z-axis, making the cow appear smaller.

Camera transform 2

3. Translating along -Y and +X

$$R_{relative} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$T_{relative} = [0.5, -0.5, 0]$$

Description: No rotation; the camera is shifting right (+X) and down (-Y), changing the viewpoint to look at the cow from an offset position.

Camera transform 3

4. Rotating 90° in XZ-plane, translating from X to Z

$$R_{relative} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix}$$
$$T_{relative} = [-3, 0, 3]$$

Description: Rotating the camera 90° about the Y-axis (in the XZ-plane), so the cow is viewed from the side, looking along the X-axis. The translation moves the camera off the Z-axis so that it still sits 3 units from the object after the rotation.

Camera transform 4
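All four renders apply the same composition rule: the relative rotation and translation act on camera-frame coordinates, giving R = R_rel R₀ and T = R_rel T₀ + T_rel. A sketch in the column-vector convention p' = Rp + T (note this is a simplification: PyTorch3D itself stores transforms in a row-vector convention, so the matrices in the starter code are transposed relative to this):

```python
def matmul3(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec3(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def compose(R_rel, T_rel, R0, T0):
    """New extrinsics after a relative transform in the camera frame:
    p_cam = R_rel @ (R0 @ p + T0) + T_rel."""
    R = matmul3(R_rel, R0)
    T = [a + b for a, b in zip(matvec3(R_rel, T0), T_rel)]
    return R, T
```

With R_rel = I and T_rel = [0, 0, 2], for example, the new translation is simply T₀ + [0, 0, 2], which is transform 2 above.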

5. Rendering Generic 3D Representations (45 Points)

5.1 Rendering Point Clouds from RGB-D Images (10 points)

This task renders point clouds from RGB-D images. The output files are plant_first_image.gif, plant_second_image.gif, and plant_union.gif, implemented in render_point_clouds_rgbd.py, and can be run using python -m starter.render_point_clouds_rgbd.

Plant first image point cloud
Figure 5.1.1: Point cloud corresponding to the first image
Plant second image point cloud
Figure 5.1.2: Point cloud corresponding to the second image
Plant union point cloud
Figure 5.1.3: Point cloud formed by the union of the first 2 point clouds
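Each point cloud is built by back-projecting every pixel with its depth through the camera intrinsics. A minimal single-pixel sketch of the pinhole back-projection (`unproject` is a hypothetical stand-in for the batched helper the starter code provides; fx, fy are focal lengths in pixels and cx, cy the principal point):

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth into a camera-space 3D point
    using the standard pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

After transforming both clouds into a common world frame with their camera extrinsics, the union in Figure 5.1.3 is simply the concatenation of the two point sets.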

5.2 Parametric Functions (10 + 5 points)

This task demonstrates parametric functions. The default number of samples is 1000 but can be changed when running the code. The output files are torus.gif and new_object.gif, implemented in render_parametric.py, and can be run using python -m starter.render_parametric.

Torus parametric
Figure 5.2.1: Torus generated using parametric functions
New parametric object
Figure 5.2.2: A Trefoil Knot
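The torus is generated by sampling its two-parameter surface map. A sketch of the sampling step (the radii R = 1.0 and r = 0.4 are illustrative assumptions; the trefoil knot in Figure 5.2.2 is produced the same way from its own parametric curve):

```python
import math
import random

def torus_point(u, v, R=1.0, r=0.4):
    """Point on a torus: u sweeps the major circle, v the tube cross-section."""
    x = (R + r * math.cos(v)) * math.cos(u)
    y = (R + r * math.cos(v)) * math.sin(u)
    z = r * math.sin(v)
    return (x, y, z)

# 1000 uniformly sampled (u, v) pairs, matching the default sample count
rng = random.Random(0)
points = [torus_point(rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi))
          for _ in range(1000)]
```

The sampled points are rendered directly as a point cloud; every sample satisfies the torus's implicit equation (sqrt(x² + y²) − R)² + z² = r², which links this task to Section 5.3.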

5.3 Implicit Surfaces (15 + 5 points)

This task demonstrates implicit surfaces. The output files are torus_implicit.gif and heart_implicit.gif, implemented in render_implicit.py, and can be run using python -m starter.render_implicit.

Torus implicit surface
Figure 5.3.1: Torus generated using implicit surfaces
Heart implicit surface
Figure 5.3.2: Heart shape generated using implicit surfaces
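Both shapes are defined as the zero level-set of a scalar function evaluated on a voxel grid, from which a marching-cubes implementation extracts the mesh. A sketch of the two implicit functions (radii are illustrative; the heart is shown here without the 90° rotation about the x-axis that Section 6 applies):

```python
import math

def f_torus(x, y, z, R=1.0, r=0.4):
    """Implicit torus: zero on the surface, negative inside the tube."""
    return (math.sqrt(x * x + y * y) - R) ** 2 + z * z - r * r

def f_heart(x, y, z):
    """Implicit heart surface: zero on the surface.
    (x^2 + 9/4 y^2 + z^2 - 1)^3 - x^2 z^3 - 9/80 y^2 z^3."""
    return ((x * x + 2.25 * y * y + z * z - 1.0) ** 3
            - x * x * z ** 3 - 0.1125 * y * y * z ** 3)
```

In the real pipeline the function is sampled on a dense grid and the zero crossing handed to marching cubes; here a surface point can be checked directly, e.g. (R + r, 0, 0) for the torus and (0, 0, 1) for the heart.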

Point Clouds vs Meshes Comparison:

  • Quality: Meshes offer smooth, continuous surfaces with realistic shading and textures, whereas point clouds display only discrete samples that can look sparse or full of holes and support limited lighting effects.
  • Speed: Point clouds render faster because each point is processed independently; meshes require extra computation for faces, normals, and lighting.
  • Memory usage: Point clouds are lightweight, storing mainly coordinates (and optional colors); meshes additionally store face connectivity (vertices + face indices).
  • Best for: Point clouds suit fast previews, raw sensor data, and lightweight visualization; meshes suit realistic rendering and applications that need exact surface geometry.
  • Ease of use: Point clouds can be built directly from sensors or samples without handling connectivity; meshes are harder to construct and edit (surface extraction, connectivity, repairs).

6. Do something fun (10 points)

This creative task demonstrates morphing between a sphere and heart shape. The output file is morph_sphere_heart.gif, implemented in morph_implicit_shapes.py, and can be run using python -m starter.morph_implicit_shapes.

Sphere to heart morphing
Figure 6.1: Morphing animation from sphere to heart using implicit functions

Mesh Morphing: Sphere to Heart via Implicit Functions

Figure 6.1 showcases mesh morphing between a sphere and a heart using implicit surface representations and marching cubes. The process leverages the following implicit functions:

Sphere:

$$F_{sphere}(x, y, z) = x^2 + y^2 + z^2 - 1$$

Heart (rotated about x-axis):

$$y' = y \cos \theta - z \sin \theta$$ $$z' = y \sin \theta + z \cos \theta$$ $$x' = x$$
$$F_{Heart}(x, y, z) = (x^2 + \frac{9}{4}y'^2 + z'^2 - 1)^3 - x^2z'^3 - \frac{9}{80}y'^2z'^3$$

where θ = 90°.

Interpolation:

The morphing is achieved by linearly interpolating the two implicit functions:

$$F_{morph}(x, y, z; \alpha) = (1 - \alpha)F_{sphere}(x, y, z) + \alpha F_{Heart}(x, y, z)$$

where α ∈ [0, 1] controls the blend between sphere (α = 0) and heart (α = 1).


For each interpolation step:

  1. The voxel grid is evaluated with $F_{morph}$.
  2. The marching cubes algorithm extracts the zero level-set mesh.
  3. Vertex colors are interpolated between blue (sphere) and red (heart):
    $$C_{morph} = (1 - \alpha)[0.3, 0.5, 1.0] + \alpha[1.0, 0.1, 0.2]$$
  4. The mesh is rendered from a fixed viewpoint and added as a frame to the GIF.
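The blending in steps 1 and 3 can be sketched as follows (a minimal sketch of the interpolation only; the grid evaluation, marching cubes, and rendering from the steps above are omitted, and the heart is shown without its 90° rotation):

```python
def f_sphere(x, y, z):
    """Implicit unit sphere: zero on the surface."""
    return x * x + y * y + z * z - 1.0

def f_heart(x, y, z):
    """Implicit heart surface: zero on the surface."""
    return ((x * x + 2.25 * y * y + z * z - 1.0) ** 3
            - x * x * z ** 3 - 0.1125 * y * y * z ** 3)

def f_morph(x, y, z, alpha, f_a=f_sphere, f_b=f_heart):
    """Linear blend of two implicit functions; alpha=0 gives f_a, alpha=1 f_b."""
    return (1.0 - alpha) * f_a(x, y, z) + alpha * f_b(x, y, z)

def morph_color(alpha, c_a=(0.3, 0.5, 1.0), c_b=(1.0, 0.1, 0.2)):
    """Vertex color interpolated from blue (sphere) to red (heart)."""
    return tuple((1.0 - alpha) * a + alpha * b for a, b in zip(c_a, c_b))
```

Stepping alpha from 0 to 1 over the GIF's frames and re-extracting the zero level-set each time produces the smooth morph in Figure 6.1.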

7. (Extra Credit) Sampling Points on Meshes (10 points)

This extra credit task demonstrates sampling points on meshes, comparing mesh representations with point cloud representations at different sample sizes. The output files are cow_pc_10.gif, cow_pc_100.gif, cow_pc_1000.gif, and cow_pc_10000.gif, implemented in xtra_credit.py, and can be run using python -m starter.xtra_credit.

Cow mesh vs point cloud 10 samples
Figure 7.1: 10 sample points
Cow mesh vs point cloud 100 samples
Figure 7.2: 100 sample points
Cow mesh vs point cloud 1000 samples
Figure 7.3: 1,000 sample points
Cow mesh vs point cloud 10000 samples
Figure 7.4: 10,000 sample points
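Uniform sampling over a mesh surface has two stages: pick a face with probability proportional to its area, then pick a uniformly distributed barycentric point on that face. A pure-Python sketch (`sample_on_mesh` is a hypothetical stand-in for the batched equivalent in PyTorch3D; the sqrt trick in the barycentric step avoids clustering samples near one vertex):

```python
import math
import random

def tri_area(a, b, c):
    """Triangle area: half the magnitude of the edge-vector cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def sample_on_mesh(verts, faces, n, seed=0):
    """Sample n points uniformly over a triangle mesh's surface."""
    rng = random.Random(seed)
    areas = [tri_area(verts[i], verts[j], verts[k]) for i, j, k in faces]
    points = []
    for _ in range(n):
        # area-weighted face choice
        i, j, k = faces[rng.choices(range(len(faces)), weights=areas)[0]]
        # uniform barycentric coordinates via the sqrt trick
        su = math.sqrt(rng.random())
        v = rng.random()
        w = (1.0 - su, su * (1.0 - v), su * v)   # weights sum to 1
        tri = (verts[i], verts[j], verts[k])
        points.append(tuple(sum(w[t] * tri[t][d] for t in range(3))
                            for d in range(3)))
    return points
```

With only 10 samples the cow's shape is unrecognizable, while 10,000 samples approach the visual density of the original mesh, which is exactly the progression Figures 7.1 through 7.4 show.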