16-825 Learning for 3D Vision • Fall 2025
Name: Haejoon Lee (andrewid: haejoonl)

Assignment 1: Rendering Basics with PyTorch3D

Table of Contents

1. Practicing with Cameras
2. Practicing with Meshes
3. Re-texturing a mesh
4. Camera Transformations
5. Rendering Generic 3D Representations
6. Do Something Fun
7. Sampling Points on Meshes (Extra Credit)

1. Practicing with Cameras

1.1 360-degree Renders

Created a smooth 360-degree rotation of the cow mesh, demonstrating camera control and animation techniques.

360-degree cow rotation

360-degree rotation of the cow mesh
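
Below is a minimal sketch of the render loop, assuming `mesh`, `renderer`, `lights`, and `device` come from the assignment's starter utilities; only the azimuth sweep and GIF writing are shown.

import imageio
import numpy as np
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

# Sketch of the 360-degree render loop; `mesh`, `renderer`, `lights`, and
# `device` are assumed to come from the starter code.
images = []
for azim in np.linspace(0, 360, 60, endpoint=False):
    R, T = look_at_view_transform(dist=3.0, elev=0.0, azim=azim)
    cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
    rend = renderer(mesh, cameras=cameras, lights=lights)
    images.append((rend[0, ..., :3].cpu().numpy() * 255).astype(np.uint8))

imageio.mimsave("cow_360.gif", images, fps=15)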

1.2 Re-creating the Dolly Zoom

Implemented the famous dolly zoom effect by simultaneously changing the field of view and camera distance to maintain subject size while creating perspective distortion.

Mathematical Implementation:
The dolly zoom keeps the subject at a constant apparent size by holding distance * tan(fov/2) fixed: distance = width / (2 * tan(fov/2)), where width is the fixed visible extent at the subject.
By computing this distance for each FOV value, the subject stays the same size in the frame while the background perspective changes dramatically.
Dolly zoom effect

Dolly zoom effect showing perspective distortion
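
A minimal sketch of the camera schedule, assuming width = 5.0 as the fixed visible extent and the starter's `renderer`, `mesh`, `lights`, and `device`; the exact constants used for the figure may differ.

import torch
from pytorch3d.renderer import FoVPerspectiveCameras

# Dolly zoom schedule: widen the FOV while moving the camera closer so that
# distance * tan(fov / 2) stays constant. `width`, `renderer`, `mesh`,
# `lights`, and `device` are assumptions from the starter setup.
fovs = torch.linspace(5, 120, 30)
width = 5.0
frames = []
for fov in fovs:
    distance = width / (2 * torch.tan(torch.deg2rad(fov) / 2))
    T = torch.tensor([[0.0, 0.0, distance.item()]])
    cameras = FoVPerspectiveCameras(fov=fov.item(), T=T, device=device)
    rend = renderer(mesh, cameras=cameras, lights=lights)
    frames.append((rend[0, ..., :3].cpu().numpy() * 255).astype("uint8"))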

2. Practicing with Meshes

2.1 Constructing a Tetrahedron

Manually defined the 4 vertices and 4 triangular faces of a tetrahedron mesh:
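
A minimal sketch of the geometry, assuming one common choice of coordinates; any four non-coplanar points with outward-facing winding work.

import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import TexturesVertex

# Tetrahedron: 4 vertices, 4 triangular faces. Coordinates here are an
# illustrative choice, not necessarily the ones used for the figure.
verts = torch.tensor([
    [ 1.0,  1.0,  1.0],
    [-1.0, -1.0,  1.0],
    [-1.0,  1.0, -1.0],
    [ 1.0, -1.0, -1.0],
])
faces = torch.tensor([
    [0, 1, 2],
    [0, 3, 1],
    [0, 2, 3],
    [1, 3, 2],
])
textures = TexturesVertex(verts_features=0.7 * torch.ones_like(verts)[None])
tetra_mesh = Meshes(verts=[verts], faces=[faces], textures=textures)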

Tetrahedron 360 rotation

360-degree rotation of the tetrahedron

2.2 Constructing a Cube

Created a cube mesh from 8 vertices and 12 triangular faces (two triangles per square face):
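
A minimal sketch of the cube construction, again with illustrative coordinates and winding order:

import torch
from pytorch3d.structures import Meshes

# Cube: 8 vertices and 12 triangles (two per square face). The winding
# order shown is illustrative.
verts = torch.tensor([
    [-1.0, -1.0, -1.0], [ 1.0, -1.0, -1.0],
    [ 1.0,  1.0, -1.0], [-1.0,  1.0, -1.0],
    [-1.0, -1.0,  1.0], [ 1.0, -1.0,  1.0],
    [ 1.0,  1.0,  1.0], [-1.0,  1.0,  1.0],
])
faces = torch.tensor([
    [0, 2, 1], [0, 3, 2],  # back
    [4, 5, 6], [4, 6, 7],  # front
    [0, 1, 5], [0, 5, 4],  # bottom
    [3, 6, 2], [3, 7, 6],  # top
    [0, 4, 7], [0, 7, 3],  # left
    [1, 2, 6], [1, 6, 5],  # right
])
cube_mesh = Meshes(verts=[verts], faces=[faces])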

Cube 360 rotation

360-degree rotation of the cube

3. Re-texturing a mesh

Implemented a front-to-back gradient texture on the cow by linearly interpolating between two colors based on each vertex's z-coordinate.

# Color interpolation formula:
alpha = (z - z_min) / (z_max - z_min)
color = alpha * color2 + (1 - alpha) * color1
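
A minimal sketch of building the vertex colors and attaching them with TexturesVertex, assuming `verts` holds the cow's vertex positions; the exact pink/green RGB values are assumptions.

import torch
from pytorch3d.renderer import TexturesVertex

# Gradient vertex colors from color1 (front) to color2 (back), based on z.
# `verts` (V, 3) is assumed to be the cow mesh's vertex positions, and the
# RGB values below are approximate pink/green choices.
color1 = torch.tensor([1.0, 0.4, 0.7])   # pink
color2 = torch.tensor([0.0, 0.8, 0.2])   # green

z = verts[:, 2]
alpha = ((z - z.min()) / (z.max() - z.min()))[:, None]
colors = alpha * color2 + (1 - alpha) * color1
textures = TexturesVertex(verts_features=colors[None])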

Color Choices: color1 = pink (applied at the front of the cow), color2 = green (applied at the back).

Gradient cow texture

Gradient texture from pink (front) to green (back)

4. Camera Transformations

Determined the relative camera transformations (R_relative, T_relative) needed to reproduce specific views of the cow mesh rendered with coordinate axes.
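
A minimal sketch of how each pair is applied, assuming the starter code composes the relative transform with the initial camera extrinsics (R_0, T_0); the exact composition convention in the provided starter function may differ slightly.

import torch
from pytorch3d.renderer import FoVPerspectiveCameras

# Assumed composition of the relative transform with the initial camera
# extrinsics (R_0, T_0); the starter code's exact convention may differ.
def compose_relative(R_relative, T_relative, R_0, T_0):
    R = R_relative @ R_0
    T = R_relative @ T_0 + T_relative
    return R, T

# Example: Transform 2 (move the camera back by 2 units).
R_0, T_0 = torch.eye(3), torch.tensor([0.0, 0.0, 3.0])
R_rel, T_rel = torch.eye(3), torch.tensor([0.0, 0.0, 2.0])
R, T = compose_relative(R_rel, T_rel, R_0, T_0)
cameras = FoVPerspectiveCameras(R=R[None], T=T[None])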

Transform 1: Z-axis rotation 90°

Transform 1

R_relative: 90° rotation around Z-axis
T_relative: [0,0,0]

Transform 2: Move back by 2 units

Transform 2

R_relative: Identity
T_relative: [0,0,2]

Transform 3: Small translation

Transform 3

R_relative: Identity
T_relative: [0.5,-0.5,0]

Transform 4: Y-axis rotation -90° + translation

Transform 4

R_relative: -90° around Y-axis
T_relative: [3,0,3]

5. Rendering Generic 3D Representations

5.1 Rendering Point Clouds from RGB-D Images

Generated point clouds from RGB-D data using the unproject_depth_image function:
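
A minimal sketch of the unprojection and union, assuming the two RGB-D frames and their cameras (rgb1, mask1, depth1, cameras1, and the *2 counterparts) are already loaded; the import path and argument order follow my reading of the starter code and may differ slightly.

import torch
from pytorch3d.structures import Pointclouds
from starter.utils import unproject_depth_image  # provided with the assignment

# Unproject each RGB-D frame to a colored point cloud, then take the union.
# rgb1/mask1/depth1/cameras1 (and the *2 versions) are assumed to be loaded
# from the provided RGB-D data.
points1, rgba1 = unproject_depth_image(rgb1, mask1, depth1, cameras1)
points2, rgba2 = unproject_depth_image(rgb2, mask2, depth2, cameras2)

pc1 = Pointclouds(points=[points1], features=[rgba1[:, :3]])
pc2 = Pointclouds(points=[points2], features=[rgba2[:, :3]])
pc_union = Pointclouds(
    points=[torch.cat([points1, points2], dim=0)],
    features=[torch.cat([rgba1[:, :3], rgba2[:, :3]], dim=0)],
)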

Point cloud 1

Point cloud from first RGB-D image

Point cloud 2

Point cloud from second RGB-D image

Combined point cloud

Combined point cloud (union of both images)

5.2 Parametric Functions

Created a torus point cloud by sampling its parametric equations:

# Torus parametric equations:
x = a * (cos(theta) + b) * cos(phi)
y = a * (cos(theta) + b) * sin(phi)
z = a * sin(theta)
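
A minimal sketch of sampling these equations on a (theta, phi) grid; a = 1.0 and b = 2.0 are assumed radii, not necessarily those used for the figure.

import torch

# Sample the parametric torus on an n x n (theta, phi) grid.
# a (tube radius) and b (relative center radius) are assumed values.
a, b, n = 1.0, 2.0, 200
theta = torch.linspace(0, 2 * torch.pi, n)
phi = torch.linspace(0, 2 * torch.pi, n)
Theta, Phi = torch.meshgrid(theta, phi, indexing="ij")

x = a * (torch.cos(Theta) + b) * torch.cos(Phi)
y = a * (torch.cos(Theta) + b) * torch.sin(Phi)
z = a * torch.sin(Theta)
points = torch.stack([x.flatten(), y.flatten(), z.flatten()], dim=1)

# Color each point by its normalized position so the shape reads clearly.
colors = (points - points.min()) / (points.max() - points.min())
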
Torus parametric

360-degree view of torus point cloud with visible hole

5.3 Implicit Surfaces

Created a torus mesh by evaluating its implicit function on a voxel grid and extracting the zero level set with the marching cubes algorithm:

# Implicit function for torus: (b - sqrt(x² + y²))² + z² - a² = 0
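
A minimal sketch of evaluating this function on a voxel grid and running marching cubes, assuming the mcubes package and a = 1.0, b = 2.0; the grid extent and resolution are also assumed values.

import mcubes
import torch
from pytorch3d.structures import Meshes

# Evaluate the implicit torus on a voxel grid and extract the zero level set.
# a, b, the grid extent, and the resolution are assumed values.
a, b = 1.0, 2.0
min_value, max_value, res = -3.1, 3.1, 64
X, Y, Z = torch.meshgrid(
    *[torch.linspace(min_value, max_value, res)] * 3, indexing="ij"
)
voxels = (b - torch.sqrt(X ** 2 + Y ** 2)) ** 2 + Z ** 2 - a ** 2

vertices, faces = mcubes.marching_cubes(voxels.numpy(), 0)
vertices = torch.tensor(vertices, dtype=torch.float32)
faces = torch.tensor(faces.astype("int64"))

# Rescale vertex coordinates from voxel indices back to world coordinates.
vertices = (vertices / (res - 1)) * (max_value - min_value) + min_value
torus_mesh = Meshes(verts=[vertices], faces=[faces])
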
Torus implicit

Torus mesh from implicit function

Tradeoffs: Mesh vs Point Cloud

Mesh Rendering:

  • ✓ Higher rendering quality with smooth surfaces
  • ✓ Better lighting and shading effects
  • ✓ More memory efficient for complex shapes
  • ✓ Faster rendering for large scenes
  • ✗ More complex to generate (requires marching cubes)

Point Cloud Rendering:

  • ✓ Simple to generate from mathematical functions
  • ✓ Easy to modify and manipulate
  • ✓ Good for sparse or noisy data
  • ✗ Lower visual quality (discrete points)
  • ✗ No surface information

6. Do Something Fun

Created an immersive "flying through torus" animation with dynamic camera movement:

Flying through torus

Immersive flying through torus animation

Creative Features:
  • Spiral camera path through the torus hole (see the camera-path sketch below)
  • Varying spiral radius for dynamic movement
  • Wide FOV (90°) for immersive tunnel effect
  • Bonus: Multiple torus scene with 5 tori
Multiple torus scene

Multiple torus scene with 5 tori arranged in a circle
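
A minimal sketch of the spiral camera path referenced above; all constants (number of frames, radius range, travel along the torus axis) are illustrative assumptions rather than the exact values used for the animation.

import numpy as np
import torch
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

# Spiral fly-through: the eye circles the torus axis with a shrinking radius
# while travelling along it, and looks slightly ahead along the path.
num_frames = 90
t = np.linspace(0, 4 * np.pi, num_frames)        # two full turns
radius = np.linspace(1.5, 0.3, num_frames)       # spiral tightens toward the hole
z = np.linspace(-4.0, 4.0, num_frames)           # travel along the torus axis

camera_list = []
for ti, ri, zi in zip(t, radius, z):
    eye = torch.tensor([[ri * np.cos(ti), ri * np.sin(ti), zi]], dtype=torch.float32)
    at = torch.tensor([[0.0, 0.0, zi + 1.0]])    # look a bit ahead of the camera
    R, T = look_at_view_transform(eye=eye, at=at)
    camera_list.append(FoVPerspectiveCameras(R=R, T=T, fov=90.0))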

7. Sampling Points on Meshes (Extra Credit)

Implemented stratified sampling on triangle meshes using area-weighted face selection and barycentric coordinate sampling:

# Stratified sampling algorithm:
# 1. Calculate face areas using cross product of edges
# 2. Sample faces with probability proportional to area
# 3. Sample barycentric coordinates uniformly
# 4. Compute 3D points using barycentric interpolation
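
A minimal sketch of these four steps, assuming verts (V, 3) and faces (F, 3) come from the loaded cow mesh; variable names are illustrative rather than the exact implementation.

import torch

# Sample points on a triangle mesh: area-weighted face selection followed by
# uniform barycentric sampling. `verts` and `faces` are assumed inputs.
def sample_points_on_mesh(verts, faces, num_samples):
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]

    # 1. Face areas from the cross product of two edge vectors.
    areas = 0.5 * torch.linalg.norm(torch.cross(v1 - v0, v2 - v0, dim=1), dim=1)

    # 2. Sample faces with probability proportional to area.
    face_idx = torch.multinomial(areas, num_samples, replacement=True)

    # 3. Uniform barycentric coordinates via the square-root trick.
    u, v = torch.rand(num_samples), torch.rand(num_samples)
    su = torch.sqrt(u)
    w0, w1, w2 = 1 - su, su * (1 - v), su * v

    # 4. Barycentric interpolation of the sampled faces' vertices.
    return (
        w0[:, None] * v0[face_idx]
        + w1[:, None] * v1[face_idx]
        + w2[:, None] * v2[face_idx]
    )
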
10 samples

10 samples - Very sparse

100 samples

100 samples - Better coverage

1000 samples

1000 samples - Good detail

10000 samples

10000 samples - Very dense