Environment & Setup
Torch 2.2.2 · PyTorch3D (CPU build) · CUDA: False · Device: CPU
Environment created following the official PyTorch3D CPU instructions. Rendering is done with the MeshRenderer and PointsRenderer from PyTorch3D. All outputs on this page are produced by the starter code with small additions described below.
- Renderer convention. Renderers return (B, H, W, 4); I display the RGB channels.
- Lighting. For meshes I use a PointLights source and a HardPhongShader with a white background.
- Cameras. Turntables use look_at_view_transform with fixed distance/elevation and a sweeping azimuth.
0.1 — Rendering Your First Mesh
I load the provided cow mesh, send it to the appropriate device, and render a single frame using a perspective camera and Phong shading. This verified my pipeline (I/O → rasterization → shading) and ensured textures/vertex colors were handled correctly.
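For reference, a minimal sketch of this pipeline, assuming the starter's data/cow.obj path and the lighting/camera conventions listed in the setup notes:

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, HardPhongShader, MeshRasterizer, MeshRenderer,
    PointLights, RasterizationSettings, look_at_view_transform,
)

device = torch.device("cpu")
mesh = load_objs_as_meshes(["data/cow.obj"], device=device)

# Camera and light following the turntable convention used on this page.
R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=0.0)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
lights = PointLights(location=[[0.0, 0.0, -3.0]], device=device)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=512),
    ),
    shader=HardPhongShader(device=device, cameras=cameras, lights=lights),
)
image = renderer(mesh)            # (1, H, W, 4), per the renderer convention
rgb = image[0, ..., :3].cpu().numpy()
```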
1 — Practicing with Cameras
1.1 — 360-degree Renders (5 pts)
I render a full turntable of the cow by sweeping azimuth while keeping distance and elevation fixed. look_at_view_transform returns the camera (R,T) for each view.
Snippet:
```python
import numpy as np
from pytorch3d.renderer import FoVPerspectiveCameras, look_at_view_transform

frames = []
for az in np.linspace(0, 360, N, endpoint=False):   # N evenly spaced views
    R, T = look_at_view_transform(dist=3.0, elev=10.0, azim=float(az))
    cams = FoVPerspectiveCameras(R=R, T=T, device=device)
    frames.append(renderer(mesh, cameras=cams)[0, ..., :3])
```
1.2 — Re-creating the Dolly Zoom (10 pts)
The dolly zoom keeps subject size constant while the FOV changes. The subject's on-screen size is inversely proportional to d · tan(f/2) for camera distance d and FOV f, so holding it constant requires d(f) = d₀ · tan(f₀/2) / tan(f/2).
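A sketch of the per-frame distance computation; the starting FOV of 60°, distance of 3, and the sweep range are illustrative, not the exact values I rendered with:

```python
import numpy as np

def dolly_distance(fov_deg, fov0_deg=60.0, d0=3.0):
    """Distance that keeps the subject's on-screen size fixed as FOV changes."""
    return d0 * np.tan(np.deg2rad(fov0_deg) / 2) / np.tan(np.deg2rad(fov_deg) / 2)

fovs = np.linspace(5.0, 120.0, 30)         # FOV sweep for the effect
dists = [dolly_distance(f) for f in fovs]  # matching camera distances
```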
2 — Practicing with Meshes
I built simple meshes programmatically: a tetrahedron (4 faces) and a triangulated cube (12 triangles). For both, I created Meshes from explicit vertex/face tensors and used TexturesVertex for color. Turntables confirm face winding and shading are correct.
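A sketch of the tetrahedron construction with a single flat vertex color (the cube follows the same pattern with 8 vertices and 12 faces); faces are wound counter-clockwise so normals point outward:

```python
import torch
from pytorch3d.renderer import TexturesVertex
from pytorch3d.structures import Meshes

# Regular tetrahedron: 4 vertices, 4 triangular faces (outward winding).
verts = torch.tensor(
    [[1.0, 1.0, 1.0], [1.0, -1.0, -1.0], [-1.0, 1.0, -1.0], [-1.0, -1.0, 1.0]]
)
faces = torch.tensor([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
colors = torch.ones_like(verts)[None] * torch.tensor([0.7, 0.7, 1.0])
tetra = Meshes(
    verts=[verts], faces=[faces], textures=TexturesVertex(verts_features=colors)
)
```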
3 — Re-texturing a Mesh
I re-textured the cow via per-vertex colors. Let V be the packed vertices; I normalized a coordinate (e.g., z) and mapped it to a color gradient. Colors were attached with TexturesVertex(verts_features), so Phong shading combines the procedural color with lighting.
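A sketch of the gradient texturing, assuming a blue-to-red gradient along z (the endpoint colors are my choices) and that mesh is the loaded cow:

```python
import torch
from pytorch3d.renderer import TexturesVertex

verts = mesh.verts_packed()                 # (V, 3) packed vertices
z = verts[:, 2]
alpha = (z - z.min()) / (z.max() - z.min()) # normalize z to [0, 1]
color1 = torch.tensor([0.0, 0.0, 1.0])      # blue at z_min (assumed)
color2 = torch.tensor([1.0, 0.0, 0.0])      # red at z_max (assumed)
colors = (1 - alpha)[:, None] * color1 + alpha[:, None] * color2
mesh.textures = TexturesVertex(verts_features=colors[None])
```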
4 — Camera Transformations
I explored yaw/pitch/roll and truck/boom/dolly by composing a relative transform R_rel, T_rel with the base camera R₀, T₀. Composition: R = R_rel · R₀, T = R_rel · T₀ + T_rel. Screenshots below show the effect of each individual transform.
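A sketch of that composition with unbatched (3, 3) / (3,) tensors; the 90° roll is one illustrative relative transform:

```python
import math
import torch
from pytorch3d.renderer import look_at_view_transform

def compose_camera(R0, T0, R_rel, T_rel):
    """Compose a relative transform with the base camera (R0, T0)."""
    R = R_rel @ R0            # R = R_rel · R0
    T = R_rel @ T0 + T_rel    # T = R_rel · T0 + T_rel
    return R, T

# Base camera, unbatched to (3, 3) and (3,).
R0, T0 = look_at_view_transform(dist=3.0, elev=0.0, azim=0.0)
R0, T0 = R0[0], T0[0]

# Example: 90° roll about the camera z-axis, no extra translation.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R_roll = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
R, T = compose_camera(R0, T0, R_roll, torch.zeros(3))
```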
5.1 — Point Clouds from RGB-D
Given two RGB-D frames of a plant, I unprojected each depth map to a point cloud using the camera intrinsics/extrinsics: for each pixel I formed [x_ndc, y_ndc, depth] and called camera.unproject_points(..., world_coordinates=True), then collected the corresponding RGB values and constructed Pointclouds (a sketch follows the list below).
- Three clouds: first view, second view, and their union.
- Rendering: PointsRenderer with a small radius for clean structure; 360° turntable.
- Performance: I cap the union to ~200k points for CPU speed; progress bars help estimate runtime.
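As promised above, a sketch of the per-frame unprojection. The NDC grid signs assume PyTorch3D's screen convention (+X left, +Y up) and may need flipping for other data; rgb, depth, and mask are (H, W, 3) and (H, W) tensors from one frame:

```python
import torch
from pytorch3d.structures import Pointclouds

def depth_to_pointcloud(rgb, depth, mask, camera):
    """Unproject one RGB-D frame into a colored point cloud."""
    H, W = depth.shape
    # NDC pixel grid: leftmost column → x = +1, top row → y = +1.
    y, x = torch.meshgrid(
        torch.linspace(1, -1, H), torch.linspace(1, -1, W), indexing="ij"
    )
    valid = mask > 0.5
    xy_depth = torch.stack([x[valid], y[valid], depth[valid]], dim=1)
    points = camera.unproject_points(xy_depth, world_coordinates=True)
    return Pointclouds(points=[points], features=[rgb[valid]])
```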
5.2 — Parametric Surfaces
Torus. For angles u,v ∈ [0,2π) with major radius R and minor radius r: x=(R+r cos v)cos u, y=(R+r cos v)sin u, z=r sin v. I sampled a dense grid and rendered as points; using a small point radius preserves the hole.
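A sketch of the torus sampling; the radii, grid resolution, and position-based coloring are my choices:

```python
import numpy as np
import torch
from pytorch3d.structures import Pointclouds

R_major, r_minor, n = 1.0, 0.4, 200
u, v = np.meshgrid(
    np.linspace(0, 2 * np.pi, n, endpoint=False),
    np.linspace(0, 2 * np.pi, n, endpoint=False),
)
x = (R_major + r_minor * np.cos(v)) * np.cos(u)
y = (R_major + r_minor * np.cos(v)) * np.sin(u)
z = r_minor * np.sin(v)

points = torch.from_numpy(np.stack([x, y, z], -1).reshape(-1, 3)).float()
colors = (points - points.min()) / (points.max() - points.min())  # position → RGB
torus = Pointclouds(points=[points], features=[colors])
```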
Möbius strip (extra). A band with a half-twist. One parameterization: x = (1 + (v/2)·cos(u/2))·cos u, y = (1 + (v/2)·cos(u/2))·sin u, z = (v/2)·sin(u/2), with u ∈ [0, 2π), v ∈ [−1, 1].
5.3 — Implicit Surfaces → Marching Cubes
I represented geometry as the zero level set of F(x,y,z) on a voxel grid and extracted a mesh with marching cubes. For a torus: F = (√(x²+y²)−R)² + z² − r².
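A sketch of the extraction, assuming the mcubes package (skimage.measure.marching_cubes works similarly); the resolution and grid bounds are illustrative:

```python
import mcubes
import numpy as np
import torch
from pytorch3d.structures import Meshes

R_major, r_minor, n = 1.0, 0.4, 64
lin = np.linspace(-1.6, 1.6, n)
x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
F = (np.sqrt(x**2 + y**2) - R_major) ** 2 + z**2 - r_minor**2

verts, faces = mcubes.marching_cubes(F, 0.0)   # extract the zero level set
verts = verts / (n - 1) * 3.2 - 1.6            # voxel indices → world coords
mesh = Meshes(
    verts=[torch.from_numpy(verts).float()],
    faces=[torch.from_numpy(faces.astype(np.int64))],
)
```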
Meshes vs Point Clouds (quick discussion)
- Quality: Meshes give crisp silhouettes and specular highlights; point clouds show gaps unless densely sampled.
- Speed & Memory: Rasterizing meshes is generally faster than splatting millions of points; point clouds are trivial to generate from RGB-D.
- Ease: Point clouds are direct; meshes require connectivity (e.g., marching cubes).
6 — Do Something Fun
I used the classic Stanford Bunny (OBJ without MTL) and added a playful effect: a subtle “breathing” displacement along vertex normals plus animated procedural colors.
Command used: python -m starter.fun_quick --obj data/stanford_bunny.obj --output images/fun_scene.gif
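A sketch of the breathing effect, assuming mesh is the loaded bunny; the amplitude and period are illustrative, and the animated colors reuse the TexturesVertex pattern from section 3:

```python
import math

def breathe(mesh, t, amplitude=0.03, period=2.0):
    """Displace each vertex along its normal by a time-varying offset."""
    offset = amplitude * math.sin(2 * math.pi * t / period)
    return mesh.offset_verts(offset * mesh.verts_normals_packed())

# One displaced mesh per frame; render each as in section 1.1.
frames = [breathe(mesh, t / 24.0) for t in range(48)]
```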
7 — (Extra) Sampling Points on Meshes
I sampled a point cloud uniformly over the cow’s surface area using stratified sampling with area-proportional face selection and uniform barycentric coordinates (√-trick). Below: mesh vs sampled point clouds at different densities.
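A sketch of the sampler (sample_points_on_mesh is my helper name); pytorch3d.ops.sample_points_from_meshes is the built-in equivalent:

```python
import torch

def sample_points_on_mesh(mesh, n_points):
    """Area-proportional face choice + uniform barycentric coords (√-trick)."""
    verts, faces = mesh.verts_packed(), mesh.faces_packed()
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    areas = 0.5 * torch.cross(v1 - v0, v2 - v0, dim=1).norm(dim=1)
    idx = torch.multinomial(areas, n_points, replacement=True)

    u1, u2 = torch.rand(n_points, 1), torch.rand(n_points, 1)
    s = torch.sqrt(u1)                  # √-trick → uniform over each triangle
    w0, w1, w2 = 1 - s, s * (1 - u2), s * u2
    return w0 * v0[idx] + w1 * v1[idx] + w2 * v2[idx]
```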