16-825 Learning for 3D Vision
Successfully rendered the cow mesh using PyTorch3D with basic lighting and camera setup.
Created a 360-degree turntable animation of the cow mesh using the look_at_view_transform function.
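The turntable can be sketched as a loop over azimuth angles; the helper below is hypothetical and only mirrors the eye positions that PyTorch3D's `look_at_view_transform(dist, elev, azim)` derives from distance, elevation, and azimuth:

```python
import numpy as np

def turntable_cameras(dist=3.0, elev_deg=0.0, n_views=36):
    """Camera eye positions orbiting the object at fixed distance and elevation
    (hypothetical helper sketching the geometry behind look_at_view_transform)."""
    azim = np.deg2rad(np.linspace(0, 360, n_views, endpoint=False))
    elev = np.deg2rad(elev_deg)
    # Spherical-to-Cartesian: orbit in the horizontal plane at a fixed elevation.
    x = dist * np.cos(elev) * np.sin(azim)
    y = dist * np.sin(elev) * np.ones_like(azim)
    z = dist * np.cos(elev) * np.cos(azim)
    return np.stack([x, y, z], axis=-1)  # (n_views, 3)

eyes = turntable_cameras()
```

Rendering the mesh once per view and stacking the frames gives the 360-degree animation.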
Recreated the dolly zoom effect by adjusting camera distance and FOV simultaneously.
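The key relation behind the dolly zoom: for a subject of fixed width to fill the same fraction of the frame, the camera distance must satisfy width = 2 · dist · tan(fov/2). A minimal sketch (the `subject_width` value is illustrative):

```python
import numpy as np

def dolly_zoom_distance(fov_deg, subject_width=2.0):
    # Keep the subject's apparent size constant: width = 2 * dist * tan(fov / 2),
    # so as the FOV widens the camera must move closer.
    return subject_width / (2.0 * np.tan(np.deg2rad(fov_deg) / 2.0))

fovs = np.linspace(5, 120, 10)   # sweep the field of view
dists = dolly_zoom_distance(fovs)  # matching camera distances
```

Sweeping FOV while setting the camera distance from this formula produces the classic background-warping effect with a stationary subject.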
Manually constructed a tetrahedron mesh with vertices and faces.
The tetrahedron has 4 vertices and 4 triangular faces.
Constructed a cube mesh using triangular faces.
The cube has 8 vertices and 12 triangular faces (each of the 6 square faces is split into 2 triangles).
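A sketch of the cube construction (vertex index = x·4 + y·2 + z over the unit cube's corners; the triangulation shown is one of several valid choices):

```python
import numpy as np

# Unit cube: 8 corners; each of the 6 square faces split into 2 triangles -> 12 faces.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=np.float32)
faces = np.array([
    [0, 1, 3], [0, 3, 2],  # x = 0 face
    [4, 6, 7], [4, 7, 5],  # x = 1 face
    [0, 4, 5], [0, 5, 1],  # y = 0 face
    [2, 3, 7], [2, 7, 6],  # y = 1 face
    [0, 2, 6], [0, 6, 4],  # z = 0 face
    [1, 5, 7], [1, 7, 3],  # z = 1 face
], dtype=np.int64)
```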
Applied a gradient texture that smoothly transitions from front to back of the cow.
Color Choices: I chose color 1 to be [0, 0.5, 1] and color 2 to be [1, 0.5, 0], creating a gradient from blue at the front to orange at the back.
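The per-vertex interpolation can be sketched as below, assuming the z coordinate is the front-to-back axis (the three test vertices are illustrative):

```python
import numpy as np

def gradient_colors(verts, color1, color2):
    """Per-vertex colors interpolated along z (front-to-back axis assumed here)."""
    z = verts[:, 2]
    alpha = (z - z.min()) / (z.max() - z.min())  # 0 at the front, 1 at the back
    return alpha[:, None] * np.asarray(color2, dtype=float) \
        + (1 - alpha[:, None]) * np.asarray(color1, dtype=float)

verts = np.array([[0, 0, 0.0], [0, 0, 0.5], [0, 0, 1.0]])
colors = gradient_colors(verts, [0, 0.5, 1], [1, 0.5, 0])
```

In PyTorch3D, the resulting (N, 3) color array would be wrapped in `TexturesVertex` and attached to the mesh.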
Applied different rotation and translation transformations to achieve specific viewpoints.
Transform 1: Rotation Z
Transform 2: Rotation Y + Translation
Transform 3: Translation Z
Transform 4: Combined Translation
R_relative is a rotation matrix that rotates the camera's coordinate frame, while T_relative is a translation vector that moves the camera's coordinate frame in space. By composing these two, we can realize any rigid transformation of the camera.
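The composition can be sketched as follows; note the column-vector convention x_cam = R·x_world + T is an assumption here (PyTorch3D's own cameras use a row-vector convention, so the multiplication order there differs):

```python
import numpy as np

def compose(R0, T0, R_relative, T_relative):
    """Stack a relative rotation/translation on top of an initial camera extrinsic,
    assuming points map as x_cam = R @ x_world + T."""
    R = R_relative @ R0
    T = R_relative @ T0 + T_relative
    return R, T

# Example: rotate the camera frame 90 degrees about z, then push it back along z.
R0, T0 = np.eye(3), np.zeros(3)
Rz = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])
R, T = compose(R0, T0, Rz, np.array([0., 0., 3.]))
```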
Constructed point clouds from RGB-D images using depth unprojection.
Point Cloud from Image 1
Point Cloud from Image 2
Combined Point Cloud
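The depth-unprojection step above can be sketched with a pinhole camera model; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the constant depth map below are illustrative values, not the dataset's actual calibration:

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth map to a 3D point cloud with a pinhole camera model:
    x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)  # toy 4x4 depth map at constant depth
pts = unproject_depth(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

Applying this to each RGB-D image (and transforming each cloud by its camera extrinsics) yields the individual and combined point clouds shown above.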
Rendered 3D shapes using parametric equations.
Torus (Parametric)
Klein Bottle (Parametric)
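For the torus, the parametric sampling can be sketched as below (major radius R and minor radius r are illustrative values):

```python
import numpy as np

def parametric_torus(R=1.0, r=0.3, n=100):
    """Sample surface points from the torus parametric equations:
    x = (R + r cos(theta)) cos(phi), y = (R + r cos(theta)) sin(phi), z = r sin(theta)."""
    theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, n),
                             np.linspace(0, 2 * np.pi, n))
    x = (R + r * np.cos(theta)) * np.cos(phi)
    y = (R + r * np.cos(theta)) * np.sin(phi)
    z = r * np.sin(theta)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

pts = parametric_torus()
```

The sampled points can then be rendered directly as a point cloud; the Klein bottle follows the same pattern with its own parametric equations.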
Rendered 3D shapes using implicit functions and marching cubes.
Torus (Implicit)
Ellipsoid (Implicit)
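The implicit pipeline evaluates F(x, y, z) on a voxel grid and extracts the zero level set with marching cubes. A sketch of the grid evaluation for the torus (grid resolution and extent are illustrative; the extraction itself would be handled by a marching-cubes implementation such as the one in scikit-image):

```python
import numpy as np

def torus_implicit_grid(R=1.0, r=0.3, n=64, extent=1.5):
    """Evaluate the torus implicit function on a voxel grid; the surface is the
    zero level set F = (sqrt(x^2 + y^2) - R)^2 + z^2 - r^2 = 0."""
    lin = np.linspace(-extent, extent, n)
    x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
    return (np.sqrt(x**2 + y**2) - R)**2 + z**2 - r**2

F = torus_implicit_grid()
```

The ellipsoid uses the same recipe with F = x²/a² + y²/b² + z²/c² − 1.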
Rendering a mesh is much faster than rendering a point cloud, and it produces complete surfaces without holes. Point cloud rendering takes much longer and has holes wherever depth information is missing, but it gives an accurate view of the object wherever data is present, and captured texture and color map easily onto the render. As resolution scales up, mesh-based volumetric rendering consumes memory quickly, while point cloud memory usage scales more gracefully. Point clouds are also easier to construct and render: there is no need to define a function for the shape; the depth data can simply be unprojected and rendered directly.
Created an angel emoji by combining a sphere (head) and torus (halo) using implicit function union.
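The union of two implicit surfaces is the pointwise minimum of their functions. A sketch under assumed placements (the halo's center, radii, and orientation below are illustrative, not the exact values used):

```python
import numpy as np

def sphere_F(p, c, r):
    """Implicit sphere: negative inside, positive outside."""
    return np.sum((p - c)**2, axis=-1) - r**2

def torus_F(p, c, R, r):
    """Implicit torus lying in the horizontal (x-z) plane around center c."""
    q = p - c
    return (np.sqrt(q[..., 0]**2 + q[..., 2]**2) - R)**2 + q[..., 1]**2 - r**2

def angel_F(p):
    """Union of head and halo: min of the two implicit functions."""
    head = sphere_F(p, np.array([0., 0., 0.]), 1.0)
    halo = torus_F(p, np.array([0., 1.3, 0.]), 0.8, 0.1)  # halo floating above the head
    return np.minimum(head, halo)
```

Running marching cubes on `angel_F` evaluated over a grid extracts the combined surface in one pass.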
Implemented stratified sampling to generate point clouds from the cow mesh with varying densities.
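A common way to realize this (sketched here; the standard recipe is area-weighted face selection followed by uniform barycentric sampling, which may differ in detail from the exact scheme used):

```python
import numpy as np

def sample_points_on_mesh(verts, faces, n_samples, rng=None):
    """Pick a face with probability proportional to its area, then a uniform
    point inside it via barycentric coordinates."""
    rng = rng or np.random.default_rng(0)
    tris = verts[faces]  # (F, 3, 3)
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
    u, v = rng.random(n_samples), rng.random(n_samples)
    # Fold (u, v) into the lower triangle so the samples are uniform over each face.
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

# Demo on a single triangle; the cow mesh's verts/faces would be passed the same way.
tri_verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
tri_faces = np.array([[0, 1, 2]])
pts = sample_points_on_mesh(tri_verts, tri_faces, 200)
```

Varying `n_samples` produces the point clouds of different densities.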