Assignment 5: Point Cloud Processing

Rodrigo Lopes Catto | rlopesca


Q1. Classification Model

Test Accuracy after 250 epochs: 0.9706

Successful Predictions

True: chair | Pred: chair

True: lamp | Pred: lamp

True: vase | Pred: vase

Failed Predictions

True: chair | Pred: lamp

True: lamp | Pred: vase

True: vase | Pred: chair

Explanation for Failed Predictions:

These examples show how shapes from different classes can end up looking surprisingly similar: a compact chair is mistaken for a lamp, a rounded lamp is labeled as a vase, and a tall vase is predicted as a chair. In each case the overall geometry overlaps with the typical silhouette of another class, so the global shape cues become ambiguous and the prediction drifts toward the wrong label.


Q2. Segmentation Model

Test Accuracy after 250 epochs: 0.9006

Ground Truth vs. Prediction GIFs (note: the GIFs rotate only once, when the page is first loaded)

Ground Truth

Predicted

Successful Predictions

Ground Truth

Prediction

Acc: 0.998

Ground Truth

Prediction

Acc: 0.998

Ground Truth

Prediction

Acc: 0.994

Failed Predictions

Ground Truth

Prediction

Acc: 0.474

Ground Truth

Prediction

Acc: 0.482

Across the five examples, the successful segmentations are very clean. The objects with accuracies around 0.99 have well-defined parts, and the predicted labels stay consistent across the seat, back, and legs: when the geometry is regular and the part boundaries are obvious, the prediction matches the ground truth almost exactly. The two failed predictions (≈0.47–0.48) show the opposite behavior: labels bleed across part boundaries, and neighboring parts end up effectively merged. These failures occur mainly when the shape is noisy or the part structure is ambiguous, so the model ends up guessing and mixing labels in the transition regions.
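
For reference, the per-object numbers reported above are per-point accuracies. Below is a minimal sketch of one way such a value can be computed, assuming pred_labels and gt_labels are integer tensors with one part label per point; the function name is illustrative, not taken from the assignment code.

```python
import torch

def per_point_accuracy(pred_labels: torch.Tensor, gt_labels: torch.Tensor) -> float:
    """Fraction of points whose predicted part label matches the ground truth.

    pred_labels, gt_labels: integer tensors of shape (N,), one label per point.
    """
    return (pred_labels == gt_labels).float().mean().item()

# Hypothetical usage on a single object, assuming seg_model outputs per-point logits:
# acc = per_point_accuracy(seg_model(points).argmax(dim=-1), gt_labels)
```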

Q3. Robustness Analysis

Robustness Analysis: Rotation around X-axis

Classification -> Test accuracy with 90.0° rotation: 0.3158

Successful Predictions

True: chair | Pred: chair

True: lamp | Pred: lamp

True: vase | Pred: vase

Failed Predictions

True: chair | Pred: lamp

True: lamp | Pred: vase

True: vase | Pred: chair

Segmentation -> Test accuracy with 45.0° rotation: 0.6460

Successful Predictions

Ground Truth

Prediction

Acc: 0.908

Ground Truth

Prediction

Acc: 0.870

Ground Truth

Prediction

Acc: 0.826

Failed Predictions

Ground Truth

Prediction

Acc: 0.123

Ground Truth

Prediction

Acc: 0.214

Explanation for Robustness Results (Rotation):

Rotating the objects around the X-axis significantly degrades both models: classification accuracy falls from 0.9706 to 0.3158 with a 90° rotation, and segmentation accuracy falls from 0.9006 to 0.6460 with a 45° rotation. This drop indicates that the models are highly sensitive to orientation changes, likely because the features they learn are tied to the canonical orientation of the training data. Once the objects are rotated, the geometric cues no longer appear where the models expect them, so the classifier confuses classes and the segmentation network assigns part labels to the wrong regions.
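
As a concrete reference, here is a minimal sketch of how a rotated evaluation like this can be set up; the (B, N, 3) tensor layout, the helper name rotate_x, and the commented-out evaluation lines are assumptions for illustration, not the exact code used to produce the numbers above.

```python
import math
import torch

def rotate_x(points: torch.Tensor, degrees: float) -> torch.Tensor:
    """Rotate a batch of point clouds of shape (B, N, 3) about the X-axis."""
    theta = math.radians(degrees)
    c, s = math.cos(theta), math.sin(theta)
    # Rotation about X keeps the x-coordinate fixed and rotates y and z.
    rot = torch.tensor([[1.0, 0.0, 0.0],
                        [0.0,   c,  -s],
                        [0.0,   s,   c]],
                       dtype=points.dtype, device=points.device)
    return points @ rot.T

# Hypothetical usage: evaluate the trained classifier on rotated test clouds.
# rotated = rotate_x(test_points, 90.0)
# preds = cls_model(rotated).argmax(dim=-1)
# acc = (preds == test_labels).float().mean().item()
```

Because only the inputs are rotated while the trained weights stay fixed, any accuracy drop can be attributed directly to the orientation change.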

Robustness Analysis: Number of points

Model            Points   Test Accuracy
Classification     3000          0.9811
Classification      100          0.9381
Classification       10          0.2476
Segmentation        800          0.9007
Segmentation        100          0.8283
Segmentation         10          0.7063

Explanation for Robustness Results (Number of Points):

Both models show a clear trend: as the number of points per object decreases, accuracy degrades, and the drop is sharpest at the extreme of 10 points. Classification falls from 0.9811 with 3000 points to 0.2476 with 10, while segmentation degrades more gracefully, from 0.9007 with 800 points to 0.7063 with 10. The classifier still performs well with only 100 points (0.9381), which suggests the global shape can be recognized from a fairly sparse sampling; with just 10 points, however, the cloud no longer carries enough structure to distinguish the classes, so critical details are lost and misclassifications become frequent.
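
Similarly, a minimal sketch of how the sparser test settings can be reproduced, assuming each test cloud is stored in a (B, N, 3) tensor with N at least as large as the target count; the helper name subsample_points is illustrative.

```python
import torch

def subsample_points(points: torch.Tensor, num_points: int) -> torch.Tensor:
    """Randomly keep num_points points from each cloud in a (B, N, 3) batch.

    The same random index subset is applied to every cloud in the batch,
    which is a simple way to mimic the sparser test settings in the table.
    """
    idx = torch.randperm(points.shape[1])[:num_points]
    return points[:, idx, :]

# Hypothetical usage: re-evaluate the classifier with only 100 points per object.
# sparse = subsample_points(test_points, 100)
# preds = cls_model(sparse).argmax(dim=-1)
# acc = (preds == test_labels).float().mean().item()
```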