Assignment 5

Q1. Classification Model

Test Accuracy: 95.80%

Random Test Samples

Random Sample 1
Random Sample 2
Random Sample 3
Random Sample 4
Random Sample 5
Random Sample 6

Failure Cases

Chair Failure
Vase Failure
Lamp Failure
The failures occur mostly at ambiguous class boundaries where objects share similar geometric features. For example, the vase and lamp cases shown in the visualizations are very similar to each other, and the chair case is a folded chair, which makes it difficult to classify. The model may rely too heavily on overall geometric features and therefore perform poorly on objects that lie near classification boundaries.
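One way to make "near the classification boundary" concrete is to look at the margin between the top two softmax probabilities: a small margin means the model is nearly tied between two classes. The sketch below is illustrative only; the logits and class names (chair, vase, lamp) are hypothetical, not taken from the trained model.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ambiguity_margin(logits):
    """Gap between the top-2 class probabilities; a small gap
    indicates the sample sits near a decision boundary."""
    p = np.sort(softmax(logits), axis=-1)
    return p[..., -1] - p[..., -2]

# Hypothetical logits for three test samples over three classes
# (chair, vase, lamp) -- purely for illustration.
logits = np.array([
    [4.0, 0.1, 0.2],   # confident chair
    [1.1, 1.0, 0.2],   # vase vs. lamp nearly tied -> ambiguous
    [0.3, 2.0, 1.9],   # another near-tie
])
flags = ambiguity_margin(logits) < 0.2   # True where prediction is ambiguous
```

Flagging test samples this way would let the failure cases above be found automatically rather than by eye.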

Q2. Segmentation Model

Test Accuracy: 85.42%

Segmentation Results

Object 1 - Ground Truth

Object 1 GT

Object 1 - Prediction (42.0%)

Object 1 Pred

Object 2 - Ground Truth

Object 2 GT

Object 2 - Prediction (42.9%)

Object 2 Pred

Object 3 - Ground Truth

Object 3 GT

Object 3 - Prediction (89.2%)

Object 3 Pred

Object 4 - Ground Truth

Object 4 GT

Object 4 - Prediction (99.1%)

Object 4 Pred

Object 5 - Ground Truth

Object 5 GT

Object 5 - Prediction (99.0%)

Object 5 Pred
The model achieves about 85% overall accuracy. On the first two cases, the performance is noticeably worse than on the rest. A possible reason is that the model struggles with complex geometric features, for example when there are handles on the chair or when the boundary between parts is not clearly defined. In cases where the segmentation boundary is clear, such as ordinary chairs, the model achieves much higher accuracy.
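The per-object percentages above (e.g. 42.0%, 99.1%) are presumably point-wise accuracies: the fraction of points whose predicted part label matches the ground truth. A minimal sketch of that metric, with toy labels standing in for real predictions:

```python
import numpy as np

def seg_accuracy(pred_labels, gt_labels):
    """Point-wise segmentation accuracy for one object: fraction of
    points whose predicted part label matches the ground-truth label."""
    pred_labels = np.asarray(pred_labels)
    gt_labels = np.asarray(gt_labels)
    return float((pred_labels == gt_labels).mean())

# Toy example: 10 points with part labels {0: seat, 1: back} (illustrative).
gt   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
pred = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1])
acc = seg_accuracy(pred, gt)   # 8 of 10 points correct -> 0.8
```

Averaging this quantity over all test objects gives the overall test accuracy reported above.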

Q3. Robustness Analysis

Experiment 1: Point Density Robustness

Classification Results

- 10000 points: 96.22% (baseline)
- 5000 points: 96.33% (+0.11%)
- 2000 points: 96.43% (+0.21%)
- 1000 points: 96.33% (+0.11%)
- 500 points: 96.33% (+0.11%)
- 200 points: 95.07% (-1.15%)
- 100 points: 93.28% (-2.94%)
- 50 points: 78.70% (-17.52%)
10000 Points

10000 Points

1000 Points

1000 Points

100 Points

100 Points

50 Points

50 Points

Segmentation Results

- 1000 points: 85.43% (baseline)
- 500 points: 85.07% (-0.36%)
- 250 points: 83.80% (-1.63%)
- 100 points: 79.67% (-5.76%)
- 50 points: 75.81% (-9.62%)
1000 Points GT

1000 Points - Ground Truth

1000 Points Pred

1000 Points - Prediction (92.9%)

250 Points GT

250 Points - Ground Truth

250 Points Pred

250 Points - Prediction (94.0%)

100 Points GT

100 Points - Ground Truth

100 Points Pred

100 Points - Prediction (82.0%)

50 Points GT

50 Points - Ground Truth

50 Points Pred

50 Points - Prediction (94.0%)

The procedure is run with different `num_points` arguments. The classification model shows remarkable robustness to point reduction, maintaining above 95% accuracy down to 200 points, with a sharp drop only at 50 points. The segmentation model is more sensitive but still maintains reasonable accuracy. Both models perform well even with fairly sparse point clouds.
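The density experiment amounts to randomly subsampling each test cloud before evaluation. A minimal sketch of that step, assuming point clouds are stored as `(N, 3)` numpy arrays (the function name and seed are my own, not the assignment's code):

```python
import numpy as np

def subsample(points, num_points, seed=0):
    """Randomly subsample an (N, 3) point cloud down to num_points
    points, without replacement whenever the cloud is large enough."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    replace = num_points > n              # only resample if asking for more
    idx = rng.choice(n, size=num_points, replace=replace)
    return points[idx]

cloud = np.random.rand(10000, 3)          # stand-in for a test point cloud
sparse = subsample(cloud, 50)             # the sparsest setting tested above
```

Running the unchanged evaluation script on the subsampled clouds then yields the accuracy-vs-density numbers listed above.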

Experiment 2: Rotation Robustness (Z-axis)

Classification Results

- 0°: 96.22% (baseline)
- 15°: 93.70% (-2.52%)
- 30°: 68.00% (-28.22%)
- 45°: 31.79% (-64.43%)
- 60°: 26.13% (-70.09%)
- 90°: 24.97% (-71.25%)
- 120°: 27.39% (-68.83%)
- 180°: 60.76% (-35.46%)
0 Degrees

0° - Correct

30 Degrees

30° - Misclassified

90 Degrees

90° - Misclassified

180 Degrees

180° - Misclassified

Segmentation Results

- 0°: 85.35% (baseline)
- 30°: 68.73% (-16.62%)
- 60°: 50.74% (-34.61%)
- 90°: 45.23% (-40.12%)
- 120°: 34.29% (-51.06%)
- 180°: 35.71% (-49.64%)
0 Degrees GT

0° - Ground Truth

0 Degrees Pred

0° - Prediction (90.0%)

30 Degrees GT

30° - Ground Truth

30 Degrees Pred

30° - Prediction (80.4%)

90 Degrees GT

90° - Ground Truth

90 Degrees Pred

90° - Prediction (48.4%)

180 Degrees GT

180° - Ground Truth

180 Degrees Pred

180° - Prediction (14.9%)

Both models show significant sensitivity to Z-axis rotation. I also rotated around the Y axis, but the results were essentially the same as the baseline, since rotation about the Y axis produces no large geometric change for these objects; I therefore experimented with the Z axis. Classification accuracy degrades rapidly beyond 15°, while segmentation also degrades, just more gradually. This shows that the features both models learn are heavily dependent on object orientation.
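The rotation perturbation itself is a standard Z-axis rotation matrix applied to every point. A minimal sketch, assuming `(N, 3)` point clouds in XYZ order:

```python
import numpy as np

def rotate_z(points, degrees):
    """Rotate an (N, 3) point cloud about the Z axis by the given angle."""
    t = np.deg2rad(degrees)
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

# Sanity check: the unit X vector rotated 90 degrees about Z lands on Y.
p = np.array([[1.0, 0.0, 0.0]])
q = rotate_z(p, 90.0)
```

Because neither model is trained with rotation augmentation, the rotated inputs fall outside the training distribution, which is consistent with the sharp accuracy drops tabulated above.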