ASSIGNMENT 5

Andrew ID : kgaddoba

Q1. Classification Model

Test Accuracy : 0.9832

Success :

| | Chair | Vase | Lamp |
|---|---|---|---|
| Visualization | q1_2a | q1_2b | q1_2c |
| True Class | Chair | Vase | Lamp |
| Pred Class | Chair | Vase | Lamp |

Failure :

| | Chair | Vase | Lamp |
|---|---|---|---|
| Visualization | q1_1a | q1_1b | q1_1c |
| True Class | Chair | Vase | Lamp |
| Pred Class | Lamp | Lamp | Chair |

Interpretation :

  1. For the misclassified chair, the model predicted it as a lamp. The chair’s 3D structure in this sample is less distinct than in correctly classified examples, making it harder for the model to recognize typical chair features like the seat and legs.
  2. In the vase misclassification, the model labeled it as a lamp. The point cloud was sparsely sampled due to computational constraints, resulting in missing regions that likely prevented the model from capturing the vase’s full shape.
  3. For the lamp incorrectly predicted as a chair, the object is unusually thin and elongated. Unlike other lamp examples with clear defining features, this shape lacks strong global cues, causing the model to confuse it with the chair class.
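For reference, the test accuracy reported above is the fraction of test objects whose predicted class matches the label. Below is a minimal sketch of that computation, assuming a PyTorch classifier and test dataloader; the names (`model`, `test_loader`) and the (B, N, 3) input layout are assumptions for illustration, not the assignment's actual code.

```python
import torch

def classification_accuracy(model, test_loader, device="cuda"):
    # Fraction of test objects whose argmax class prediction matches the label.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for points, labels in test_loader:        # points: (B, N, 3), labels: (B,)
            points, labels = points.to(device), labels.to(device)
            preds = model(points).argmax(dim=-1)  # assumes (B, num_classes) logits
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```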

Q2. Segmentation Model

Test Accuracy : 0.8986

Success :

| | Chair | Vase | Lamp |
|---|---|---|---|
| Ground Truth | q2_1a | q2_1b | q2_1c |
| Prediction | q2_2a | q2_2b | q2_2c |
| True Class | Chair | Vase | Lamp |
| Pred Class | Chair | Vase | Lamp |
| Accuracy | 0.9417 | 0.9839 | 0.9517 |

Failure :

| | Chair | Vase | Lamp |
|---|---|---|---|
| Ground Truth | q2_3a | q2_3b | q2_3c |
| Prediction | q2_4a | q2_4b | q2_4c |
| True Class | Chair | Vase | Lamp |
| Pred Class | Vase | Lamp | Vase |
| Accuracy | 0.6438 | 0.6700 | 0.5524 |

Interpretation :

  1. In this poor segmentation example, the model struggles at the boundaries between neighboring regions, mislabeling the armrest and side panels. The backrest and seat are partially identified correctly, but noisy, inconsistent boundaries drag down the per-object accuracy (the metric is sketched after this list).
  2. Another low-accuracy case shows the model identifying the major segments but failing to cleanly separate the side and back panels. Labels spill over between adjacent regions, reflecting the difficulty of handling geometrically similar parts and abrupt spatial transitions.
  3. These observations indicate that while the model captures the overall shape and main regions of the chair, fine-grained distinctions between adjacent components remain challenging, reducing accuracy on certain object parts.
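The per-object accuracies in the tables above are point-wise: the fraction of an object's points whose predicted part label matches the ground truth. A minimal sketch of that metric, assuming per-point label tensors; the function name and shapes are illustrative, not the assignment's actual code.

```python
import torch

def per_object_seg_accuracy(pred_labels: torch.Tensor, gt_labels: torch.Tensor) -> float:
    # Point-wise accuracy for one object; both tensors hold (N,) part-label IDs.
    return (pred_labels == gt_labels).float().mean().item()

# e.g. 9,417 of 10,000 correctly labeled points gives the 0.9417 reported for the chair
```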

Q3. Robustness Analysis

EXPERIMENT 1 : ROTATION

a. Procedure :

  1. Applied rotations to all test-set point clouds by transforming them about the X-axis at four angles in radians (0.3, 0.6, 0.9, 2.0); a sketch of the transform follows this list.
  2. Performed inference using pretrained models (best_model.pt for both classification and segmentation) on each rotated version of the dataset.
  3. Evaluated performance consistency by computing test accuracy at each angle and visualizing outputs on six fixed benchmark objects (indices: 562, 397, 308, 434, 490, 413).
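A minimal sketch of step 1, assuming the point clouds are PyTorch tensors; `rotate_x`, `test_points`, and the (…, N, 3) layout are assumptions about how the transform could be applied, not the assignment's exact code.

```python
import math
import torch

def rotate_x(points: torch.Tensor, angle: float) -> torch.Tensor:
    # Rotate an (N, 3) or (B, N, 3) point cloud about the X-axis by `angle` radians.
    c, s = math.cos(angle), math.sin(angle)
    R = torch.tensor([[1.0, 0.0, 0.0],
                      [0.0,   c,  -s],
                      [0.0,   s,   c]],
                     dtype=points.dtype, device=points.device)
    return points @ R.T  # points are row vectors, so multiply by R transposed

# Sweep the four test angles before each evaluation pass:
# for angle in (0.3, 0.6, 0.9, 2.0):
#     rotated_points = rotate_x(test_points, angle)  # test_points: hypothetical (B, N, 3) tensor
```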

b. Interpretation :

  1. Classification performance degrades sharply under rotation, showing that the model relies on orientation-specific global features learned from upright training examples.
  2. Accuracy collapses at higher rotations because the model fails to recognize objects when their canonical structure is altered, confirming that the global feature extraction pipeline is not rotation-invariant.
  3. Segmentation degrades more gracefully at moderate rotations (0.6037 vs. 0.3462 at 0.9 rad), since its per-point predictions rely on local geometric patterns rather than global orientation, though it too collapses at 2.0 rad (0.2654).

c. Classification Results :

| Rotation (rad) | Accuracy | Object 1 | Object 2 | Object 3 | Object 4 |
|---|---|---|---|---|---|
| 0.0 (no rotation) | 0.9832 | q3_1_1a | q3_1_1e | q3_1_1c | q3_1_1d |
| 0.3 | 0.9296 | q3_1_2a | q3_1_2e | q3_1_2c | q3_1_2d |
| 0.6 | 0.6804 | q3_1_3a | q3_1_3e | q3_1_3c | q3_1_3d |
| 0.9 | 0.3462 | q3_1_4a | q3_1_4e | q3_1_4c | q3_1_4d |
| 2.0 | 0.3168 | q3_1_5a | q3_1_5e | q3_1_5c | q3_1_5d |

d. Segmentation Results :

| Rotation (rad) | | Accuracy | Object 1 | Object 2 | Object 3 | Object 4 |
|---|---|---|---|---|---|---|
| 0.0 | Ground Truth | | q3_2_1a | q3_2_1d | q3_2_1b | q3_2_1c |
| 0.0 | Prediction | 0.8986 | q3_2_2a | q3_2_2d | q3_2_2b | q3_2_2c |
| 0.3 | Ground Truth | | q3_2_3a | q3_2_3d | q3_2_3b | q3_2_3c |
| 0.3 | Prediction | 0.8133 | q3_2_4a | q3_2_4d | q3_2_4b | q3_2_4c |
| 0.6 | Ground Truth | | q3_2_5a | q3_2_5d | q3_2_5b | q3_2_5c |
| 0.6 | Prediction | 0.7133 | q3_2_6a | q3_2_6d | q3_2_6b | q3_2_6c |
| 0.9 | Ground Truth | | q3_2_7a | q3_2_7d | q3_2_7b | q3_2_7c |
| 0.9 | Prediction | 0.6037 | q3_2_8a | q3_2_8d | q3_2_8b | q3_2_8c |
| 2.0 | Ground Truth | | q3_2_9a | q3_2_9d | q3_2_9b | q3_2_9c |
| 2.0 | Prediction | 0.2654 | q3_2_10a | q3_2_10d | q3_2_10b | q3_2_10c |

EXPERIMENT 2 : NUMBER OF POINTS

a. Procedure :

  1. Generated multiple point-density levels by randomly subsampling each object's 10,000-point cloud down to 500, 1,000, 2,000, and 5,000 points; a sketch of the subsampling follows this list.
  2. Ran inference with the pretrained models on each density level.
  3. Measured accuracy at each sparsity level and visualized predictions on the same six reference objects (indices: 562, 397, 308, 434, 490, 413).
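A minimal sketch of the subsampling in step 1, assuming each cloud is an (N, 3) tensor; sampling is without replacement, so no point is duplicated. The names (`subsample`, `cloud`) are illustrative.

```python
import torch

def subsample(points: torch.Tensor, num_points: int) -> torch.Tensor:
    # Keep a random subset of `num_points` rows from an (N, 3) point cloud.
    idx = torch.randperm(points.shape[0], device=points.device)[:num_points]
    return points[idx]

# densities = (5000, 2000, 1000, 500)
# sparse_cloud = subsample(cloud, 500)  # down from the original 10,000 points
```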

b. Interpretation :

  1. For classification, the model maintains strong accuracy even as input point density is reduced (0.9832 at 10,000 points vs. 0.9737 at 500), indicating it effectively captures the overall geometric structure of objects. The network can extract reliable global features even from sparse point clouds.
  2. The segmentation model is similarly robust (0.8986 vs. 0.8859), since the max-pooled global feature changes little under subsampling, as the toy example after this list illustrates. Overall, both visual and quantitative results show that each model handles sparse inputs far better than rotated point clouds.
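A toy illustration of why max-pooling tolerates subsampling, using random features as a stand-in for the network's per-point embeddings (this is not the assignment's model): with thousands of points per channel, most channel-wise maxima survive a random subset, so the pooled global descriptor barely moves.

```python
import torch

torch.manual_seed(0)
feats = torch.randn(10000, 1024)        # stand-in per-point features (N, C)
full = feats.max(dim=0).values          # global feature pooled over all points

idx = torch.randperm(10000)[:500]       # keep only 500 random points
sparse = feats[idx].max(dim=0).values   # pooled feature from the sparse cloud

# The two descriptors stay highly aligned (cosine similarity close to 1).
print(torch.cosine_similarity(full, sparse, dim=0))
```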

c. Classification Results :

| Points | Accuracy | Object 1 | Object 2 | Object 3 | Object 4 |
|---|---|---|---|---|---|
| 10000 | 0.9832 | q3_3_1a | q3_3_1d | q3_3_1b | q3_3_1c |
| 5000 | 0.9811 | q3_3_2a | q3_3_2d | q3_3_2b | q3_3_2c |
| 2000 | 0.9800 | q3_3_3a | q3_3_3d | q3_3_3b | q3_3_3c |
| 1000 | 0.9701 | q3_3_4a | q3_3_4d | q3_3_4b | q3_3_4c |
| 500 | 0.9737 | q3_3_5a | q3_3_5d | q3_3_5b | q3_3_5c |

d. Segmentation Results :

| Points | | Accuracy | Object 1 | Object 2 | Object 3 | Object 4 |
|---|---|---|---|---|---|---|
| 10000 | Ground Truth | | q3_4_1a | q3_4_1d | q3_4_1b | q3_4_1c |
| 10000 | Prediction | 0.8986 | q3_4_2a | q3_4_2d | q3_4_2b | q3_4_2c |
| 5000 | Ground Truth | | q3_4_3a | q3_4_3d | q3_4_3b | q3_4_3c |
| 5000 | Prediction | 0.8993 | q3_4_4a | q3_4_4d | q3_4_4b | q3_4_4c |
| 2000 | Ground Truth | | q3_4_5a | q3_4_5d | q3_4_5b | q3_4_5c |
| 2000 | Prediction | 0.8992 | q3_4_6a | q3_4_6d | q3_4_6b | q3_4_6c |
| 1000 | Ground Truth | | q3_4_7a | q3_4_7d | q3_4_7b | q3_4_7c |
| 1000 | Prediction | 0.8962 | q3_4_8a | q3_4_8d | q3_4_8b | q3_4_8c |
| 500 | Ground Truth | | q3_4_9a | q3_4_9d | q3_4_9b | q3_4_9c |
| 500 | Prediction | 0.8859 | q3_4_10a | q3_4_10d | q3_4_10b | q3_4_10c |