16825 Fall 2025 HW5 Report¶

Q1. Classification Model¶

Test Accuracy: 0.9706

Accuracy for each class:

  • Chair(0): 0.9983
  • Vase(1): 0.8912
  • Lamp(2): 0.9316
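
Per-class accuracies like the ones above can be computed from parallel label/prediction lists. A minimal sketch (the function name and class-name mapping are illustrative, not taken from the actual codebase):

```python
from collections import defaultdict

def per_class_accuracy(labels, preds, names):
    """Compute overall accuracy and per-class accuracy.

    labels, preds: parallel lists of integer class ids.
    names: mapping from class id to class name (illustrative).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p in zip(labels, preds):
        total[y] += 1
        correct[y] += int(y == p)
    overall = sum(correct.values()) / len(labels)
    return overall, {names[c]: correct[c] / total[c] for c in total}
```

With the three classes here, `per_class_accuracy(labels, preds, {0: "chair", 1: "vase", 2: "lamp"})` returns both the overall number and the per-class breakdown reported above.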

| Ground Truth | Predicted | PointClouds |
|---|---|---|
| chair | lamp | (image) |
| vase | lamp | (image) |
| lamp | chair | (image) |
| chair | chair | (image) |
| vase | vase | (image) |
| lamp | lamp | (image) |

Interpretation: The model generally predicts the correct category but can still be confused by atypical shapes. In the three failure cases, none of the objects look like a typical chair, vase, or lamp in terms of geometric features. For example, the flowers in the vase seem to act as noise that makes it resemble a lamp.

Q2. Segmentation Model¶

Test Accuracy: 0.8975

| Ground Truth | Predicted | Accuracy |
|---|---|---|
| (image) | (image) | 0.9926 |
| (image) | (image) | 0.9948 |
| (image) | (image) | 0.9928 |
| (image) | (image) | 0.535 |
| (image) | (image) | 0.4506 |

Interpretation: The model segments objects with clear structural features well. Both failure cases have one dominant part with a nearly spherical surface, while the model seems prone to splitting the shape into roughly equal parts.

Q3. Robustness Analysis¶

Random Rotation¶

I apply a 4x4 rotation matrix with a randomly sampled $\theta$ to the test sample points of each batch.
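
A minimal sketch of this augmentation, assuming the rotation is about a single (z) axis since only one sampled $\theta$ is mentioned; the function names are illustrative:

```python
import math
import random

def random_rotation_matrix(theta=None):
    """Build a 4x4 homogeneous rotation matrix about the z-axis.

    Assumption: a single-axis rotation, since the report only
    mentions one randomly sampled theta.
    """
    if theta is None:
        theta = random.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    return [
        [c, -s, 0.0, 0.0],
        [s,  c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

def rotate_points(points, R):
    """Apply R to a list of (x, y, z) points via homogeneous coordinates."""
    out = []
    for x, y, z in points:
        h = (x, y, z, 1.0)
        out.append(tuple(sum(R[i][j] * h[j] for j in range(4)) for i in range(3)))
    return out
```

In practice this would be applied per batch to the test tensor; the sketch uses plain Python lists to keep it framework-free.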

Test Accuracy: 0.3841

Accuracy in each category:

  • Chair(0): 0.2431
  • Vase(1): 0.8039
  • Lamp(2): 0.5726

Interpretation: Random rotation makes the accuracy of the classification model drop drastically. The model may take the positional relationships between the parts of an object into consideration, and random rotation breaks the coordinate frame in which those relationships were learned.

Classification¶

Test Accuracy:

| Original | Rotated | Ground Truth | Predicted Original | Predicted Rotated |
|---|---|---|---|---|
| (image) | (image) | chair | chair | chair |
| (image) | (image) | chair | chair | vase |
| (image) | (image) | vase | vase | vase |
| (image) | (image) | lamp | lamp | chair |
| (image) | (image) | lamp | lamp | chair |

Segmentation¶

Test Accuracy: 0.4622

Interpretation: Random rotation also hurts the segmentation model considerably, though not as severely as classification. The positional relations broken under rotation are likely a problem here as well.

| Original | Rotated | Ground Truth | Accuracy Original | Accuracy Rotated |
|---|---|---|---|---|
| (image) | (image) | (image) | 0.9926 | 0.9069 |
| (image) | (image) | (image) | 0.535 | 0.0221 |
| (image) | (image) | (image) | 0.4506 | 0.4456 |
| (image) | (image) | (image) | 0.9948 | 0.7942 |
| (image) | (image) | (image) | 0.9929 | 0.0544 |

Down Sample Points to 1000¶

I set num_points down to 1000, compared with the 10000 points sampled originally.
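
A sketch of the subsampling step, assuming uniform random sampling without replacement (the report only mentions the num_points setting, so the function name and sampling scheme are assumptions):

```python
import random

def downsample(points, num_points=1000, seed=None):
    """Uniformly subsample a point cloud without replacement.

    Assumption: uniform random sampling; the report only states
    that num_points was reduced from 10000 to 1000.
    """
    rng = random.Random(seed)
    if len(points) <= num_points:
        return list(points)
    return rng.sample(points, num_points)
```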

Classification¶

Test Accuracy: 0.9674

Accuracy in each category:

  • Chair(0): 0.9674
  • Vase(1): 0.8921
  • Lamp(2): 0.9231

Interpretation: The model actually keeps similar performance when sampling only 1/10 of the points.

| Original | 1000 | Ground Truth | Predicted Original | Predicted 1000 |
|---|---|---|---|---|
| (image) | (image) | chair | chair | chair |
| (image) | (image) | chair | chair | chair |
| (image) | (image) | vase | vase | vase |
| (image) | (image) | lamp | lamp | lamp |
| (image) | (image) | lamp | lamp | lamp |

Segmentation¶

Test Accuracy: 0.8932

Interpretation: The model actually keeps similar performance when sampling only 1/10 of the points.

| Original | 1000 | Ground Truth | Accuracy Original | Accuracy 1000 |
|---|---|---|---|---|
| (image) | (image) | (image) | 0.9926 | 0.992 |
| (image) | (image) | (image) | 0.535 | 0.543 |
| (image) | (image) | (image) | 0.4506 | 0.451 |
| (image) | (image) | (image) | 0.9948 | 0.994 |
| (image) | (image) | (image) | 0.9929 | 0.991 |

Q4. Bonus Question - Locality¶

I implement DGCNN for this question.
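
DGCNN's core idea is to build a k-NN graph over the points and feed each EdgeConv layer the concatenation of (x_j - x_i) with x_i for every neighbour j. A minimal brute-force sketch of that edge-feature construction, in plain Python with no deep-learning framework (helper names are illustrative, not from the actual implementation):

```python
def knn(points, k):
    """Brute-force k-nearest neighbours by squared Euclidean distance,
    excluding each point itself."""
    idx = []
    for i, p in enumerate(points):
        order = sorted(range(len(points)),
                       key=lambda j: sum((p[a] - points[j][a]) ** 2
                                         for a in range(3)))
        idx.append([j for j in order if j != i][:k])
    return idx

def edge_features(points, k):
    """EdgeConv input: for each point x_i and each neighbour x_j,
    concatenate (x_j - x_i) with x_i, as in DGCNN."""
    neigh = knn(points, k)
    feats = []
    for i, js in enumerate(neigh):
        xi = points[i]
        feats.append([tuple(points[j][a] - xi[a] for a in range(3)) + xi
                      for j in js])
    return feats
```

In the real model this is done with batched tensor ops and the graph is rebuilt from learned features at each layer; the sketch only shows the geometric first layer.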

Classification¶

Test Accuracy: 0.9727

Accuracy for each class:

  • Chair: 1.0
  • Vase: 0.8039
  • Lamp: 0.9743

For comparison, the original PointNet per-class accuracies:

  • Chair(0): 0.9983
  • Vase(1): 0.8912
  • Lamp(2): 0.9316

Interpretation: Overall, performance increases slightly over the original PointNet, though some categories show a slight drop in accuracy.

| Ground Truth | Predicted PointNet | Predicted DGCNN | PointClouds | Interpretation |
|---|---|---|---|---|
| chair | lamp | chair | (image) | |
| vase | lamp | lamp | (image) | |
| lamp | chair | lamp | (image) | |
| chair | chair | chair | (image) | |
| vase | vase | vase | (image) | |
| lamp | lamp | lamp | (image) | |

Segmentation¶

Test Accuracy: 0.9175

Interpretation: Overall, the model's performance increases slightly over the original PointNet.

| Ground Truth | Predicted | Accuracy PointNet | Accuracy DGCNN |
|---|---|---|---|
| (image) | (image) | 0.9926 | 0.9937 |
| (image) | (image) | 0.9948 | 0.9979 |
| (image) | (image) | 0.9928 | 0.9918 |
| (image) | (image) | 0.535 | 0.5489 |
| (image) | (image) | 0.4506 | 0.4948 |