Test Accuracy: 0.9706
Accuracy for each class:
| Ground Truth | Predicted | PointClouds |
|---|---|---|
| chair | lamp | ![]() |
| vase | lamp | ![]() |
| lamp | chair | ![]() |
| chair | chair | ![]() |
| vase | vase | ![]() |
| lamp | lamp | ![]() |
Interpretation: In general, the model predicts the correct category but can still be confused by atypical shapes. In all three failure cases, the objects do not look like a typical chair, vase, or lamp in terms of geometric features. For example, the flowers in the vase seem to act as noise, which may make it resemble a lamp.
Test Accuracy: 0.8975
| Ground Truth | Predicted | Accuracy |
|---|---|---|
| ![]() | ![]() | 0.9926 |
| ![]() | ![]() | 0.9948 |
| ![]() | ![]() | 0.9928 |
| ![]() | ![]() | 0.535 |
| ![]() | ![]() | 0.4506 |
Interpretation: The model can segment objects with clear structural features. Both failure cases have one dominant part with a nearly spherical surface, and the model seems prone to splitting such a part into roughly equal segments.
I apply a random 4x4 rotation matrix with a randomly sampled $\theta$ to the test sample points in each batch.
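The rotation described above can be sketched as follows. This is a minimal illustration, not the actual code from this assignment: it builds a 3x3 rotation about the z-axis (the report's 4x4 matrix is presumably the same rotation in homogeneous coordinates) with a randomly sampled $\theta$, and applies it to a batch of point clouds.

```python
import torch

def random_rotate(points):
    """Rotate a batch of point clouds by a random angle about the z-axis.

    points: (B, N, 3) tensor. Rotations about other axes are analogous.
    This is an illustrative sketch, not the assignment's exact transform.
    """
    theta = torch.rand(1) * 2 * torch.pi          # random angle in [0, 2*pi)
    c, s = torch.cos(theta), torch.sin(theta)
    one, zero = torch.ones(1), torch.zeros(1)
    # Standard 3x3 rotation matrix about the z-axis
    R = torch.stack([
        torch.cat([c, -s, zero]),
        torch.cat([s,  c, zero]),
        torch.cat([zero, zero, one]),
    ])
    return points @ R.T
```

Since rotation is an isometry, each point's distance from the origin is preserved, which makes the transform easy to sanity-check.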
Test Accuracy: 0.3841
Accuracy in each category:
Interpretation: Random rotation makes the classification accuracy drop drastically. The model may rely on how an object's parts are arranged in the canonical training orientation, and random rotation breaks this alignment.
Test Accuracy:
| Original | Rotated | Ground Truth | Predicted Original | Predicted Rotated |
|---|---|---|---|---|
| ![]() | ![]() | chair | chair | chair |
| ![]() | ![]() | chair | chair | vase |
| ![]() | ![]() | vase | vase | vase |
| ![]() | ![]() | lamp | lamp | chair |
| ![]() | ![]() | lamp | lamp | chair |
Test Accuracy: 0.4622
Interpretation: Random rotation also hurts the segmentation model's performance considerably, though not as severely as in classification. I think the positional relations broken under rotation are a problem here as well.
| Original | Rotated | Ground Truth | Accuracy Original | Accuracy Rotated |
|---|---|---|---|---|
| ![]() | ![]() | ![]() | 0.9926 | 0.9069 |
| ![]() | ![]() | ![]() | 0.535 | 0.0221 |
| ![]() | ![]() | ![]() | 0.4506 | 0.4456 |
| ![]() | ![]() | ![]() | 0.9948 | 0.7942 |
| ![]() | ![]() | ![]() | 0.9929 | 0.0544 |
I reduce num_points to 1000, compared to the original 10000 sampled points.
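The subsampling step can be sketched as below. This is an illustrative helper (the function name is mine, not from the assignment code) that randomly keeps `num_points` of the original points in each cloud:

```python
import torch

def subsample(points, num_points=1000):
    """Randomly keep num_points of the N input points per cloud.

    points: (B, N, 3) tensor with N >= num_points.
    The same random subset is used across the batch here; per-cloud
    sampling would draw a fresh permutation for each example.
    """
    idx = torch.randperm(points.shape[1])[:num_points]
    return points[:, idx, :]
```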
Test Accuracy: 0.9674
Accuracy in each category:
Interpretation: The model actually keeps a similar performance when sampling only 1/10 of the points.
| Original | 1000 | Ground Truth | Predicted Original | Predicted 1000 |
|---|---|---|---|---|
| ![]() | ![]() | chair | chair | chair |
| ![]() | ![]() | chair | chair | chair |
| ![]() | ![]() | vase | vase | vase |
| ![]() | ![]() | lamp | lamp | lamp |
| ![]() | ![]() | lamp | lamp | lamp |
Test Accuracy: 0.8932
Interpretation: The model actually keeps a similar performance when sampling only 1/10 of the points.
| Original | 1000 | Ground Truth | Accuracy Original | Accuracy 1000 |
|---|---|---|---|---|
| ![]() | ![]() | ![]() | 0.9926 | 0.992 |
| ![]() | ![]() | ![]() | 0.535 | 0.543 |
| ![]() | ![]() | ![]() | 0.4506 | 0.451 |
| ![]() | ![]() | ![]() | 0.9948 | 0.994 |
| ![]() | ![]() | ![]() | 0.9929 | 0.991 |
I implement DGCNN for this question.
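The core idea DGCNN adds over PointNet is EdgeConv: for each point, find its k nearest neighbors in feature space and build edge features from the point and its neighbor offsets, which are then fed through a shared MLP. A minimal sketch of that graph-feature construction (function names are illustrative, not the actual implementation):

```python
import torch

def knn(x, k):
    """x: (B, N, C). Return indices (B, N, k) of the k nearest neighbors."""
    dist = torch.cdist(x, x)                                  # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop self (distance 0)

def edge_features(x, k=20):
    """DGCNN edge feature: concat(x_i, x_j - x_i) over the k-NN graph.

    x: (B, N, C) -> (B, N, k, 2C); in DGCNN this goes through a shared MLP
    followed by a max over the k neighbors.
    """
    B, N, C = x.shape
    idx = knn(x, k)                                           # (B, N, k)
    neighbors = torch.gather(
        x.unsqueeze(1).expand(B, N, N, C), 2,
        idx.unsqueeze(-1).expand(B, N, k, C))                 # (B, N, k, C)
    center = x.unsqueeze(2).expand(B, N, k, C)
    return torch.cat([center, neighbors - center], dim=-1)
```

Because the k-NN graph is recomputed on each layer's features rather than fixed on the input coordinates, the graph is "dynamic," which is where the architecture gets its name.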
Test Accuracy: 0.9727
Accuracy for each class:
Original:
Interpretation: Generally, the model's performance increases slightly over the original PointNet, although a few categories show a slight drop in accuracy.
| Ground Truth | Predicted PointNet | Predicted DGCNN | PointClouds | Interpretation |
|---|---|---|---|---|
| chair | lamp | chair | ![]() | |
| vase | lamp | lamp | ![]() | |
| lamp | chair | lamp | ![]() | |
| chair | chair | chair | ![]() | |
| vase | vase | vase | ![]() | |
| lamp | lamp | lamp | ![]() | |
Test Accuracy: 0.9175
Interpretation: Generally, the model's performance increases slightly over the original PointNet.
| Ground Truth | Predicted | Accuracy Pointnet | Accuracy DGCNN |
|---|---|---|---|
| ![]() | ![]() | 0.9926 | 0.9937 |
| ![]() | ![]() | 0.9948 | 0.9979 |
| ![]() | ![]() | 0.9928 | 0.9918 |
| ![]() | ![]() | 0.535 | 0.5489 |
| ![]() | ![]() | 0.4506 | 0.4948 |