Assignment 5

Q1. Classification Model (40 points)

Train Accuracy: 0.9780
Test Accuracy: 0.9811

| Point Cloud | Ground Truth Label | Predicted Label |
| --- | --- | --- |
| q1 | 0 (chair) | 0 (chair) |
| q1 | 1 (vase) | 1 (vase) |
| q1 | 2 (lamp) | 2 (lamp) |
| q1 | 0 (chair) | 2 (lamp) |
| q1 | 1 (vase) | 2 (lamp) |
| q1 | 2 (lamp) | 0 (chair) |

Comments: Overall, classification accuracy is very high, and the model does not seem to overfit to the training data. The few misclassified cases tend to be atypical shapes. For example, the chair misclassified as a lamp is very flat, probably a type of chair that was underrepresented in training. Similarly, the vase misclassified as a lamp has a lid and is short compared to the taller, slender vases in the training data, and the lamp classified as a chair is lying sideways, while most lamps in the data are vertical.

Q2. Segmentation Model (40 points)

Train Accuracy: 0.8846
Test Accuracy: 0.8958

| Ground Truth Segmentation | Predicted Segmentation | Prediction Accuracy |
| --- | --- | --- |
| q1 | q1 | 0.9281 |
| q1 | q1 | 0.9567 |
| q1 | q1 | 0.7998 |
| q1 | q1 | 0.9477 |
| q1 | q1 | 0.8442 |
| q1 | q1 | 0.9579 |
| q1 | q1 | 0.9702 |

Comments: We see overall high accuracy in segmentation prediction, and the examples with lower accuracy tend to be objects with atypical shapes.
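The per-point prediction accuracy reported in the table above can be computed as the fraction of points whose predicted part label matches the ground truth. A minimal sketch (the function name `seg_accuracy` is my own, not from the assignment code):

```python
import numpy as np

def seg_accuracy(pred_logits, labels):
    # pred_logits: (N, num_parts) per-point scores; labels: (N,) ground-truth part ids
    pred = pred_logits.argmax(axis=-1)          # predicted part id per point
    return float((pred == labels).mean())       # fraction of correct points

# toy example: 3 of the 4 points are predicted correctly
logits = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 0.0], [0.1, 0.9]])
labels = np.array([0, 1, 0, 0])
print(seg_accuracy(logits, labels))  # 0.75
```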

Q3. Robustness Analysis (20 points)

  1. Reducing the number of points in the point cloud given to the model.
    I first looked at n = 5000 points. Because the model expects 10000 points per input point cloud, the remaining 5000 points are filled in by randomly resampling from the 5000 kept points, i.e. randomly selected duplicated points.
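The subsample-then-pad strategy described above can be sketched as follows (a minimal version; the function name `subsample_with_duplicates` and the exact sampling calls are my own, not taken from the assignment code):

```python
import numpy as np

def subsample_with_duplicates(points, n_keep, n_total=10000):
    # points: (N, 3) point cloud. Keep n_keep random points, then pad back
    # to n_total points by resampling (with replacement) from the kept set,
    # so the model still receives an n_total-point input.
    kept = points[np.random.choice(len(points), n_keep, replace=False)]
    pad = kept[np.random.choice(n_keep, n_total - n_keep, replace=True)]
    return np.concatenate([kept, pad], axis=0)

cloud = np.random.rand(10000, 3)
reduced = subsample_with_duplicates(cloud, n_keep=5000)
print(reduced.shape)  # (10000, 3), but only 5000 distinct points
```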

For this test, the accuracy barely changed for either the classification or the segmentation task. Below are four visualizations, a subset of the examples shown above:

Classification
Test accuracy: 0.9779

| Point Cloud | Ground Truth Label | Predicted Label |
| --- | --- | --- |
| q1 | 0 | 0 |
| q1 | 2 | 2 |
| q1 | 0 | 2 |
| q1 | 1 | 2 |

Segmentation
Test accuracy: 0.8955

| Ground Truth Segmentation | Predicted Segmentation | Prediction Accuracy |
| --- | --- | --- |
| q1 | q1 | 0.9255 |
| q1 | q1 | 0.9505 |
| q1 | q1 | 0.7981 |
| q1 | q1 | 0.9690 |

However, I then reduced the number of points much more drastically, to 100, using the same strategy to pad the input back to 10000 points. These results show a more evident drop in accuracy:

Classification
Test accuracy: 0.8940

| Point Cloud | Ground Truth Label | Predicted Label |
| --- | --- | --- |
| q1 | 0 | 0 |
| q1 | 2 | 2 |
| q1 | 0 | 2 |
| q1 | 1 | 2 |

Segmentation
Test accuracy: 0.7942

| Ground Truth Segmentation | Predicted Segmentation | Prediction Accuracy |
| --- | --- | --- |
| q1 | q1 | 0.8948 |
| q1 | q1 | 0.7627 |
| q1 | q1 | 0.4196 |
| q1 | q1 | 0.7372 |

From this, we learn that halving the number of points has little effect on the model, making it quite robust to moderate subsampling, but reducing the number of points by 100x causes a much larger drop in accuracy. That said, classification still performs much better than segmentation, suggesting that classification relies more on coarse structure than on higher-resolution detail.

  2. Rotating the point cloud by 90 degrees.
    I rotated each input point cloud by 90 degrees by multiplying it by a rotation matrix. This rotation causes a much larger drop in accuracy for both segmentation and classification, with classification dropping more, which indicates that the model is not really learning view-independent geometry and shape but rather the absolute positions of points. Below are the same examples with rotation:
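The rotation step can be sketched as below. The report does not state which axis was used, so this assumes a 90-degree rotation about the z-axis; the function name `rotate_z_90` is my own:

```python
import numpy as np

def rotate_z_90(points):
    # points: (N, 3) point cloud; rotate 90 degrees about the z-axis
    theta = np.pi / 2
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    # row-vector convention: each point p maps to p @ R.T (= R @ p)
    return points @ R.T

p = np.array([[1.0, 0.0, 0.0]])
print(rotate_z_90(p))  # the x-axis unit vector maps to (0, 1, 0)
```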

Classification
Test accuracy: 0.5121

| Point Cloud | Ground Truth Label | Predicted Label |
| --- | --- | --- |
| q1 | 0 | 1 |
| q1 | 2 | 2 |
| q1 | 0 | 2 |
| q1 | 1 | 1 |

Segmentation
Test accuracy: 0.5910

| Ground Truth Segmentation | Predicted Segmentation | Prediction Accuracy |
| --- | --- | --- |
| q1 | q1 | 0.7881 |
| q1 | q1 | 0.8021 |
| q1 | q1 | 0.2755 |
| q1 | q1 | 0.6784 |