The following are the images from each stage, along with the relevant information:


Data Collection Procedure:

  1. First, we put the image we are working with in the same directory as the program.
  2. Next, we call img = threshold(filepath); to get the thresholded image.
  3. Then, we call [segImg, count] = segregator(img); to get a segregated image and the number of segments found, so that statistics can handle them.
  4. After that, we call [centers, cov, nump] = statistics(segImg, count); this filters out pixels and prints the center, number of pixels, and covariance for each segment.
  5. Lastly, we call tennisTriangulate(centers, x); where x is half the wood length in inches. This provides the distance from the camera in feet.
This was done for each image and recorded above; a minimal sketch of one pass through the pipeline is shown below.
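As a sketch only, the driver script below strings the four stages together for a single image, assuming the call signatures listed in the steps above. The file name, the half-wood-length value, and capturing the output of tennisTriangulate in a variable are placeholders/assumptions, not values from the experiment.

    % Sketch of one pass through the pipeline for a single image.
    filepath = 'target_08ft.jpg';        % placeholder: image sits in the same directory as the program
    x = 12;                              % placeholder: half the wood length, in inches

    img = threshold(filepath);                           % step 2: threshold the image
    [segImg, count] = segregator(img);                   % step 3: segregate and count segments
    [centers, cov, nump] = statistics(segImg, count);    % step 4: filter pixels, print centers/counts/covariances
                                                          % (note: cov shadows MATLAB's built-in cov within this script)
    distFeet = tennisTriangulate(centers, x);             % step 5: distance from the camera, in feet (assumed return value)

    fprintf('Estimated distance: %.2f ft\n', distFeet);

Looping a script like this over the set of image files would reproduce the measurements recorded above.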


The graph of the error:


Analysis

We can see that as the distance increases, the amount of error increases. This is most likely due to pixel density: when we used a higher-resolution camera, we got better results at farther distances because we simply had more information and could use tighter thresholds to locate the center of each tennis ball more accurately. At the lower resolution and the increased distance, the shade of the balls becomes less distinct, so the entire ball is not captured perfectly. The center of the ball is therefore misjudged, and since the center is what is most significantly used to calculate the distance, the error in the distance grows. The smoothing function used in the first step of the pipeline also has a larger effect on smaller images, since it dilates and erodes them proportionally much more than the larger images. Another factor that contributes to error is the yellow tape, which has a hue similar to the tennis balls; when it is near a ball, its pixels may be added to the mass of the ball, skewing the distance metric. Also, as we moved farther away, the intensity of each tennis-ball pixel decreased, which may have affected the hue and saturation values we used to catch those pixels.
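For illustration only, the sketch below shows the general kind of hue/saturation threshold and fixed-size dilate/erode smoothing being discussed. It is not the report's actual threshold function; the HSV bounds, file name, and structuring-element radius are invented for the example.

    % Illustrative HSV threshold with fixed-size morphological smoothing.
    rgb = imread('example.jpg');                 % placeholder file name
    hsvImg = rgb2hsv(rgb);
    H = hsvImg(:,:,1);  S = hsvImg(:,:,2);  V = hsvImg(:,:,3);

    % Keep pixels in a yellow-green band with enough saturation and brightness
    % (bounds are made-up values, not the report's calibrated thresholds).
    mask = (H > 0.10 & H < 0.20) & (S > 0.35) & (V > 0.30);

    % A fixed structuring element smooths noise, but it reshapes a ball that is
    % only a few pixels across far more, proportionally, than a large nearby ball.
    se = strel('disk', 3);
    mask = imerode(imdilate(mask, se), se);      % dilate then erode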

Furthermore, the tilt of the camera also has an effect on the resulting error. A skewed angle makes the balls appear farther away than they actually are, because they appear to be closer together than they actually are. A possible fix for this issue is to use the slight change in y values to detect skews and to use the orientation of all four balls for triangulation. When we took pictures of the large target at 8 feet at slightly different angles, we got an error of ±0.131 inches, which is relatively high. The skew is visible in the picture of the large target at a distance of 16 feet.
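A rough sketch of the proposed y-value check follows. The ordering of the rows in centers (which ball is which) and the tolerance are assumptions made for illustration, not part of the current pipeline.

    % Hedged sketch of a skew check: when the target is rotated away from the
    % camera, the side that is farther away shows a smaller vertical gap between
    % its two balls. Rows of 'centers' are assumed here to be ordered
    % [top-left; top-right; bottom-left; bottom-right] as [row col] pairs.
    leftGap  = abs(centers(1,1) - centers(3,1));   % vertical gap on the left side
    rightGap = abs(centers(2,1) - centers(4,1));   % vertical gap on the right side

    skewRatio = rightGap / leftGap;                % ~1 when the target faces the camera squarely
    if abs(skewRatio - 1) > 0.03                   % arbitrary tolerance
        warning('Possible skew: right/left gap ratio is %.3f; distance may read long.', skewRatio);
    end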

Moreover, we see that the smaller target was more accurate at closer distances. This could be because the constant used in the last stage of the pipeline was calibrated using the small target at 2 and 4 feet. The small difference in hue between the tennis balls on the small and large targets could also have contributed to the difference in errors, since perhaps not all of the tennis-ball pixels were filtered in correctly. Another cause of error is the focus of the camera: if the camera was not focused exactly on the tennis balls used for measuring distance, blurring of pixels could lead to inaccurate centers for the balls. Experimentation showed that similar shots at the same distance had an error of around 0.126. This error may be due to angle or focus, but it is more likely due to focus, as the angle was held as constant as possible. In addition, we can see from the threshold image for the large target at a distance of 2 feet that there is a dark smudge on one of the tennis balls. The smudge did not meet the threshold value (modifying the threshold here would have caused some wood and more tape to be counted), and so the ball's center was skewed.



The source code used in the pipeline: