...
Our first focus for testing will be TargetAcquisition, and we’ll start by testing the predict functionality. The rest of this document will show an example test plan for TargetAcquisition.
The First Test
Goal: Assert that model training is sufficient, and that the predict method functions as expected and interfaces correctly with the underlying pre-built model. We define correctness as: each image has the correct number of bounding boxes, the bounding boxes are in the correct positions, and the predict function does not crash.
Assertion: Given an image I, if I has pylons in it, assert that the result of the predict function on I has a non-zero number of bounding boxes; otherwise, assert that it has none. Manually verify the positions of the bounding boxes.
Expected Data: The test developer will manually label each image file as having or not having bounding boxes.
Suggested Cases: Aim for 5-8 images containing pylons at various positions in the frame, in various clusters, and at various altitudes. Add 2-3 pylon-less images from our flight, photos with just one pylon visible, photos taken at a very low altitude, or other unique edge cases as you wish.
Test Procedure: I suggest you use PyTest as the harness.
...
Load the test images
...
In test logic, instantiate TargetAcquisition
...
For each test image, set the current frame to the test image, run the predict method, and assert that it has bounding boxes if and only if it is supposed to. Save the result of running predict on each image to a test results directory.
...
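The steps above can be sketched as a PyTest case. TargetAcquisition's actual interface is an assumption here (`set_curr_frame()` / `predict()` are hypothetical method names); a stub stands in for the real class so the harness structure is runnable on its own, and synthesized frames stand in for the manually labelled test images.

```python
import numpy as np

class StubTargetAcquisition:
    """Stand-in for the real TargetAcquisition (interface assumed)."""
    def __init__(self):
        self.frame = None

    def set_curr_frame(self, frame):
        self.frame = frame

    def predict(self):
        # The real method runs the YOLOv5 model; the stub "detects"
        # a pylon whenever the frame contains a bright region.
        if self.frame is None or self.frame.max() < 128:
            return []
        return [[(10, 10), (50, 50)]]  # one bounding box

# filename -> manually labelled expectation (does the image have pylons?)
EXPECTED = {"pylons.jpg": True, "empty.jpg": False}

def load_test_image(name):
    # In the real test this would be cv2.imread from the test folder;
    # frames are synthesized here so the sketch is self-contained.
    bright = 255 if name == "pylons.jpg" else 0
    return np.full((64, 64, 3), bright, dtype=np.uint8)

def test_predict_bounding_boxes():
    acquirer = StubTargetAcquisition()
    for name, has_pylons in EXPECTED.items():
        acquirer.set_curr_frame(load_test_image(name))
        boxes = acquirer.predict()
        assert (len(boxes) > 0) == has_pylons
```

In the real harness, each annotated prediction would also be written to the test results directory (e.g. with `cv2.imwrite`) so the bounding box positions can be checked manually.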
Tests:
1-module multi-unit tests: DecklinkSrc
Test Statement: Given a frame F dumped into OBS, run the get_frame method in DecklinkSrc, denoting the return value as F_cap, and assert F == F_cap (with a tolerance of, say, 10% of all pixels failing, or comparing each R/G/B value and ensuring it is within 10%).
Conditions: OBS running, DecklinkSrc running, test image in the test folder
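One way to implement the "10% of pixels may fail" tolerance described above. `frames_match` is a hypothetical helper, not an existing function in the codebase; a pixel is counted as failed if any R/G/B channel deviates by more than 10% of the 0-255 range.

```python
import numpy as np

def frames_match(f, f_cap, pixel_tol=0.10, channel_tol=0.10):
    """Return True if F_cap is close enough to F.

    A pixel "fails" if any R/G/B channel deviates by more than
    channel_tol (as a fraction of the 0-255 range); the frames
    match if at most pixel_tol of all pixels fail.
    """
    # int16 avoids uint8 wraparound when subtracting
    diff = np.abs(f.astype(np.int16) - f_cap.astype(np.int16))
    failed = (diff > channel_tol * 255).any(axis=-1)
    return failed.mean() <= pixel_tol

f = np.zeros((4, 4, 3), dtype=np.uint8)
noisy = f.copy()
noisy[0, 0] = 200          # 1 of 16 pixels corrupted (6.25%) -> passes
assert frames_match(f, noisy)
noisy[:2, :] = 200         # 8 of 16 pixels corrupted (50%) -> fails
assert not frames_match(f, noisy)
```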
multi-module tests: DecklinkSrc + targetAcquisition
Test Statement: Given a frame F, run the YOLOv5 model on F outside the CV system to get the expected bounding boxes B_expected. Set up a video stream S that OpenCV captures from in DecklinkSrc, where every frame of S equals F (S_i = F for i = 1, 2, 3, ..., n). Run the DecklinkSrc and TargetAcquisition worker functions, with the output of the worker process being B_actual; each bounding box is an array of the form [[x_1, y_1], [x_2, y_2]]. For each corner [x_i, y_i], i = 1, 2, assert that B_actual's x_i and y_i are within 5% of B_expected's.
TLDR: The stream is an array of copies of the same frame; we are verifying that the result for the streamed frame approximately matches what the model produces on that frame outside the system.
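The 5% corner-tolerance check could look like the sketch below. `boxes_close` is a hypothetical helper, and it assumes corner coordinates are in pixels, with the tolerance measured as a fraction of the frame dimensions.

```python
def boxes_close(b_actual, b_expected, frame_w, frame_h, tol=0.05):
    """Each corner [x_i, y_i] of b_actual must be within tol
    (as a fraction of the frame width/height) of the matching
    corner in b_expected."""
    for (xa, ya), (xe, ye) in zip(b_actual, b_expected):
        if abs(xa - xe) > tol * frame_w or abs(ya - ye) > tol * frame_h:
            return False
    return True

# A box is [[x_1, y_1], [x_2, y_2]], as in the test statement.
b_expected = [[100, 120], [180, 240]]
b_actual = [[103, 118], [185, 236]]   # within 5% of a 640x480 frame
assert boxes_close(b_actual, b_expected, frame_w=640, frame_h=480)
assert not boxes_close([[200, 120], [180, 240]], b_expected, 640, 480)
```

This compares corners pairwise, so it assumes B_actual and B_expected list boxes in the same order; a real test with multiple detections would likely need to match boxes first (e.g. by IoU) before comparing corners.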