
Goal: Develop an overall approach to integration testing for the current computer vision system architecture.

Architecture Overview:

The current CV system architecture revolves around the following abstractions:

  • Programs: Orchestrate the use of modules to perform a specific task

  • Modules: Contain processing logic for specific tasks that can be reused between programs

    • Modules are run using worker functions

  • Pipelines: Python queues that are passed into worker functions; worker functions take data from pipelines and put processed data back into them (see the sketch after this list)
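
A minimal sketch of how these abstractions might fit together, assuming a hypothetical ExampleModule and example_worker, a sentinel-based shutdown, and a threading setup (none of these names are the actual interfaces):

    import queue
    import threading

    class ExampleModule:
        # Hypothetical module: reusable processing logic for one task
        def process(self, frame):
            return frame  # placeholder for real CV logic

    def example_worker(in_pipeline, out_pipeline):
        # Worker function: pull data from the input pipeline, run the module,
        # and put results onto the output pipeline
        module = ExampleModule()
        while True:
            frame = in_pipeline.get()
            if frame is None:  # sentinel value used here to stop the worker
                break
            out_pipeline.put(module.process(frame))

    # A program orchestrates modules by wiring pipelines (queues) into workers
    if __name__ == "__main__":
        frames = queue.Queue()
        results = queue.Queue()
        worker = threading.Thread(target=example_worker, args=(frames, results))
        worker.start()
        frames.put("frame 1")
        frames.put(None)
        worker.join()
        print(results.get())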

How can we test this?

The primary testing target will be module logic. Integration testing will cover logic in a module that spans multiple units (functions), a module's worker function, or the integration between two modules. The approach will be loosely bottom-up: assuming the functions that can be unit tested have already been unit tested, we build up from higher-level module logic to integration between multiple modules.

Our first focus for testing will be TargetAcquisition, and we’ll start by testing the predict functionality. The rest of this document will show an example test plan for TargetAcquisition.

The First Test

Goal: Assert that model training is sufficient, and that the predict method functions as expected and interfaces correctly with the underlying pre-built model. We define correctness as: an image has the correct number of bounding boxes, the bounding boxes are in the correct positions, and the predict function does not crash.

Assertion: Given an image I, if I has pylons in it, assert that the result of the predict function on I has a non-zero number of bounding boxes; otherwise, assert that it has no bounding boxes. The positions of the bounding boxes are checked manually.

Expected Data: In this case the test developer will manually label each image file as having or not having bounding boxes.
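
One lightweight way to record those manual labels is a small manifest the tests can load; the file name expected_pylons.json and its shape below are assumptions for illustration, not an existing convention:

    # expected_pylons.json (hypothetical), written by the test developer:
    # {
    #     "pylons_cluster_low_altitude.jpg": true,
    #     "single_pylon_far.jpg": true,
    #     "no_pylons_flight_frame.jpg": false
    # }

    import json
    from pathlib import Path

    def load_expected_labels(manifest_path):
        # Return {image_filename: has_pylons} for use in the test harness
        return json.loads(Path(manifest_path).read_text())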

Suggested Cases: Aim for 5-8 images containing pylons at various positions in the frame, in various cluster sizes, and at various altitudes. Add 2-3 pylon-less images from our flight, photos with just one pylon visible, images taken at a very low altitude, or other unique edge cases as you see fit.

Test Procedure: I suggest you use PyTest as the harness. A sketch of such a test follows the steps below.

  • Load the test images

  • In test logic, instantiate TargetAcquisition

  • For each test image, set the current frame to the test image, run the predict method, and assert that it returns bounding boxes if and only if the image is expected to have them. Save the result of running predict on each image to a test results directory

  • Manually check the results to ensure that the boxes are in the correct positions.
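
A possible shape for that harness (the import path, the set_curr_frame and predict method names, and the return format of predict are assumptions to be adjusted to the real TargetAcquisition interface):

    import json
    from pathlib import Path

    import cv2
    import pytest

    from target_acquisition import TargetAcquisition  # assumed import path

    TEST_IMAGE_DIR = Path("tests/images")
    RESULTS_DIR = Path("tests/results")
    # Manual labels described under Expected Data: {image_filename: has_pylons}
    EXPECTED = json.loads((TEST_IMAGE_DIR / "expected_pylons.json").read_text())

    @pytest.fixture(scope="module")
    def detector():
        return TargetAcquisition()

    @pytest.mark.parametrize("image_name,has_pylons", sorted(EXPECTED.items()))
    def test_predict_bounding_boxes(detector, image_name, has_pylons):
        frame = cv2.imread(str(TEST_IMAGE_DIR / image_name))
        assert frame is not None, f"could not load {image_name}"

        detector.set_curr_frame(frame)  # assumed setter for the current frame
        boxes = detector.predict()      # assumed to return the detected bounding boxes

        if has_pylons:
            assert len(boxes) > 0, f"{image_name} should produce bounding boxes"
        else:
            assert len(boxes) == 0, f"{image_name} should produce no bounding boxes"

        # Save the raw prediction so box positions can be checked manually afterwards
        RESULTS_DIR.mkdir(parents=True, exist_ok=True)
        (RESULTS_DIR / f"{image_name}.boxes.txt").write_text(str(boxes))

Parametrizing over the manifest keeps the has-boxes/no-boxes assertion automatic while leaving the box-position check to the manual review step above.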
