ML Models Overview

Task of ML Model Team

“The model is only as good as the data.” —Daniel Puratich?

As the AEAC Student UAS Competition CONOPS is different every year, a new model needs to be trained every year as well. Unfortunately, this takes a lot of manual labour: the process looks roughly like what’s outlined below, where steps (1-3) are usually performed once for the raw dataset, and steps (4-6) are done afterwards:

See Dataset Creation and Preparation for details on steps (1-3).

  1. Data Collection: Creating a dataset of raw in-flight images.

  2. Data Cleaning: “Cleaning” the images by removing certain anomalies.

  3. Data Labelling: Labelling the images as per the required task (e.g. landing pad detection).
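
For YOLO-family detectors like the one we currently use (see Brief Overview of Current Model below), labels are stored in YOLO format: one text file per image, one line per object, containing the class index followed by the normalized box centre and size. A minimal sketch of producing such a line from a pixel-space bounding box (the file name and class index here are hypothetical):

    # Convert a pixel-space bounding box (x_min, y_min, x_max, y_max) into a
    # YOLO-format label line: "<class> <cx> <cy> <w> <h>", all normalized to [0, 1].
    def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
        cx = (x_min + x_max) / 2 / img_w  # box centre x
        cy = (y_min + y_max) / 2 / img_h  # box centre y
        w = (x_max - x_min) / img_w       # box width
        h = (y_max - y_min) / img_h       # box height
        return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

    # e.g. a landing pad (class 0) in a 1920x1080 image:
    print(to_yolo_label(0, 900, 500, 1020, 620, 1920, 1080))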

See Dataset Augmentation for details on steps (4-5).

  4. Data Augmentation: Expanding the raw dataset with augmented copies of the images for increased variety (see the sketch after this list).

  5. Data Download and Recombination: Downloading the augmented dataset and restructuring it in preparation for model training.
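
The transforms themselves are documented on the Dataset Augmentation page; purely as an illustration, here is a minimal sketch using the albumentations library (an assumption, not necessarily our actual tooling), which keeps YOLO-format boxes consistent with the augmented image:

    import albumentations as A
    import cv2

    # Hypothetical example inputs: one image and one labelled landing pad.
    image = cv2.imread("frame_0001.jpg")  # HxWx3 array; placeholder file name
    bboxes = [(0.5, 0.5, 0.1, 0.1)]       # YOLO-format (cx, cy, w, h), normalized
    class_labels = [0]                    # class index per box

    # Illustrative transforms; the real pipeline's choices may differ.
    transform = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.RandomBrightnessContrast(p=0.3),
            A.Rotate(limit=15, p=0.5),
        ],
        # Keep the YOLO-format boxes in sync with the image transforms.
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)
    aug_image, aug_bboxes = augmented["image"], augmented["bboxes"]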

See Model Training Repository for details on step (6):

  6. Model Training.
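
Since we currently train YOLOv8 (see below), step (6) with the ultralytics package looks roughly like the sketch below; the base weights, dataset YAML, and hyperparameters are placeholders, and the real configuration lives in the Model Training Repository.

    from ultralytics import YOLO

    # Start from pretrained YOLOv8 weights ("yolov8n.pt" is the nano variant;
    # the variant and hyperparameters here are placeholders).
    model = YOLO("yolov8n.pt")

    # "data.yaml" points at the downloaded/recombined dataset from step (5).
    model.train(data="data.yaml", epochs=100, imgsz=640)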

 

Ideally, we want to be able to reuse the major ideas developed through this process to train the same model for various use cases. For instance, although we may be training to detect landing pads, the methods above should be generalisable to detecting other objects simply by modifying the dataset used.

With this in mind, we also want to prioritize using and creating models that work well when trained on smaller datasets, since steps (1-3) and (6) described above are laborious and/or resource-intensive. However, we must be careful that models trained on smaller datasets still perform well on a full test set.

  • To-do: Research model evaluation methods to find comprehensive ways to test model performance.
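
As a starting point for YOLO-family models, the ultralytics package has built-in validation that reports standard detection metrics such as precision, recall, and mAP; a minimal sketch (the weights and dataset paths are placeholders):

    from ultralytics import YOLO

    model = YOLO("runs/detect/train/weights/best.pt")  # placeholder weights path

    # Evaluate on the validation split defined in the dataset YAML.
    metrics = model.val(data="data.yaml")
    print(metrics.box.map50)  # mAP at IoU 0.50
    print(metrics.box.map)    # mAP averaged over IoU 0.50-0.95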

 

Brief Overview of Current Model

Currently, we are using YOLOv8 for object detection with the following dataset: insert dataset link.
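
For reference, running inference with a trained YOLOv8 model through the ultralytics API looks like the sketch below (the weights and image paths are placeholders):

    from ultralytics import YOLO

    model = YOLO("best.pt")            # placeholder: path to our trained weights
    results = model("test_image.jpg")  # placeholder input image

    # Each result holds the detected boxes, class indices, and confidences.
    for box in results[0].boxes:
        print(box.cls, box.conf, box.xyxy)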

Update information about the current model here! Also include link(s) to previous versions/models/tasks that we may have, for future reference.