2024-04-03 Dataset Augmentation

Various Methods of Data Augmentation for Drone Computer Vision

Background

Last year (2023), the competition team used the following settings for their dataset:

I do not know how well these augmentations worked for the team. 

Moving Forward:

Roboflow's documentation suggests adding exposure, blur, and noise adjustments on top of what we used last year. They also have a detailed video on data augmentation for aerial data.

Cropping is suggested to help the model identify objects at different zoom levels, but it can also cut out part of the landing pad. Since we do not want partially cropped images of the landing pad, this augmentation is a no-go for now. If we have the time, we could try training a model that also detects sections of the pads, but that may not be necessary.
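If we ever revisit cropping, one way to avoid partially cut pads is to reject any crop that truncates the pad's bounding box. A minimal standard-library sketch (the box coordinates and retry count are placeholders, not settings from our pipeline):

```python
import random

def crop_keeps_pad(crop, pad_box):
    """True if pad_box (x1, y1, x2, y2) lies fully inside crop (x1, y1, x2, y2)."""
    cx1, cy1, cx2, cy2 = crop
    px1, py1, px2, py2 = pad_box
    return cx1 <= px1 and cy1 <= py1 and px2 >= px2 - 0 and px2 <= cx2 and py2 <= cy2

def sample_safe_crop(img_w, img_h, pad_box, crop_w, crop_h, tries=100, rng=None):
    """Sample a random crop that never truncates the pad; None if no luck."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    for _ in range(tries):
        x1 = rng.randint(0, img_w - crop_w)
        y1 = rng.randint(0, img_h - crop_h)
        crop = (x1, y1, x1 + crop_w, y1 + crop_h)
        if crop_keeps_pad(crop, pad_box):
            return crop
    return None
```

Roboflow's hosted cropping does not expose this kind of constraint, so this would only apply if we generated crops ourselves.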

For blur and noise: if we have an excellent camera, noise may not be a big problem, but a little blur could help replicate the quality of the drone's camera. Blur would also be useful since the landing pad may not be as clear from higher altitudes. If the dataset images match the quality of the landing pad images from the data labeling documentation, blur and noise should be incorporated.
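To preview what blur and noise actually do to an image before committing to Roboflow settings, both are easy to replicate locally. A minimal standard-library sketch on a grayscale image stored as a list of rows (the sigma and kernel size are placeholder values, not decided ranges):

```python
import random

def add_noise(img, sigma=10.0, rng=None):
    """Simulate sensor noise: add zero-mean Gaussian noise per pixel, clip to 0-255."""
    rng = rng or random.Random(0)  # seeded so the sketch is reproducible
    return [[min(max(int(round(p + rng.gauss(0.0, sigma))), 0), 255) for p in row]
            for row in img]

def box_blur(img, k=3):
    """Cheap k x k box blur with edge clamping (grayscale, list of rows)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            total = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at image edges
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
            row.append(total // (k * k))
        out.append(row)
    return out
```

Running `box_blur` on a few labeled pad images at different kernel sizes would give us a quick visual check of how much blur the detector has to tolerate at altitude.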

Use the 14-day free premium plan (no credit card needed) so we can generate 5x instead of 3x augmented images for a larger dataset.

 

Augmentations to test out in the future (ranges are still being decided):

Augmentation 1:

  • Flip: Horizontal, Vertical

  • Hue: -27° and +27°

  • Saturation: -75% and +75%

  • Brightness: -25% and 0%
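Assuming hue is measured in degrees and saturation/brightness in percent (as Roboflow's UI presents them), Augmentation 1's color jitter can be sketched per pixel with only the standard library; the defaults below mirror the ranges listed above but are not final:

```python
import colorsys
import random

def hsv_jitter(rgb, hue_deg=27, sat_pct=75, val_pct=25, rng=None):
    """Randomly jitter one RGB pixel (0-255 channels):
    hue by +/-hue_deg degrees, saturation by +/-sat_pct percent,
    brightness by -val_pct..0 percent (darken only, per the range above)."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + rng.uniform(-hue_deg, hue_deg) / 360.0) % 1.0  # hue wraps around
    s = min(max(s * (1 + rng.uniform(-sat_pct, sat_pct) / 100.0), 0.0), 1.0)
    v = min(max(v * (1 + rng.uniform(-val_pct, 0) / 100.0), 0.0), 1.0)
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))
```

In practice Roboflow applies these server-side at export time; this sketch is only for eyeballing how extreme a ±75% saturation swing looks on pad imagery.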

 

Augmentation 2:

  • Flip: Horizontal, Vertical

  • Hue: -27° and +27° [range]

  • Saturation: -75% and +75% [range]

  • Brightness: -25% and 0% [range]

  • Blur: []

  • Noise: []

 

Augmentation 3:

  • Flip: Horizontal, Vertical

  • Hue: -27° and +27° [range]

  • Saturation: -75% and +75% [range]

  • Brightness: -25% and 0% [range]

  • Crop: []

  • Blur: []

  • Noise: []