
Various Methods of Data Augmentation for Drone Computer Vision

Background

Last year the competition team used the following settings for their dataset (the base settings are reproduced as Augmentation 1 below):

I do not know how well these augmentations worked for the team.

Moving Forward

The Roboflow documentation suggests using cropping and exposure adjustments in conjunction with the augmentations we used last year. They also have a detailed video on data augmentation for aerial imagery.

Cropping is suggested to help the model recognize objects at different apparent sizes, which corresponds to viewing them from different altitudes. I am not sure how useful this will be for us, since we only need to detect the landing pad.
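To make the altitude/scale idea concrete, here is a minimal numpy sketch of a random scale crop. This is our own illustration, not Roboflow's implementation; the function name and the `min_frac` parameter are assumptions.

```python
import numpy as np

def random_scale_crop(image, min_frac=0.5, rng=None):
    """Crop a random region covering min_frac..1.0 of each dimension.

    Resizing the crop back up makes the object appear larger, roughly
    simulating the same scene viewed from a lower altitude.
    `min_frac` is an assumed knob, not a Roboflow setting.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    frac = rng.uniform(min_frac, 1.0)
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    return image[top:top + ch, left:left + cw]
```

A resize step (e.g. with OpenCV or PIL) would follow in a real pipeline so all crops return to the model's input size.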

For blur/noise: if we have a good camera, noise may not be a major problem, but blur would be useful since the landing pad may be harder to make out from higher altitudes. If our images look like those in the data labelling documentation, then we should definitely incorporate it.
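The two effects can be sketched in a few lines of numpy. This is a simplified stand-in for Roboflow's blur and noise sliders, with assumed parameters (`sigma`, kernel size `k`), not the actual implementation.

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0, rng=None):
    """Additive Gaussian sensor noise; sigma is an assumed strength."""
    rng = rng if rng is not None else np.random.default_rng()
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def box_blur(image, k=3):
    """Simple k x k box blur approximating defocus/motion softness."""
    pad = k // 2
    padded = np.pad(image.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return (out / (k * k)).astype(np.uint8)
```

In practice Roboflow (or OpenCV's `cv2.GaussianBlur`) would be used instead; the sketch just shows what the augmentations do to the pixels.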

We can also use the 14-day free premium plan (no credit card needed), which lets us generate 5x augmented copies per image instead of 3x for a larger dataset.

Augmentation 1:

  • Flip: Horizontal, Vertical

  • Hue: between -27° and +27°

  • Saturation: between -75% and +75%

  • Brightness: between -25% and 0%
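The settings above can be sketched as a numpy transform. This is only an illustration of Augmentation 1, assuming the flip/brightness semantics above; hue and saturation jitter would need an RGB-to-HSV round trip and are noted but omitted for brevity. The 50% flip probability is our assumption.

```python
import numpy as np

def augmentation_1(image, rng=None):
    """Sketch of Augmentation 1: random H/V flips plus a brightness
    shift drawn from the -25%..0% range. Hue (-27..+27 degrees) and
    saturation (-75%..+75%) jitter are omitted here; they would be
    applied the same way in HSV space.
    """
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < 0.5:          # assumed 50% flip probability
        image = image[:, ::-1]      # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]      # vertical flip
    factor = rng.uniform(0.75, 1.0)  # brightness: -25% .. 0%
    out = image.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)
```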

Augmentation 2:

  • Flip: Horizontal, Vertical

  • Hue: between -27° and +27°

  • Saturation: between -75% and +75%

  • Brightness: between -25% and 0%

  • Blur: []

  • Noise: []

Augmentation 3:

  • Flip: Horizontal, Vertical

  • Hue: between -27° and +27°

  • Saturation: between -75% and +75%

  • Brightness: between -25% and 0%

  • Crop: []

  • Blur: []

  • Noise: []

  • No labels