TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation

Samuel G. Müller, Frank Hutter

Automatic augmentation methods have recently become a crucial pillar for strong model performance in vision tasks. While existing automatic augmentation methods need to trade off simplicity, cost and performance, we present a most simple baseline, TrivialAugment, that outperforms previous methods for almost free. TrivialAugment is parameter-free and only applies a single augmentation to each image. Thus, TrivialAugment’s effectiveness is very unexpected to us and we performed very thorough experiments to study its performance. First, we compare TrivialAugment to previous state-of-the-art methods in a variety of image classification scenarios. Then, we perform multiple ablation studies with different augmentation spaces, augmentation methods and setups to understand the crucial requirements for its performance. Additionally, we provide a simple interface to facilitate the widespread adoption of automatic augmentation methods, as well as our full code base for reproducibility. Since our work reveals a stagnation in many parts of automatic augmentation research, we end with a short proposal of best practices for sustained future progress in automatic augmentation methods.


The paper's method has been implemented by the Torchvision team (as `transforms.TrivialAugmentWide`), as explained here:

The implementation seems to reduce to random sampling, i.e., a OneOf/Choice over a given list of augmentations together with a magnitude space for their parameters.
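That matches the paper: the whole algorithm is to sample one augmentation uniformly from a fixed list, sample a magnitude uniformly from a discretized range, and apply that single augmentation once. A minimal sketch of this logic, where the augmentation list and the stub operations are illustrative placeholders (not torchvision's actual augmentation space):

```python
import random

# Illustrative augmentation space. The real method uses image ops such as
# ShearX, Rotate, Solarize, etc.; here each op is a stub that records what
# it would do to the "image" (represented as a list of applied ops).
AUGMENTATIONS = [
    ("identity", lambda img, m: img),
    ("rotate",   lambda img, m: img + [("rotate", m)]),
    ("shear_x",  lambda img, m: img + [("shear_x", m)]),
    ("solarize", lambda img, m: img + [("solarize", m)]),
]

NUM_MAGNITUDE_BINS = 31  # discretized magnitude space


def trivial_augment(img):
    """Apply exactly one uniformly sampled augmentation at a uniformly
    sampled magnitude -- there are no tunable policy parameters."""
    name, op = random.choice(AUGMENTATIONS)
    magnitude = random.randint(0, NUM_MAGNITUDE_BINS - 1)
    return op(img, magnitude)


if __name__ == "__main__":
    print(trivial_augment([]))  # e.g. [('rotate', 17)] or [] for identity
```

So yes, in essence it is "OneOf over the op list, plus a uniformly chosen magnitude", applied once per image.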

Any opinions?