Multi-view and multi-augmentation for self-supervised visual representation learning

Van Nhiem Tran, Chi En Huang, Shen Hsuan Liu, Muhammad Saqlain Aslam, Kai Lin Yang, Yung-Hui Li, Jia Ching Wang

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

In the real world, the appearance of identical objects varies with resolution, viewing angle, and illumination conditions. This suggests that the data augmentation pipeline could benefit downstream tasks by exploring the overall data appearance in a self-supervised framework. Previous work on self-supervised learning that yields outstanding performance relies heavily on data augmentations such as cropping and color distortion. However, most methods use a static data augmentation pipeline, which limits feature exploration. To generate representations that are scale-invariant, carry explicit information about diverse semantic features, and remain invariant to nuisance factors such as relative object location, brightness, and color distortion, we propose the Multi-View, Multi-Augmentation (MVMA) framework. MVMA consists of multiple augmentation pipelines, each comprising an assortment of augmentation policies. By refining the baseline self-supervised framework to investigate a broader range of image appearances through modified loss objective functions, MVMA enhances the exploration of image features through diverse data augmentation techniques. Transferring the resulting representations, learned with convolutional networks (ConvNets), to downstream tasks yields significant improvements over the state-of-the-art DINO across a wide range of vision tasks: +4.1% and +8.8% top-1 accuracy on ImageNet with linear evaluation and a k-NN classifier, respectively. Moreover, MVMA achieves significant improvements of +5% AP50 and +7% AP50m on COCO object detection and segmentation.
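The core idea of the abstract, multiple augmentation pipelines that each sample from their own pool of policies to produce diverse views of one image, can be sketched as follows. This is a minimal illustration using toy grayscale images and hypothetical policy names; it is not the authors' implementation, which modifies a DINO-style objective and uses full image transforms.

```python
import random

# Toy "augmentation policies" on a 2D list of pixel values (assumptions,
# not the MVMA paper's actual transforms).
def flip(img):
    # Horizontal flip: reverse each row.
    return [row[::-1] for row in img]

def brighten(img, delta=10):
    return [[min(255, px + delta) for px in row] for row in img]

def darken(img, delta=10):
    return [[max(0, px - delta) for px in row] for row in img]

def identity(img):
    return img

# Each pipeline holds a pool of candidate policies; one policy per pool
# is sampled for every generated view, so views differ from each other.
PIPELINES = [
    [flip, identity],              # geometric pool
    [brighten, darken, identity],  # photometric pool
]

def multi_view(img, n_views=4, rng=random):
    """Generate n_views augmented views of img, applying one sampled
    policy from each pipeline's pool in sequence."""
    views = []
    for _ in range(n_views):
        v = img
        for pool in PIPELINES:
            v = rng.choice(pool)(v)
        views.append(v)
    return views

img = [[0, 64], [128, 255]]  # toy 2x2 grayscale "image"
views = multi_view(img, n_views=4)
```

In a real setting the policy pools would contain transforms such as random resized crops, color jitter, and blur, and all views of the same image would be fed to the self-supervised loss as positives.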

Original language: English
Pages (from-to): 629-656
Number of pages: 28
Journal: Applied Intelligence
Volume: 54
Issue number: 1
State: Published - Jan 2024

Keywords

  • Data augmentation policies
  • Metric learning
  • Multi-augmentation
  • Nuisance factors
  • Scale-invariant representation learning
  • SSL augmentation pipelines

