Multi-view and multi-augmentation for self-supervised visual representation learning

Van Nhiem Tran, Chi En Huang, Shen Hsuan Liu, Muhammad Saqlain Aslam, Kai Lin Yang, Yung-Hui Li, Jia Ching Wang

Research output: Contribution to journal › Journal article › Peer-reviewed

2 citations (Scopus)


In the real world, the appearance of identical objects depends on factors as varied as resolution, angle, illumination conditions, and viewing perspective. This suggests that the data augmentation pipeline could benefit downstream tasks by exploring the overall range of data appearances within a self-supervised framework. Previous self-supervised learning methods that yield outstanding performance rely heavily on data augmentations such as cropping and color distortion. However, most methods use a static data augmentation pipeline, which limits the extent of feature exploration. To generate representations that encompass scale-invariant, explicit information about various semantic features and are invariant to nuisance factors such as relative object location, brightness, and color distortion, we propose the Multi-View, Multi-Augmentation (MVMA) framework. MVMA consists of multiple augmentation pipelines, each comprising an assortment of augmentation policies. By refining the baseline self-supervised framework to investigate a broader range of image appearances through modified loss objective functions, MVMA enhances the exploration of image features through diverse data augmentation techniques. Transferring the resulting representations, learned with convolutional networks (ConvNets), to downstream tasks yields significant improvements over the state-of-the-art DINO across a wide range of vision and classification tasks: +4.1% and +8.8% top-1 on the ImageNet dataset with linear evaluation and a k-NN classifier, respectively. Moreover, MVMA achieves a significant improvement of +5% AP50 and +7% AP50m on COCO object detection and segmentation.
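The core idea of multiple augmentation pipelines, each built from its own assortment of policies and sampled per view, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy "image" (a flat list of intensities), the simplified ops (`random_crop`, `color_jitter`, `horizontal_flip`), and the pipeline definitions are all assumptions for demonstration.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Toy augmentation ops on a flat list of pixel intensities; a real system
# would use library image transforms instead.
def random_crop(img):
    # Keep a random contiguous half of the pixels (stand-in for cropping).
    n = len(img) // 2
    start = random.randrange(len(img) - n + 1)
    return img[start:start + n]

def color_jitter(img):
    # Scale intensities by a random factor (stand-in for color distortion).
    s = random.uniform(0.8, 1.2)
    return [p * s for p in img]

def horizontal_flip(img):
    return img[::-1]

# Each pipeline is an assortment of augmentation policies applied in order;
# MVMA's actual pipelines and policies differ.
PIPELINES = [
    [random_crop, color_jitter],
    [random_crop, horizontal_flip],
    [color_jitter, horizontal_flip],
]

def multi_view(img, n_views=6):
    """Produce diverse augmented views by sampling one pipeline per view."""
    views = []
    for _ in range(n_views):
        pipeline = random.choice(PIPELINES)
        view = img
        for op in pipeline:
            view = op(view)
        views.append(view)
    return views

views = multi_view([float(i) for i in range(8)], n_views=6)
```

In a self-supervised setting, the resulting views of the same image would be fed through the network and pulled together by the (modified) loss objective, so the learned representation becomes invariant to the sampled nuisance factors.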

Pages (from-to): 629-656
Journal: Applied Intelligence
Publication status: Published - Jan 2024

