Westlake-AI/Awesome-Mixup

Awesome-Mixup


Welcome to Awesome-Mixup, a carefully curated survey of Mixup algorithms implemented in PyTorch, aimed at meeting the varied needs of the research community. Mixup is a family of methods that alleviate model overfitting and poor generalization. As a "data-centric" augmentation, Mixup can be applied to various training paradigms and data modalities.

If this repository has been helpful to you, please consider giving it a ⭐️ to show your support. Your support helps us reach more researchers and contributes to the growth of this resource. Thank you!

Introduction

We summarize awesome mixup data augmentation methods for visual representation learning in various scenarios from 2018 to 2024.

The list of awesome mixup augmentation methods is summarized in chronological order and is continuously updated. The main branch is maintained in line with Awesome-Mixup in OpenMixup and Awesome-Mix, and we are working on a comprehensive survey of mixup augmentations. See our survey, A Survey on Mixup Augmentations and Beyond, for more detailed information.

  • To find related papers and their relationships, check out Connected Papers, which visualizes the academic field in a graph representation.
  • To export BibTeX citations, check out the paper's arXiv or Semantic Scholar page for professionally formatted references.

Figure of Contents

You can directly view the figure of mixup augmentation methods that we summarize.

Table of Contents
    Sample Mixup Policies in SL
    1. Static Linear
    2. Feature-based
    3. Cutting-based
    4. K Samples Mixup
    5. Random Policies
    6. Style-based
    7. Saliency-based
    8. Attention-based
    9. Generating Samples
    Label Mixup Policies in SL
    1. Optimizing Calibration
    2. Area-based
    3. Loss Object
    4. Random Label Policies
    5. Optimizing Mixing Ratio
    6. Generating Label
    7. Attention Score
    8. Saliency Token
    Self-Supervised Learning
    1. Contrastive Learning
    2. Masked Image Modeling
    Semi-Supervised Learning
    1. Semi-Supervised Learning
    CV Downstream Tasks
    1. Regression
    2. Long-Tailed Distribution
    3. Segmentation
    4. Object Detection
    Training Paradigms
    1. Federated Learning
    2. Adversarial Attack and Adversarial Training
    3. Domain Adaptation
    4. Knowledge Distillation
    5. Multi Modal
    Beyond Vision
    1. NLP
    2. GNN
    3. 3D Point
    4. Other
  1. Analysis and Theorem
  2. Survey
  3. Benchmark
  4. Classification Results on Datasets
  5. Related Datasets Link
  6. Contribution
  7. License
  8. Acknowledgement
  9. Related Project

Sample Mixup Policies in SL

Static Linear

  • mixup: Beyond Empirical Risk Minimization
    Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz
    ICLR'2018 [Paper] [Code]

    MixUp Framework

  • Between-class Learning for Image Classification
    Yuji Tokozume, Yoshitaka Ushiku, Tatsuya Harada
    CVPR'2018 [Paper] [Code]

    BC Framework

  • Preventing Manifold Intrusion with Locality: Local Mixup
    Raphael Baena, Lucas Drumetz, Vincent Gripon
    EUSIPCO'2022 [Paper]

    LocalMixup Framework

  • AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
    Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan
    ICLR'2020 [Paper] [Code]

    AugMix Framework

  • DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness
    Ryuichiro Hataya, Hideki Nakayama
    arXiv'2021 [Paper]

    DJMix Framework

  • PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures
    Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt
    CVPR'2022 [Paper] [Code]

    PixMix Framework

  • IPMix: Label-Preserving Data Augmentation Method for Training Robust Classifiers
    Zhenglin Huang, Xiaoan Bao, Na Zhang, Qingqi Zhang, Xiaomei Tu, Biao Wu, Xi Yang
    NIPS'2023 [Paper] [Code]

    IPMix Framework

(back to top)
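As a reference point for the static linear policies above, vanilla mixup (Zhang et al., ICLR 2018) takes a convex combination of two inputs and their one-hot labels with a Beta-distributed weight. A minimal NumPy sketch (function names are ours; the referenced implementations operate on PyTorch tensors):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Vanilla mixup: interpolate two inputs and their one-hot labels
    with a single weight lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2  # mixed input
    y = lam * y1 + (1.0 - lam) * y2  # mixed (soft) label
    return x, y, lam
```

Follow-ups such as Manifold Mixup apply the same interpolation to hidden features rather than raw pixels.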

Feature-based

  • Manifold Mixup: Better Representations by Interpolating Hidden States
    Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio
    ICML'2019 [Paper] [Code]

    ManifoldMix Framework

  • PatchUp: A Regularization Technique for Convolutional Neural Networks
    Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar
    arXiv'2020 [Paper] [Code]

    PatchUp Framework

  • On Feature Normalization and Data Augmentation
    Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, Kilian Q. Weinberger
    CVPR'2021 [Paper] [Code]

    MoEx Framework

  • Catch-Up Mix: Catch-Up Class for Struggling Filters in CNN
    Minsoo Kang, Minkoo Kang, Suhyun Kim
    AAAI'2024 [Paper]

    Catch-Up-Mix Framework

(back to top)

Cutting-based

  • CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
    Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo
    ICCV'2019 [Paper] [Code]

    CutMix Framework

  • Improved Mixed-Example Data Augmentation
    Cecilia Summers, Michael J. Dinneen
    WACV'2019 [Paper] [Code]

    MixedExamples Framework

  • Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy
    Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu
    arXiv'2019 [Paper]

    Pani VAT Framework

  • FMix: Enhancing Mixed Sample Data Augmentation
    Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare
    arXiv'2020 [Paper] [Code]

    FMix Framework

  • SmoothMix: a Simple Yet Effective Data Augmentation to Train Robust Classifiers
    Jin-Ha Lee, Muhammad Zaigham Zaheer, Marcella Astrid, Seung-Ik Lee
    CVPRW'2020 [Paper] [Code]

    SmoothMix Framework

  • GridMix: Strong regularization through local context mapping
    Kyungjune Baek, Duhyeon Bang, Hyunjung Shim
    Pattern Recognition'2021 [Paper] [Code]

    GridMixup Framework

  • ResizeMix: Mixing Data with Preserved Object Information and True Labels
    Jie Qin, Jiemin Fang, Qian Zhang, Wenyu Liu, Xingang Wang, Xinggang Wang
    arXiv'2020 [Paper] [Code]

    ResizeMix Framework

  • StackMix: A complementary Mix algorithm
    John Chen, Samarth Sinha, Anastasios Kyrillidis
    UAI'2022 [Paper]

    StackMix Framework

  • SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation
    Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika, Mahmoud Melkemi
    arXiv'2022 [Paper] [Code]

    SuperpixelGridCut Framework

  • A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective
    Chanwoo Park, Sangdoo Yun, Sanghyuk Chun
    NIPS'2022 [Paper] [Code]

    MSDA Framework

  • You Only Cut Once: Boosting Data Augmentation with a Single Cut
    Junlin Han, Pengfei Fang, Weihao Li, Jie Hong, Mohammad Ali Armin, Ian Reid, Lars Petersson, Hongdong Li
    ICML'2022 [Paper] [Code]

    YOCO Framework

  • StarLKNet: Star Mixup with Large Kernel Networks for Palm Vein Identification
    Xin Jin, Hongyu Zhu, Mounîm A. El Yacoubi, Hongchao Liao, Huafeng Qin, Yun Jiang
    arXiv'2024 [Paper]

    StarMix Framework

(back to top)
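The cutting-based methods above descend from CutMix, which pastes a random rectangle from one image into another and corrects the label weight by the actual pasted area. A minimal NumPy sketch of that idea (helper names are ours):

```python
import numpy as np

def rand_bbox(h, w, lam, rng):
    """Sample a CutMix-style box whose area is roughly (1 - lam) * h * w."""
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    return y1, y2, x1, x2

def cutmix(img_a, img_b, lam, rng=np.random.default_rng(0)):
    """Paste a random crop of img_b into img_a; return the mixed image
    and the label weight adjusted to the area actually pasted."""
    h, w = img_a.shape[:2]
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    out = img_a.copy()
    out[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return out, lam_adj
```

Variants in this section mainly change the mask shape (Fourier masks in FMix, grids in GridMix, superpixels in SuperpixelGridMix) while keeping the area-based label correction.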

K Samples Mixup

  • You Only Look Once: Unified, Real-Time Object Detection
    Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi
    CVPR'2016 [Paper] [Code]

    Mosaic

  • Data Augmentation using Random Image Cropping and Patching for Deep CNNs
    Ryo Takahashi, Takashi Matsubara, Kuniaki Uehara
    IEEE TCSVT'2020 [Paper]

    RICAP

  • k-Mixup Regularization for Deep Learning via Optimal Transport
    Kristjan Greenewald, Anming Gu, Mikhail Yurochkin, Justin Solomon, Edward Chien
    arXiv'2021 [Paper]

    k-Mixup Framework

  • Observations on K-image Expansion of Image-Mixing Augmentation for Classification
    Joonhyun Jeong, Sungmin Cha, Youngjoon Yoo, Sangdoo Yun, Taesup Moon, Jongwon Choi
    IEEE Access'2021 [Paper] [Code]

    DCutMix Framework

  • MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks
    Alexandre Rame, Remy Sun, Matthieu Cord
    ICCV'2021 [Paper]

    MixMo Framework

  • Cut-Thumbnail: A Novel Data Augmentation for Convolutional Neural Network
    Tianshu Xie, Xuan Cheng, Minghui Liu, Jiali Deng, Xiaomin Wang, Ming Liu
    ACM MM'2021 [Paper]

    Cut-Thumbnail

(back to top)
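The K-sample methods above generalize pairwise interpolation to more than two images. One common formulation (used, e.g., by DCutMix in spirit) draws the mixing weights from a Dirichlet distribution; a minimal NumPy sketch under that assumption:

```python
import numpy as np

def k_mixup(xs, ys, alpha=1.0, rng=np.random.default_rng(0)):
    """Mix K samples with Dirichlet(alpha) weights, generalizing the
    two-sample Beta-weighted mixup to K inputs and labels."""
    w = rng.dirichlet([alpha] * len(xs))
    x = sum(wi * xi for wi, xi in zip(w, xs))
    y = sum(wi * yi for wi, yi in zip(w, ys))
    return x, y, w
```

Mosaic and RICAP instead tile K crops spatially, but the label side is the same weighted combination, with weights given by each tile's area.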

Random Policies

  • RandomMix: A mixed sample data augmentation method with multiple mixed modes
    Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
    arXiv'2022 [Paper]

    RandomMix Framework

  • AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance
    Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
    ICME'2022 [Paper]

    AugRmixAT Framework

(back to top)

Style-based

  • StyleMix: Separating Content and Style for Enhanced Data Augmentation
    Minui Hong, Jinwoo Choi, Gunhee Kim
    CVPR'2021 [Paper] [Code]

    StyleMix Framework

  • Domain Generalization with MixStyle
    Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang
    ICLR'2021 [Paper] [Code]

    MixStyle Framework

  • AlignMix: Improving representation by interpolating aligned features
    Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis
    CVPR'2022 [Paper] [Code]

    AlignMixup Framework

  • Embedding Space Interpolation Beyond Mini-Batch, Beyond Pairs and Beyond Examples
    Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis
    NIPS'2023 [Paper]

    MultiMix Framework

(back to top)

Saliency-based

  • SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
    A F M Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae
    ICLR'2021 [Paper] [Code]

    SaliencyMix Framework

  • Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification
    Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, Marios Savvides
    ICASSP'2020 [Paper] [Code]

    AttentiveMix Framework

  • SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data
    Shaoli Huang, Xinchao Wang, Dacheng Tao
    AAAI'2021 [Paper] [Code]

    SnapMix Framework

  • Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition
    Hao Li, Xiaopeng Zhang, Hongkai Xiong, Qi Tian
    VCIP'2020 [Paper]

    AttributeMix Framework

  • Where to Cut and Paste: Data Regularization with Selective Features
    Jiyeon Kim, Ik-Hee Shin, Jong-Ryul Lee, Yong-Ju Lee
    ICTC'2020 [Paper] [Code]

    FocusMix Framework

  • PuzzleMix: Exploiting Saliency and Local Statistics for Optimal Mixup
    Jang-Hyun Kim, Wonho Choo, Hyun Oh Song
    ICML'2020 [Paper] [Code]

    PuzzleMix Framework

  • Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity
    Jang-Hyun Kim, Wonho Choo, Hosan Jeong, Hyun Oh Song
    ICLR'2021 [Paper] [Code]

    Co-Mixup Framework

  • SuperMix: Supervising the Mixing Data Augmentation
    Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi
    CVPR'2021 [Paper] [Code]

    SuperMix Framework

  • AutoMix: Unveiling the Power of Mixup for Stronger Classifiers
    Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, Zhiyuan Chen, Lirong Wu, Stan Z. Li
    ECCV'2022 [Paper] [Code]

    AutoMix Framework

  • Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup
    Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li
    arXiv'2021 [Paper] [Code]

    SAMix Framework

  • RecursiveMix: Mixed Learning with History
    Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang
    NIPS'2022 [Paper] [Code]

    RecursiveMix Framework

  • TransformMix: Learning Transformation and Mixing Strategies for Sample-mixing Data Augmentation
    Tsz-Him Cheung, Dit-Yan Yeung
    OpenReview'2023 [Paper]

    TransformMix Framework

  • GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps
    Minsoo Kang, Suhyun Kim
    AAAI'2023 [Paper] [Code]

    GuidedMixup Framework

  • GradSalMix: Gradient Saliency-Based Mix for Image Data Augmentation
    Tao Hong, Ya Wang, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Jinwen Ma
    ICME'2023 [Paper]

    GradSalMix Framework

  • LGCOAMix: Local and Global Context-and-Object-Part-Aware Superpixel-Based Data Augmentation for Deep Visual Recognition
    Fadi Dornaika, Danyang Sun
    TIP'2023 [Paper] [Code]

    LGCOAMix Framework

  • Adversarial AutoMixup
    Huafeng Qin, Xin Jin, Yun Jiang, Mounim A. El-Yacoubi, Xinbo Gao
    ICLR'2024 [Paper] [Code]

    AdAutoMix Framework

(back to top)

Attention-based

  • TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers
    Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu
    ECCV'2022 [Paper] [Code]

    TokenMix Framework

  • TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers
    Hyeong Kyu Choi, Joonmyung Choi, Hyunwoo J. Kim
    NIPS'2022 [Paper] [Code]

    TokenMixup Framework

  • ScoreNet: Learning Non-Uniform Attention and Augmentation for Transformer-Based Histopathological Image Classification
    Thomas Stegmüller, Behzad Bozorgtabar, Antoine Spahr, Jean-Philippe Thiran
    WACV'2023 [Paper]

    ScoreMix Framework

  • MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer
    Qihao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu
    ICLR'2023 [Paper] [Code]

    MixPro Framework

  • SMMix: Self-Motivated Image Mixing for Vision Transformers
    Mengzhao Chen, Mingbao Lin, ZhiHang Lin, Yuxin Zhang, Fei Chao, Rongrong Ji
    ICCV'2023 [Paper] [Code]

    SMMix Framework

(back to top)

Generating Samples

  • Data Augmentation via Latent Space Interpolation for Image Classification
    Xiaofeng Liu, Yang Zou, Lingsheng Kong, Zhihui Diao, Junliang Yan, Jun Wang, Site Li, Ping Jia, Jane You
    ICPR'2018 [Paper]

    AEE Framework

  • On Adversarial Mixup Resynthesis
    Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R Devon Hjelm, Yoshua Bengio, Christopher Pal
    NIPS'2019 [Paper] [Code]

    AMR Framework

  • AutoMix: Mixup Networks for Sample Interpolation via Cooperative Barycenter Learning
    Jianchao Zhu, Liangliang Shi, Junchi Yan, Hongyuan Zha
    ECCV'2020 [Paper]

    AutoMix Framework

  • VarMixup: Exploiting the Latent Space for Robust Training and Inference
    Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, Vineeth N Balasubramanian
    CVPRW'2021 [Paper]

    VarMixup Framework

  • DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models
    Khawar Islam, Muhammad Zaigham Zaheer, Arif Mahmood, Karthik Nandakumar
    CVPR'2024 [Paper] [Code]

    DiffuseMix Framework

(back to top)

Label Mixup Policies in SL

Optimizing Calibration

  • Combining Ensembles and Data Augmentation can Harm your Calibration
    Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran
    ICLR'2021 [Paper] [Code]

    CAMix Framework

  • RankMixup: Ranking-Based Mixup Training for Network Calibration
    Jongyoun Noh, Hyekang Park, Junghyup Lee, Bumsub Ham
    ICCV'2023 [Paper] [Code]

    RankMixup Framework

  • SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness
    Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Doguk Kim, Jinwoo Shin
    NIPS'2021 [Paper] [Code]

    SmoothMixup Framework

(back to top)

Area-based

  • TransMix: Attend to Mix for Vision Transformers
    Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai
    CVPR'2022 [Paper] [Code]

    TransMix Framework

  • Data Augmentation using Random Image Cropping and Patching for Deep CNNs
    Ryo Takahashi, Takashi Matsubara, Kuniaki Uehara
    IEEE TCSVT'2020 [Paper]

    RICAP

  • RecursiveMix: Mixed Learning with History
    Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang
    NIPS'2022 [Paper] [Code]

    RecursiveMix Framework

(back to top)

Loss Object

  • Harnessing Hard Mixed Samples with Decoupled Regularizer
    Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li
    NIPS'2023 [Paper] [Code]

    DecoupledMix Framework

  • MixupE: Understanding and Improving Mixup from Directional Derivative Perspective
    Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi
    UAI'2023 [Paper] [Code]

    MixupE Framework

(back to top)

Random Label Policies

  • Mixup Without Hesitation
    Hao Yu, Huanyu Wang, Jianxin Wu
    ICIG'2022 [Paper] [Code]

  • RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness
    Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania
    NIPS'2022 [Paper] [Code]

    RegMixup Framework

(back to top)

Optimizing Mixing Ratio

  • MixUp as Locally Linear Out-Of-Manifold Regularization
    Hongyu Guo, Yongyi Mao, Richong Zhang
    AAAI'2019 [Paper]

    AdaMixup Framework

  • RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out Distribution Robustness
    Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania
    NIPS'2022 [Paper] [Code]

    RegMixup Framework

  • Metamixup: Learning adaptive interpolation policy of mixup with metalearning
    Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, Heng Tao Shen
    IEEE TNNLS'2021 [Paper]

    MetaMixup Framework

  • LUMix: Improving Mixup by Better Modelling Label Uncertainty
    Shuyang Sun, Jie-Neng Chen, Ruifei He, Alan Yuille, Philip Torr, Song Bai
    ICASSP'2024 [Paper] [Code]

    LUMix Framework

  • SUMix: Mixup with Semantic and Uncertain Information
    Huafeng Qin, Xin Jin, Hongyu Zhu, Hongchao Liao, Mounîm A. El-Yacoubi, Xinbo Gao
    ECCV'2024 [Paper] [Code]

    SUMix Framework

(back to top)

Generating Label

  • GenLabel: Mixup Relabeling using Generative Models
    Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, Kangwook Lee
    ICML'2022 [Paper]

    GenLabel Framework

(back to top)

Attention Score

  • All Tokens Matter: Token Labeling for Training Better Vision Transformers
    Zihang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Yujun Shi, Xiaojie Jin, Anran Wang, Jiashi Feng
    NIPS'2021 [Paper] [Code]

    Token Labeling Framework

  • TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers
    Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu
    ECCV'2022 [Paper] [Code]

    TokenMix Framework

  • TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers
    Hyeong Kyu Choi, Joonmyung Choi, Hyunwoo J. Kim
    NIPS'2022 [Paper] [Code]

    TokenMixup Framework

  • MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer
    Qihao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu
    ICLR'2023 [Paper] [Code]

    MixPro Framework

  • Token-Label Alignment for Vision Transformers
    Han Xiao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu
    ICCV'2023 [Paper] [Code]

    TL-Align Framework

(back to top)

Saliency Token

  • SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data
    Shaoli Huang, Xinchao Wang, Dacheng Tao
    AAAI'2021 [Paper] [Code]

    SnapMix Framework

  • Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing
    Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
    AAAI'2022 [Paper]

    Saliency Grafting Framework

(back to top)
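Whatever the label policy above, the mixed target ultimately enters the loss. The standard pattern computes cross-entropy against both endpoint labels and weights the two terms by the mixing ratio, which is equivalent to cross-entropy against the linearly mixed soft label. A self-contained NumPy sketch (function names are ours):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixup_ce_loss(logits, y_a, y_b, lam):
    """Mixed cross-entropy: lam * CE(y_a) + (1 - lam) * CE(y_b),
    with y_a, y_b given as integer class indices."""
    log_p = np.log(softmax(logits))
    ce = lambda y: -log_p[np.arange(len(y)), y].mean()
    return lam * ce(y_a) + (1.0 - lam) * ce(y_b)
```

Area-based policies (e.g., TransMix) replace the scalar lam here with a per-sample weight derived from the mixed region or attention map.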

Self-Supervised Learning

Contrastive Learning

  • MixCo: Mix-up Contrastive Learning for Visual Representation
    Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun
    NIPSW'2020 [Paper] [Code]

    MixCo Framework

  • Hard Negative Mixing for Contrastive Learning
    Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus
    NIPS'2020 [Paper] [Code]

    MoCHi Framework

  • i-Mix A Domain-Agnostic Strategy for Contrastive Representation Learning
    Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
    ICLR'2021 [Paper] [Code]

    i-Mix Framework

  • Beyond Single Instance Multi-view Unsupervised Representation Learning
    Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei
    BMVC'2022 [Paper]

    BSIM Framework

  • Improving Contrastive Learning by Visualizing Feature Transformation
    Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen
    ICCV'2021 [Paper] [Code]

    FT Framework

  • Mix-up Self-Supervised Learning for Contrast-agnostic Applications
    Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann
    ICME'2021 [Paper]

    MixSSL Framework

  • Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
    Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng
    NIPS'2021 [Paper] [Code]

    Co-Tuning Framework

  • Center-wise Local Image Mixture For Contrastive Representation Learning
    Hao Li, Xiaopeng Zhang, Hongkai Xiong
    BMVC'2021 [Paper]

    CLIM Framework

  • Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning
    Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng
    OpenReview'2021 [Paper]

    PCEA Framework

  • Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup
    Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li
    arXiv'2021 [Paper] [Code]

    SAMix Framework

  • MixSiam: A Mixture-based Approach to Self-supervised Representation Learning
    Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du
    OpenReview'2021 [Paper]

    MixSiam Framework

  • Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing
    Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das
    NIPS'2021 [Paper] [Code]

    CoMix Framework

  • Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation
    Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing
    AAAI'2022 [Paper] [Code]

    Un-Mix Framework

  • m-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning
    Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang
    KDD'2022 [Paper] [Code]

    m-Mix Framework

  • A Simple Data Mixing Prior for Improving Self-Supervised Learning
    Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie
    CVPR'2022 [Paper] [Code]

    SDMP Framework

  • CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping
    Junlin Han, Lars Petersson, Hongdong Li, Ian Reid
    arXiv'2022 [Paper] [Code]

    CropMix Framework

  • Mixing up contrastive learning: Self-supervised representation learning for time series
    Kristoffer Wickstrøm, Michael Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen
    PR Letter'2022 [Paper]

    MCL Framework

  • Towards Domain-Agnostic Contrastive Learning
    Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le
    ICML'2021 [Paper]

    DACL Framework

  • ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning
    Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li
    ICML'2022 [Paper] [Code]

    ProGCL Framework

  • Evolving Image Compositions for Feature Representation Learning
    Paola Cascante-Bonilla, Arshdeep Sekhon, Yanjun Qi, Vicente Ordonez
    BMVC'2021 [Paper]

    PatchMix Framework

  • On the Importance of Asymmetry for Siamese Representation Learning
    Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen
    CVPR'2022 [Paper] [Code]

    ScaleMix Framework

  • Geodesic Multi-Modal Mixup for Robust Fine-Tuning
    Changdae Oh, Junhyuk So, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song
    NIPS'2023 [Paper] [Code]

    m2-Mix Framework

(back to top)

Masked Image Modeling

  • i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable
    Kevin Zhang, Zhiqiang Shen
    arXiv'2022 [Paper] [Code]

    i-MAE Framework

  • MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers
    Jihao Liu, Xin Huang, Jinliang Zheng, Yu Liu, Hongsheng Li
    CVPR'2023 [Paper] [Code]

    MixMAE Framework

  • Mixed Autoencoder for Self-supervised Visual Representation Learning
    Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung
    CVPR'2023 [Paper]

    MixedAE Framework

(back to top)

Semi-Supervised Learning

  • MixMatch: A Holistic Approach to Semi-Supervised Learning
    David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel
    NIPS'2019 [Paper] [Code]

    MixMatch Framework

  • ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring
    David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel
    ICLR'2020 [Paper] [Code]

    ReMixMatch Framework

  • DivideMix: Learning with Noisy Labels as Semi-supervised Learning
    Junnan Li, Richard Socher, Steven C.H. Hoi
    ICLR'2020 [Paper] [Code]

    DivideMix Framework

  • MixPUL: Consistency-based Augmentation for Positive and Unlabeled Learning
    Tong Wei, Feng Shi, Hai Wang, Wei-Wei Tu, Yu-Feng Li
    arXiv'2020 [Paper]

    MixPUL Framework

  • Milking CowMask for Semi-Supervised Image Classification
    Geoff French, Avital Oliver, Tim Salimans
    NIPS'2020 [Paper] [Code]

    CowMask Framework

  • Epsilon Consistent Mixup: Structural Regularization with an Adaptive Consistency-Interpolation Tradeoff
    Vincent Pisztora, Yanglan Ou, Xiaolei Huang, Francesca Chiaromonte, Jia Li
    arXiv'2021 [Paper]

    Epsilon Consistent Mixup (ϵmu) Framework

  • Who Is Your Right Mixup Partner in Positive and Unlabeled Learning
    Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang
    ICLR'2021 [Paper]

    P3Mix Framework

  • Interpolation Consistency Training for Semi-Supervised Learning
    Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, David Lopez-Paz
    NN'2022 [Paper]

    ICT Framework

  • Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation
    Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong
    arXiv'2023 [Paper]

    DCPA Framework

  • MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection
    JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak
    CVPR'2022 [Paper] [Code]

    MUM Framework

  • Harnessing Hard Mixed Samples with Decoupled Regularizer
    Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li
    NIPS'2023 [Paper] [Code]

    DFixMatch Framework

  • Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise
    Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi
    arXiv'2023 [Paper] [Code]

    MixEMatch Framework

  • LaserMix for Semi-Supervised LiDAR Semantic Segmentation
    Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu
    CVPR'2023 [Paper] [Code] [project]

    LaserMix Framework

  • PCLMix: Weakly Supervised Medical Image Segmentation via Pixel-Level Contrastive Learning and Dynamic Mix Augmentation
    Yu Lei, Haolun Luo, Lituan Wang, Zhenwei Zhang, Lei Zhang
    arXiv'2024 [Paper] [Code]

    PCLMix Framework

(back to top)

CV Downstream Tasks

Regression

  • RegMix: Data Mixing Augmentation for Regression
    Seong-Hyeon Hwang, Steven Euijong Whang
    arXiv'2021 [Paper]

    MixRL Framework

  • C-Mixup: Improving Generalization in Regression
    Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn
    NIPS'2022 [Paper] [Code]

    C-Mixup Framework

  • ExtraMix: Extrapolatable Data Augmentation for Regression using Generative Models
    Kisoo Kwon, Kuhwan Jeong, Sanghyun Park, Sangha Park, Hoshik Lee, Seung-Yeon Kwak, Sungmin Kim, Kyunghyun Cho
    OpenReview'2022 [Paper]

    ExtraMix Framework

  • Rank-N-Contrast: Learning Continuous Representations for Regression
    Kaiwen Zha, Peng Cao, Jeany Son, Yuzhe Yang, Dina Katabi
    NIPS'2023 [Paper] [Code]

  • Anchor Data Augmentation
    Nora Schneider, Shirin Goshtasbpour, Fernando Perez-Cruz
    NIPS'2023 [Paper]

  • Mixup Your Own Pairs
    Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen Zhou
    arXiv'2023 [Paper] [Code]

    SupReMix Framework

  • Tailoring Mixup to Data using Kernel Warping functions
    Quentin Bouniot, Pavlo Mozharovskyi, Florence d'Alché-Buc
    arXiv'2023 [Paper] [Code]

    Warped Mixup Framework

  • OmniMixup: Generalize Mixup with Mixing-Pair Sampling Distribution
    Anonymous
    OpenReview'2023 [Paper]

  • Augment on Manifold: Mixup Regularization with UMAP
    Yousef El-Laham, Elizabeth Fons, Dillon Daudert, Svitlana Vyetrenko
    ICASSP'2024 [Paper]

(back to top)

Long-Tailed Distribution

  • Remix: Rebalanced Mixup
    Hsin-Ping Chou, Shih-Chieh Chang, Jia-Yu Pan, Wei Wei, Da-Cheng Juan
    ECCVW'2020 [Paper]

    Remix Framework

  • Towards Calibrated Model for Long-Tailed Visual Recognition from Prior Perspective
    Zhengzhuo Xu, Zenghao Chai, Chun Yuan
    NIPS'2021 [Paper] [Code]

    UniMix Framework

  • Label-Occurrence-Balanced Mixup for Long-tailed Recognition
    Shaoyu Zhang, Chen Chen, Xiujuan Zhang, Silong Peng
    ICASSP'2022 [Paper]

    OBMix Framework

  • DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation for Long-Tailed Visual Recognition
    Jae Soon Baik, In Young Yoon, Jun Won Choi
    PR'2024 [Paper]

    DBN-Mix Framework

(back to top)

Segmentation

  • ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning
    Viktor Olsson, Wilhelm Tranheden, Juliano Pinto, Lennart Svensson
    WACV'2021 [Paper] [Code]

    ClassMix Framework

  • ChessMix: Spatial Context Data Augmentation for Remote Sensing Semantic Segmentation
    Matheus Barros Pereira, Jefersson Alex dos Santos
    SIBGRAPI'2021 [Paper]

    ChessMix Framework

  • CycleMix: A Holistic Strategy for Medical Image Segmentation from Scribble Supervision
    Ke Zhang, Xiahai Zhuang
    CVPR'2022 [Paper] [Code]

    CyclesMix Framework

  • InsMix: Towards Realistic Generative Data Augmentation for Nuclei Instance Segmentation
    Yi Lin, Zeyu Wang, Kwang-Ting Cheng, Hao Chen
    MICCAI'2022 [Paper] [Code]

    InsMix Framework

  • LaserMix for Semi-Supervised LiDAR Semantic Segmentation
    Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu
    CVPR'2023 [Paper] [Code] [project]

    LaserMix Framework

  • Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation
    Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong
    arXiv'2023 [Paper]

    DCPA Framework

  • SA-MixNet: Structure-aware Mixup and Invariance Learning for Scribble-supervised Road Extraction in Remote Sensing Images
    Jie Feng, Hao Huang, Junpeng Zhang, Weisheng Dong, Dingwen Zhang, Licheng Jiao
    arXiv'2024 [Paper] [Code]

    SA-MixNet Framework

  • Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation
    Qinghe Ma, Jian Zhang, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
    CVPR'2024 [Paper] [Code]

    MiDSS Framework

  • UniMix: Towards Domain Adaptive and Generalizable LiDAR Semantic Segmentation in Adverse Weather
    Haimei Zhao, Jing Zhang, Zhuo Chen, Shanshan Zhao, Dacheng Tao
    CVPR'2024 [Paper] [Code]

  • ModelMix: A Holistic Strategy for Medical Image Segmentation from Scribble Supervision
    Ke Zhang, Vishal M. Patel
    MICCAI'2024 [Paper]

    ModelMix Framework
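
Many of the entries above share a mask-based mixing step: pixels belonging to a subset of one image's classes are pasted onto another image, and the label map is mixed with the same mask. A minimal NumPy sketch in the spirit of ClassMix (`classmix` is an illustrative helper written for this list, not code from the linked repositories):

```python
import numpy as np

def classmix(img_a, lbl_a, img_b, lbl_b, rng=None):
    """Paste the pixels of a random half of image A's classes onto image B.

    img_*: (H, W, C) float arrays; lbl_*: (H, W) integer label maps.
    """
    rng = rng or np.random.default_rng()
    classes = np.unique(lbl_a)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(lbl_a, chosen)                      # (H, W) boolean mask
    mixed_img = np.where(mask[..., None], img_a, img_b)  # copy A where masked
    mixed_lbl = np.where(mask, lbl_a, lbl_b)             # same mask for labels
    return mixed_img, mixed_lbl
```

In semi-supervised variants the pasted labels come from pseudo-labels rather than ground truth, but the mixing operation itself is unchanged.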

(back to top)

Object Detection

  • MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection
    JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak
    CVPR'2022 [Paper] [Code]

    MUM Framework

  • Mixed Pseudo Labels for Semi-Supervised Object Detection
    Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang
    arXiv'2023 [Paper] [Code]

    MixPL Framework

  • MS-DETR: Efficient DETR Training with Mixed Supervision
    Chuyang Zhao, Yifan Sun, Wenhao Wang, Qiang Chen, Errui Ding, Yi Yang, Jingdong Wang
    arXiv'2024 [Paper] [Code]

    MS-DETR Framework

(back to top)

Other Applications

Training Paradigms

Federated Learning

  • XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning
    MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim
    ICML'2020 [Paper] [Code]

  • FedMix: Approximation of Mixup Under Mean Augmented Federated Learning
    Tehrim Yoon, Sumin Shin, Sung Ju Hwang, Eunho Yang
    ICLR'2021 [Paper] [Code]

  • Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup
    Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim
    IEEE Communications Letters'2020 [Paper]

  • StatMix: Data augmentation method that relies on image statistics in federated learning
    Dominik Lewy, Jacek Mańdziuk, Maria Ganzha, Marcin Paprzycki
    ICONIP'2022 [Paper]

(back to top)

Adversarial Attack and Adversarial Training

  • Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
    Alfred Laugros, Alice Caplier, Matthieu Ospici
    ECCV'2020 [Paper]

  • Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
    Tianyu Pang, Kun Xu, Jun Zhu
    ICLR'2020 [Paper] [Code]

  • Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
    Saehyung Lee, Hyungyu Lee, Sungroh Yoon
    CVPR'2020 [Paper] [Code]

  • Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup
    Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, Xuan Li
    EMNLP'2021 [Paper]

  • Adversarially Optimized Mixup for Robust Classification
    Jason Bunk, Srinjoy Chattopadhyay, B. S. Manjunath, Shivkumar Chandrasekaran
    arXiv'2021 [Paper]

  • Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning
    Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun
    ACL'2021 [Paper]

  • Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy
    Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Juho Kannala, Yoshua Bengio
    NN'2021 [Paper]

  • Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction
    Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen, Qi Zhu
    ICCV'2023 [Paper]

  • Mixup as directional adversarial training
    Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang
    NIPS'2019 [Paper] [Code]

  • On the benefits of defining vicinal distributions in latent space
    Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, Vineeth N Balasubramanian
    CVPRW'2021 [Paper]

(back to top)

Domain Adaptation

  • Mix-up Self-Supervised Learning for Contrast-agnostic Applications
    Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann
    ICDE'2022 [Paper]

  • Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing
    Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das
    NIPS'2021 [Paper] [Code]

  • Virtual Mixup Training for Unsupervised Domain Adaptation
    Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, Qing Li
    arXiv'2019 [Paper] [Code]

  • Improve Unsupervised Domain Adaptation with Mixup Training
    Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, Liu Ren
    arXiv'2020 [Paper]

  • Adversarial Domain Adaptation with Domain Mixup
    Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, Wenjun Zhang
    AAAI'2020 [Paper] [Code]

  • Dual Mixup Regularized Learning for Adversarial Domain Adaptation
    Yuan Wu, Diana Inkpen, Ahmed El-Roby
    ECCV'2020 [Paper] [Code]

  • Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation
    Aadarsh Sahoo, Rameswar Panda, Rogerio Feris, Kate Saenko, Abir Das
    WACV'2023 [Paper] [Code]

  • Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation
    Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan
    MICCAI'2023 [Paper] [Code]

(back to top)

Knowledge Distillation

  • MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps
    Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li
    NIPS'2021 [Paper]

  • MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition
    Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang
    ECCV'2022 [Paper]

  • Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study
    Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga
    WACV'2023 [Paper]

(back to top)

Multi-Modal

  • MixGen: A New Multi-Modal Data Augmentation
    Xiaoshuai Hao, Yi Zhu, Srikar Appalaraju, Aston Zhang, Wanqian Zhang, Bo Li, Mu Li
    arXiv'2023 [Paper] [Code]

  • VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix
    Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo
    ICML'2022 [Paper]

    VLMixer Framework

  • Geodesic Multi-Modal Mixup for Robust Fine-Tuning
    Changdae Oh, Junhyuk So, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song
    NIPS'2023 [Paper] [Code]

  • PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis
    Efthymios Georgiou, Yannis Avrithis, Alexandros Potamianos
    arXiv'2023 [Paper]

    PowMix Framework

  • Enhance image classification via inter-class image mixup with diffusion model
    Zhicai Wang, Longhui Wei, Tan Wang, Heyu Chen, Yanbin Hao, Xiang Wang, Xiangnan He, Qi Tian
    CVPR'2024 [Paper] [Code]

  • Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation
    Keji He, Chenyang Si, Zhihe Lu, Yan Huang, Liang Wang, Xinchao Wang
    NIPS'2023 [Paper] [Code]

(back to top)

Beyond Vision

NLP

  • Augmenting Data with Mixup for Sentence Classification: An Empirical Study
    Hongyu Guo, Yongyi Mao, Richong Zhang
    arXiv'2019 [Paper] [Code]

  • Adversarial Mixing Policy for Relaxing Locally Linear Constraints in Mixup
    Guang Liu, Yuzhao Mao, Hailong Huang, Weiguo Gao, Xuan Li
    EMNLP'2021 [Paper]

  • SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup
    Rongzhi Zhang, Yue Yu, Chao Zhang
    EMNLP'2020 [Paper] [Code]

  • Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks
    Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S. Yu, Lifang He
    COLING'2020 [Paper]

  • Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data
    Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang
    EMNLP'2020 [Paper] [Code]

  • Augmenting NLP Models using Latent Feature Interpolations
    Amit Jindal, Arijit Ghosh Chowdhury, Aniket Didolkar, Di Jin, Ramit Sawhney, Rajiv Ratn Shah
    COLING'2020 [Paper]

  • MixText: Linguistically-informed Interpolation of Hidden Space for Semi-Supervised Text Classification
    Jiaao Chen, Zichao Yang, Diyi Yang
    ACL'2020 [Paper] [Code]

  • Sequence-Level Mixed Sample Data Augmentation
    Demi Guo, Yoon Kim, Alexander M. Rush
    EMNLP'2020 [Paper] [Code]

  • AdvAug: Robust Adversarial Augmentation for Neural Machine Translation
    Yong Cheng, Lu Jiang, Wolfgang Macherey, Jacob Eisenstein
    ACL'2020 [Paper] [Code]

  • Local Additivity Based Data Augmentation for Semi-supervised NER
    Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang
    EMNLP'2020 [Paper] [Code]

  • Mixup Decoding for Diverse Machine Translation
    Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, Haifeng Wang
    EMNLP'2021 [Paper]

  • TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding
    Le Zhang, Zichao Yang, Diyi Yang
    NAACL'2022 [Paper] [Code]

  • STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation
    Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang
    ACL'2022 [Paper] [Code]

  • AdMix: A Mixed Sample Data Augmentation Method for Neural Machine Translation
    Chang Jin, Shigui Qiu, Nini Xiao, Hao Jia
    IJCAI'2022 [Paper]

  • Enhancing Cross-lingual Transfer by Manifold Mixup
    Huiyun Yang, Huadong Chen, Hao Zhou, Lei Li
    ICLR'2022 [Paper] [Code]

  • Multilingual Mix: Example Interpolation Improves Multilingual Neural Machine Translation
    Yong Cheng, Ankur Bapna, Orhan Firat, Yuan Cao, Pidong Wang, Wolfgang Macherey
    ACL'2022 [Paper]
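
A recurring ingredient in the NLP entries above (e.g. latent-feature interpolation, Mixup-Transformer, MixText) is mixing in embedding space rather than on raw tokens: two sentence representations and their label vectors are interpolated with the same Beta-sampled coefficient. A hedged NumPy sketch (`embedding_mixup` is an illustrative name, not any paper's API):

```python
import numpy as np

def embedding_mixup(h_a, h_b, y_a, y_b, alpha=0.2, rng=None):
    """Interpolate two sentence representations and their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    h = lam * h_a + (1.0 - lam) * h_b     # mix hidden representations
    y = lam * y_a + (1.0 - lam) * y_b     # mix label distributions
    return h, y, lam
```

Operating on hidden states sidesteps the fact that discrete token sequences cannot be linearly interpolated.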

(back to top)

GNN

  • Node Augmentation Methods for Graph Neural Network based Object Classification
    Yifan Xue, Yixuan Liao, Xiaoxin Chen, Jingwei Zhao
    CDS'2021 [Paper]

  • Mixup for Node and Graph Classification
    Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Bryan Hooi
    WWW'2021 [Paper] [Code]

  • Graph Mixed Random Network Based on PageRank
    Qianli Ma, Zheng Fan, Chenzhi Wang, Hongye Tan
    Symmetry'2022 [Paper]

  • GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks
    Tianxiang Zhao, Xiang Zhang, Suhang Wang
    WSDM'2021 [Paper]

  • GraphMix: Improved Training of GNNs for Semi-Supervised Learning
    Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
    AAAI'2021 [Paper] [Code]

  • GraphMixup: Improving Class-Imbalanced Node Classification on Graphs by Self-supervised Context Prediction
    Lirong Wu, Haitao Lin, Zhangyang Gao, Cheng Tan, Stan Z. Li
    ECML-PKDD'2022 [Paper] [Code]

  • Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation
    Joonhyung Park, Hajin Shim, Eunho Yang
    AAAI'2022 [Paper] [Code]

  • G-Mixup: Graph Data Augmentation for Graph Classification
    Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
    ICML'2022 [Paper]

  • Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications
    Xinyu Ma, Xu Chu, Yasha Wang, Yang Lin, Junfeng Zhao, Liantao Ma, Wenwu Zhu
    NIPS'2023 [Paper] [Code]

  • iGraphMix: Input Graph Mixup Method for Node Classification
    Jongwon Jeong, Hoyeop Lee, Hyui Geon Yoon, Beomyoung Lee, Junhee Heo, Geonsoo Kim, Kim Jin Seon
    ICLR'2024 [Paper]

(back to top)

3D Point

  • PointMixup: Augmentation for Point Clouds
    Yunlu Chen, Vincent Tao Hu, Efstratios Gavves, Thomas Mensink, Pascal Mettes, Pengwan Yang, Cees G.M. Snoek
    ECCV'2020 [Paper] [Code]

  • PointCutMix: Regularization Strategy for Point Cloud Classification
    Jinlai Zhang, Lyujie Chen, Bo Ouyang, Binbin Liu, Jihong Zhu, Yujing Chen, Yanmei Meng, Danfeng Wu
    Neurocomputing'2022 [Paper] [Code]

  • Regularization Strategy for Point Cloud via Rigidly Mixed Sample
    Dogyoon Lee, Jaeha Lee, Junhyeop Lee, Hyeongmin Lee, Minhyeok Lee, Sungmin Woo, Sangyoun Lee
    CVPR'2021 [Paper] [Code]

  • Part-Aware Data Augmentation for 3D Object Detection in Point Cloud
    Jaeseok Choi, Yeji Song, Nojun Kwak
    IROS'2021 [Paper] [Code]

  • Point MixSwap: Attentional Point Cloud Mixing via Swapping Matched Structural Divisions
    Ardian Umam, Cheng-Kun Yang, Yung-Yu Chuang, Jen-Hui Chuang, Yen-Yu Lin
    ECCV'2022 [Paper] [Code]
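
Point clouds are unordered, so interpolating two clouds requires deciding which point pairs with which: PointMixup computes an optimal assignment, while later variants cut and swap structural regions. The sketch below uses naive random pairing purely to illustrate the interpolation step (`point_mixup` is an illustrative helper, not the papers' method):

```python
import numpy as np

def point_mixup(pc_a, pc_b, lam, rng=None):
    """Interpolate two point clouds of equal size N after random pairing.

    pc_*: (N, 3) arrays. PointMixup proper pairs points via an optimal
    assignment (earth mover's distance); random pairing is a rough stand-in.
    """
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(pc_b))          # random one-to-one pairing
    return lam * pc_a + (1.0 - lam) * pc_b[perm]
```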

(back to top)

Other

  • Embedding Expansion: Augmentation in Embedding Space for Deep Metric Learning
    Byungsoo Ko, Geonmo Gu
    CVPR'2020 [Paper] [Code]

  • SalfMix: A Novel Single Image-Based Data Augmentation Technique Using a Saliency Map
    Jaehyeop Choi, Chaehyeon Lee, Donggyu Lee, Heechul Jung
    Sensor'2021 [Paper]

  • Octave Mix: Data Augmentation Using Frequency Decomposition for Activity Recognition
    Tatsuhito Hasegawa
    IEEE Access'2021 [Paper]

  • Guided Interpolation for Adversarial Training
    Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama
    arXiv'2021 [Paper]

  • Recall@k Surrogate Loss with Large Batches and Similarity Mixup
    Yash Patel, Giorgos Tolias, Jiri Matas
    CVPR'2022 [Paper] [Code]

  • Contrastive-mixup Learning for Improved Speaker Verification
    Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke
    ICASSP'2022 [Paper]

  • Noisy Feature Mixup
    Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney
    ICLR'2022 [Paper] [Code]

  • It Takes Two to Tango: Mixup for Deep Metric Learning
    Shashanka Venkataramanan, Bill Psomas, Ewa Kijak, Laurent Amsaleg, Konstantinos Karantzalos, Yannis Avrithis
    ICLR'2022 [Paper] [Code]

  • Representational Continuity for Unsupervised Continual Learning
    Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang
    ICLR'2022 [Paper] [Code]

  • Expeditious Saliency-guided Mix-up through Random Gradient Thresholding
    Remy Sun, Clement Masson, Gilles Henaff, Nicolas Thome, Matthieu Cord
    ICPR'2022 [Paper]

  • Guarding Barlow Twins Against Overfitting with Mixed Samples
    Wele Gedara Chaminda Bandara, Celso M. De Melo, Vishal M. Patel
    arXiv'2023 [Paper] [Code]

  • Infinite Class Mixup
    Thomas Mensink, Pascal Mettes
    arXiv'2023 [Paper]

  • Semantic Equivariant Mixup
    Zongbo Han, Tianchi Xie, Bingzhe Wu, Qinghua Hu, Changqing Zhang
    arXiv'2023 [Paper]

  • G-Mix: A Generalized Mixup Learning Framework Towards Flat Minima
    Xingyu Li, Bo Tang
    arXiv'2023 [Paper]

  • Inter-Instance Similarity Modeling for Contrastive Learning
    Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang
    arXiv'2023 [Paper] [Code]

  • Single-channel speech enhancement using learnable loss mixup
    Oscar Chang, Dung N. Tran, Kazuhito Koishida
    arXiv'2023 [Paper]

  • Selective Volume Mixup for Video Action Recognition
    Yi Tan, Zhaofan Qiu, Yanbin Hao, Ting Yao, Xiangnan He, Tao Mei
    arXiv'2023 [Paper]

  • Rethinking Data Augmentation for Image Super-resolution: A Comprehensive Analysis and a New Strategy
    Jaejun Yoo, Namhyuk Ahn, Kyung-Ah Sohn
    CVPR'2020 & IJCV'2024 [Paper] [Code]

  • DNABERT-S: Learning Species-Aware DNA Embedding with Genome Foundation Models
    Zhihan Zhou, Weimin Wu, Harrison Ho, Jiayi Wang, Lizhen Shi, Ramana V Davuluri, Zhong Wang, Han Liu
    arXiv'2024 [Paper] [Code]

  • ContextMix: A context-aware data augmentation method for industrial visual inspection systems
    Hyungmin Kim, Donghun Kim, Pyunghwan Ahn, Sungho Suh, Hansang Cho, Junmo Kim
    EAAI'2024 [Paper]

  • Robust Image Denoising through Adversarial Frequency Mixup
    Donghun Ryou, Inju Ha, Hyewon Yoo, Dongwan Kim, Bohyung Han
    CVPR'2024 [Paper] [Code]

(back to top)

Analysis and Theorem

  • Understanding Mixup Training Methods
    Daojun Liang, Feng Yang, Tian Zhang, Peter Yang
    NIPS'2019 [Paper]

  • MixUp as Locally Linear Out-Of-Manifold Regularization
    Hongyu Guo, Yongyi Mao, Richong Zhang
    AAAI'2019 [Paper]

  • MixUp as Directional Adversarial Training
    Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang
    NIPS'2019 [Paper]

  • On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks
    Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak
    NIPS'2019 [Paper] [Code]

  • On Mixup Regularization
    Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert
    arXiv'2020 [Paper]

  • Mixup Training as the Complexity Reduction
    Masanari Kimura
    arXiv'2021 [Paper]

  • How Does Mixup Help With Robustness and Generalization
    Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou
    ICLR'2021 [Paper]

  • Mixup Without Hesitation
    Hao Yu, Huanyu Wang, Jianxin Wu
    ICIG'2022 [Paper] [Code]

  • RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness
    Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania
    NIPS'2022 [Paper] [Code]

  • A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective
    Chanwoo Park, Sangdoo Yun, Sanghyuk Chun
    NIPS'2022 [Paper] [Code]

  • Towards Understanding the Data Dependency of Mixup-style Training
    Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge
    ICLR'2022 [Paper] [Code]

  • When and How Mixup Improves Calibration
    Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou
    ICML'2022 [Paper]

  • Provable Benefit of Mixup for Finding Optimal Decision Boundaries
    Junsoo Oh, Chulhee Yun
    ICML'2023 [Paper]

  • On the Pitfall of Mixup for Uncertainty Calibration
    Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang
    CVPR'2023 [Paper]

  • Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study
    Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga
    WACV'2023 [Paper] [Code]

  • Over-Training with Mixup May Hurt Generalization
    Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao
    ICLR'2023 [Paper]

  • Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
    Soyoun Won, Sung-Ho Bae, Seong Tae Kim
    arXiv'2023 [Paper]

  • Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup
    Damien Teney, Jindong Wang, Ehsan Abbasnejad
    ICML'2024 [Paper]

  • Pushing Boundaries: Mixup's Influence on Neural Collapse
    Quinn Fisher, Haoming Meng, Vardan Papyan
    ICLR'2024 [Paper]

(back to top)

Survey

  • A survey on Image Data Augmentation for Deep Learning
    Connor Shorten and Taghi Khoshgoftaar
    Journal of Big Data'2019 [Paper]

  • An overview of mixing augmentation methods and augmentation strategies
    Dominik Lewy and Jacek Mańdziuk
    Artificial Intelligence Review'2022 [Paper]

  • Image Data Augmentation for Deep Learning: A Survey
    Suorong Yang, Weikang Xiao, Mengcheng Zhang, Suhan Guo, Jian Zhao, Furao Shen
    arXiv'2022 [Paper]

  • A Survey of Mix-based Data Augmentation: Taxonomy, Methods, Applications, and Explainability
    Chengtai Cao, Fan Zhou, Yurou Dai, Jianping Wang
    arXiv'2022 [Paper] [Code]

  • A Survey of Automated Data Augmentation for Image Classification: Learning to Compose, Mix, and Generate
    Tsz-Him Cheung, Dit-Yan Yeung
    IEEE TNNLS'2023 [Paper]

  • Survey: Image Mixing and Deleting for Data Augmentation
    Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian
    EAAI'2024 [Paper]

  • A Survey on Mixup Augmentations and Beyond
    Xin Jin, Hongyu Zhu, Siyuan Li, Zedong Wang, Zecheng Liu, Chang Yu, Huafeng Qin, Stan. Z. Li
    arXiv'2024 [Paper]

Benchmark

  • OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification
    Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin, Stan Z. Li
    arXiv'2024 [Paper] [Code]

(back to top)

Classification Results on Datasets

Classification results of mixup methods on standard datasets: CIFAR10/CIFAR100, Tiny-ImageNet, and ImageNet-1K. $(\cdot)$ denotes training epochs; backbones are ResNet18 (R18), ResNet50 (R50), ResNeXt50 (RX50), PreActResNet18 (PreActR18), and Wide-ResNet28 (WRN28-10, WRN28-8).

Method Publish CIFAR10 CIFAR100 CIFAR100 CIFAR100 CIFAR100 CIFAR100 Tiny-ImageNet Tiny-ImageNet ImageNet-1K ImageNet-1K
R18 R18 RX50 PreActR18 WRN28-10 WRN28-8 R18 RX50 R18 R50
MixUp ICLR'2018 96.62(800) 79.12(800) 82.10(800) 78.90(200) 82.50(200) 82.82(400) 63.86(400) 66.36(400) 69.98(100) 77.12(100)
CutMix ICCV'2019 96.68(800) 78.17(800) 78.32(800) 76.80(1200) 83.40(200) 84.45(400) 65.53(400) 66.47(400) 68.95(100) 77.17(100)
Manifold Mixup ICML'2019 96.71(800) 80.35(800) 82.88(800) 79.66(1200) 81.96(1200) 83.24(400) 64.15(400) 67.30(400) 69.98(100) 77.01(100)
FMix arXiv'2020 96.18(800) 79.69(800) 79.02(800) 79.85(200) 82.03(200) 84.21(400) 63.47(400) 65.08(400) 69.96(100) 77.19(100)
SmoothMix CVPRW'2020 96.17(800) 78.69(800) 78.95(800) - - 82.09(400) - - - 77.66(300)
GridMix PR'2020 96.56(800) 78.72(800) 78.90(800) - - 84.24(400) 64.79(400) - - -
ResizeMix arXiv'2020 96.76(800) 80.01(800) 80.35(800) - 85.23(200) 84.87(400) 63.47(400) 65.87(400) 69.50(100) 77.42(100)
SaliencyMix ICLR'2021 96.20(800) 79.12(800) 78.77(800) 80.31(300) 83.44(200) 84.35(400) 64.60(400) 66.55(400) 69.16(100) 77.14(100)
Attentive-CutMix ICASSP'2020 96.63(800) 78.91(800) 80.54(800) - - 84.34(400) 64.01(400) 66.84(400) - 77.46(100)
Saliency Grafting AAAI'2022 - 80.83(800) 83.10(800) - 84.68(300) - 64.84(600) 67.83(400) - 77.65(100)
PuzzleMix ICML'2020 97.10(800) 81.13(800) 82.85(800) 80.38(1200) 84.05(200) 85.02(400) 65.81(400) 67.83(400) 70.12(100) 77.54(100)
Co-Mix ICLR'2021 97.15(800) 81.17(800) 82.91(800) 80.13(300) - 85.05(400) 65.92(400) 68.02(400) - 77.61(100)
SuperMix CVPR'2021 - - - 79.07(2000) 93.60(600) - - - - 77.60(600)
RecursiveMix NIPS'2022 - 81.36(200) - 80.58(2000) - - - - - 79.20(300)
AutoMix ECCV'2022 97.34(800) 82.04(800) 83.64(800) - - 85.18(400) 67.33(400) 70.72(400) 70.50(100) 77.91(100)
SAMix arXiv'2021 97.50(800) 82.30(800) 84.42(800) - - 85.50(400) 68.89(400) 72.18(400) 70.83(100) 78.06(100)
AlignMixup CVPR'2022 - - - 81.71(2000) - - - - - 78.00(100)
MultiMix NIPS'2023 - - - 81.82(2000) - - - - - 78.81(300)
GuidedMixup AAAI'2023 - - - 81.20(300) 84.02(200) - - - - 77.53(100)
Catch-up Mix AAAI'2023 - 82.10(400) 83.56(400) 82.24(2000) - - 68.84(400) - - 78.71(300)
LGCOAMix TIP'2024 - 82.34(800) 84.11(800) - - - 68.27(400) 73.08(400) - -
AdAutoMix ICLR'2024 97.55(800) 82.32(800) 84.42(800) - - 85.32(400) 69.19(400) 72.89(400) 70.86(100) 78.04(100)
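
Every method in the table above refines the original input mixup (MixUp, ICLR'2018), which takes a convex combination of two images and their one-hot labels with a Beta-sampled coefficient. A minimal batch-level sketch, with NumPy standing in for PyTorch tensors (`mixup_batch` is an illustrative helper, not code from any listed repository):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Mix each sample with a randomly chosen partner from the same batch.

    x: (B, ...) inputs; y: (B, K) one-hot labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # single coefficient for the batch
    idx = rng.permutation(len(x))         # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y + (1.0 - lam) * y[idx]
    return x_mix, y_mix, lam
```

The later rows of the table differ mainly in how the pairing, the mask, or the label weights are chosen (saliency, attention, learned policies), not in this basic recipe.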

Classification results of mixup methods on ImageNet-1K with ViT-style backbones: DeiT, Swin Transformer (Swin), Pyramid Vision Transformer (PVT), and ConvNeXt, each trained for 300 epochs.

Method Publish ImageNet-1K ImageNet-1K ImageNet-1K ImageNet-1K ImageNet-1K ImageNet-1K ImageNet-1K
DeiT-Tiny DeiT-Small DeiT-Base Swin-Tiny PVT-Tiny PVT-Small ConvNeXt-Tiny
MixUp ICLR'2018 74.69 77.72 78.98 81.01 75.24 78.69 80.88
CutMix ICCV'2019 74.23 80.13 81.61 81.23 75.53 79.64 81.57
FMix arXiv'2020 74.41 77.37 - 79.60 75.28 78.72 81.04
ResizeMix arXiv'2020 74.79 78.61 80.89 81.36 76.05 79.55 81.64
SaliencyMix ICLR'2021 74.17 79.88 80.72 81.37 75.71 79.69 81.33
Attentive-CutMix ICASSP'2020 74.07 80.32 82.42 81.29 74.98 79.84 81.14
PuzzleMix ICML'2020 73.85 80.45 81.63 81.47 75.48 79.70 81.48
AutoMix ECCV'2022 75.52 80.78 82.18 81.80 76.38 80.64 82.28
SAMix arXiv'2021 75.83 80.94 82.85 81.87 76.60 80.78 82.35
TransMix CVPR'2022 74.56 80.68 82.51 81.80 75.50 80.50 -
TokenMix ECCV'2022 75.31 80.80 82.90 81.60 75.60 - 73.97
TL-Align ICCV'2023 73.20 80.60 82.30 81.40 75.50 80.40 -
SMMix ICCV'2023 75.56 81.10 82.90 81.80 75.60 81.03 -
MixPro ICLR'2023 73.80 81.30 82.90 82.80 76.70 81.20 -
LUMix ICASSP'2024 - 80.60 80.20 81.70 - - 82.50
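
Several transformer-oriented entries above (e.g. TransMix, TokenMix, SMMix) revise the label weighting of CutMix, which pastes a random box from one image into another and sets the mixing ratio to the area fraction kept. A rough NumPy sketch of the box step, following the square-root area scaling of the original paper (`cutmix` is an illustrative helper):

```python
import numpy as np

def cutmix(x_a, x_b, alpha=1.0, rng=None):
    """Paste a random box from image B into image A ((H, W) or (H, W, C))."""
    rng = rng or np.random.default_rng()
    h, w = x_a.shape[:2]
    lam = rng.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)          # random box center
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    out = x_a.copy()
    out[y1:y2, x1:x2] = x_b[y1:y2, x1:x2]
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)    # area actually kept from A
    return out, lam_adj
```

The adjusted `lam_adj` (rather than the sampled `lam`) weights the two labels, since clipping at the border can shrink the box; the ViT variants above further reweight it with attention or token-level evidence.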

(back to top)

Related Datasets Link

Summary of datasets used in mixup-method tasks. Links to the dataset websites are provided.

Dataset Type Label Task Total data number Link
MNIST Image 10 Classification 70,000 MNIST
Fashion-MNIST Image 10 Classification 70,000 Fashion-MNIST
CIFAR10 Image 10 Classification 60,000 CIFAR10
CIFAR100 Image 100 Classification 60,000 CIFAR100
SVHN Image 10 Classification 630,420 SVHN
GTSRB Image 43 Classification 51,839 GTSRB
STL10 Image 10 Classification 113,000 STL10
Tiny-ImageNet Image 200 Classification 100,000 Tiny-ImageNet
ImageNet-1K Image 1,000 Classification 1,431,167 ImageNet-1K
CUB-200-2011 Image 200 Classification, Object Detection 11,788 CUB-200-2011
FGVC-Aircraft Image 102 Classification 10,200 FGVC-Aircraft
StanfordCars Image 196 Classification 16,185 StanfordCars
Oxford Flowers Image 102 Classification 8,189 Oxford Flowers
Caltech101 Image 101 Classification 9,000 Caltech101
SOP Image 22,634 Classification 120,053 SOP
Food-101 Image 101 Classification 101,000 Food-101
SUN397 Image 899 Classification 130,519 SUN397
iNaturalist Image 5,089 Classification 675,170 iNaturalist
CIFAR-C Image 10/100 Corruption Classification 60,000 CIFAR-C
CIFAR-LT Image 10/100 Long-tail Classification 60,000 CIFAR-LT
ImageNet-1K-C Image 1,000 Corruption Classification 1,431,167 ImageNet-1K-C
ImageNet-A Image 200 Classification 7,500 ImageNet-A
Pascal VOC 2012 Image 20 Object Detection 33,043 Pascal VOC 2012
MS-COCO Detection Image 91 Object Detection 164,062 MS-COCO Detection
DSprites Image 737,280*6 Disentanglement 737,280 DSprites
Place205 Image 205 Recognition 2,500,000 Place205
Pascal Context Image 459 Segmentation 10,103 Pascal Context
ADE20K Image 3,169 Segmentation 25,210 ADE20K
Cityscapes Image 19 Segmentation 5,000 Cityscapes
StreetHazards Image 12 Segmentation 7,656 StreetHazards
PACS Image 7*4 Domain Classification 9,991 PACS
BRACS Medical Image 7 Classification 4,539 BRACS
BACH Medical Image 4 Classification 400 BACH
CAMELYON16 Medical Image 2 Anomaly Detection 360 CAMELYON16
Chest X-Ray Medical Image 2 Anomaly Detection 5,856 Chest X-Ray
BCCD Medical Image 4,888 Object Detection 364 BCCD
TJU600 Palm-Vein Image 600 Classification 12,000 TJU600
VERA220 Palm-Vein Image 220 Classification 2,200 VERA220
CoNLL2003 Text 4 Classification 2,302 CoNLL2003
20 Newsgroups Text 20 OOD Detection 20,000 20 Newsgroups
WOS Text 134 OOD Detection 46,985 WOS
SST-2 Text 2 Sentiment Understanding 68,800 SST-2
Cora Graph 7 Node Classification 2,708 Cora
Citeseer Graph 6 Node Classification 3,312 Citeseer
PubMed Graph 3 Node Classification 19,717 PubMed
BlogCatalog Graph 39 Node Classification 10,312 BlogCatalog
Google Commands Speech 30 Classification 65,000 Google Commands
VoxCeleb2 Speech 6,112 Sound Classification 1,000,000+ VoxCeleb2
VCTK Speech 110 Enhancement 44,000 VCTK
ModelNet40 3D Point Cloud 40 Classification 12,311 ModelNet40
ScanObjectNN 3D Point Cloud 15 Classification 15,000 ScanObjectNN
ShapeNet 3D Point Cloud 16 Recognition, Classification 16,880 ShapeNet
KITTI360 3D Point Cloud 80,256 Detection, Segmentation 14,999 KITTI360
UCF101 Video 101 Action Recognition 13,320 UCF101
Kinetics400 Video 400 Action Recognition 260,000 Kinetics400
Airfoil Tabular - Regression 1,503 Airfoil
NO2 Tabular - Regression 500 NO2
Exchange-Rate Timeseries - Regression 7,409 Exchange-Rate
Electricity Timeseries - Regression 26,113 Electricity

(back to top)

Contribution

Feel free to send pull requests that add more papers using the following Markdown format. Note that the abbreviation, the code link, and the figure link are optional attributes.

* **TITLE**<br>
*AUTHOR*<br>
PUBLISH'YEAR [[Paper](link)] [[Code](link)]
   <details close>
   <summary>ABBREVIATION Framework</summary>
   <p align="center"><img width="90%" src="link_to_image" /></p>
   </details>

Citation

If our work has contributed to your research, please consider citing it. Thanks! 🥰

@article{jin2024survey,
  title={A Survey on Mixup Augmentations and Beyond},
  author={Jin, Xin and Zhu, Hongyu and Li, Siyuan and Wang, Zedong and Liu, Zicheng and Yu, Chang and Qin, Huafeng and Li, Stan Z},
  journal={arXiv preprint arXiv:2409.05202},
  year={2024}
}

Current contributors include: Siyuan Li (@Lupin1998), Xin Jin (@JinXins), Zicheng Liu (@pone7), and Zedong Wang (@Jacky1128). We thank all contributors for Awesome-Mixup!

(back to top)

License

This project is released under the Apache 2.0 license.

Acknowledgement

This repository is built using the OpenMixup library and Awesome README repository.

Related Project