This is a repository for organizing articles related to domain generalization, OOD, optimization, data-centric learning, prompt learning, robustness, and causality. Most papers are linked to my reading notes. Feel free to visit my personal homepage and contact me for collaboration and discussion.
I am a first-year Ph.D. student at the State Key Laboratory of Pattern Recognition, University of Chinese Academy of Sciences, advised by Prof. Tieniu Tan. I have also spent time at Microsoft, advised by Prof. Jingdong Wang.
- Our paper Towards Principled Disentanglement for Domain Generalization has been accepted to CVPR 2022. 😊 [Reading Notes] [Code] [Paper]
- Domain generalization/OOD papers on ICLR 2022 have been updated.
- Implicit Neural Representation (INR) papers on 2D images have been updated.
- Generalization/OOD
- Robustness/Adaptation/Fairness
- Causality
- Data-Centric/Prompt
- Optimization/GNN/Energy/Others
- CVPR (CMU) Towards Principled Disentanglement for Domain Generalization (applies disentanglement to DG; new theory and a new method)
- ICLR Oral A Fine-Grained Analysis on Distribution Shift (how to precisely define distribution shift and systematically measure model robustness)
- ICLR Oral Fine-Tuning Distorts Pretrained Features and Underperforms Out-of-Distribution (fine-tuning and linear probing complement each other)
- ICLR Spotlight Towards a Unified View of Parameter-Efficient Transfer Learning (a unified framework for parameter-efficient fine-tuning)
- ICLR Spotlight How Do Vision Transformers Work? (desirable properties of Vision Transformers (ViTs))
- ICLR Spotlight On Predicting Generalization using GANs (predicts test error with a GAN trained on source-domain data)
- ICLR Poster Uncertainty Modeling for Out-of-Distribution Generalization (models feature uncertainty in domain generalization; a new data-augmentation method)
- ICLR Poster Gradient Matching for Domain Generalization (encourages larger inner products between gradients from different domains)
- ICCV CrossNorm and SelfNorm for Generalization under Distribution Shifts (a conceptually simple regularization technique for DG)
- ICCV A Style and Semantic Memory Mechanism for Domain Generalization (exploits intra-domain style invariance to improve generalization)
- Arxiv: Towards a Theoretical Framework of Out-of-Distribution Generalization (new theory)
- Arxiv (Yoshua Bengio) Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization (OOD generalization meets the information bottleneck)
- Arxiv Generalization of Reinforcement Learning with Policy-Aware Adversarial Data Augmentation
- Arxiv Embracing the Dark Knowledge: Domain Generalization Using Regularized Knowledge Distillation (uses knowledge distillation as a regularizer)
- Arxiv Delving Deep into the Generalization of Vision Transformers under Distribution Shifts (a study of vision Transformers' generalization)
- Arxiv Training Data Subset Selection for Regression with Controlled Generalization Error (selects a subset of the training instances while maintaining comparable generalization)
- Arxiv (MIT) Measuring Generalization with Optimal Transport (a theoretical study of network complexity and generalization)
- Arxiv (SJTU) OoD-Bench: Benchmarking and Understanding Out-of-Distribution Generalization Datasets and Algorithms (shows that OOD evaluation protocols are still immature and proposes a benchmark)
- Arxiv (Tsinghua) Domain-Irrelevant Representation Learning for Unsupervised Domain Generalization (a new task, unsupervised DG: source-domain labels are unavailable)
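Several entries above (e.g., Gradient Matching for Domain Generalization) share one simple objective: encourage gradients computed on different domains to point in the same direction. A minimal numpy sketch of that inner-product penalty, assuming a linear model with squared loss (all names here are illustrative, not taken from any paper's code):

```python
import numpy as np

def domain_grad(w, X, y):
    """Gradient of the mean squared error 0.5*||Xw - y||^2 / n w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

def grad_matching_penalty(w, domains):
    """Negative mean pairwise inner product of per-domain gradients.
    Minimizing this encourages gradients from different domains to align."""
    grads = [domain_grad(w, X, y) for X, y in domains]
    total, count = 0.0, 0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            total += grads[i] @ grads[j]
            count += 1
    return -total / count

rng = np.random.default_rng(0)
w = rng.normal(size=3)
domains = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
penalty = grad_matching_penalty(w, domains)
```

This penalty would be added to the usual averaged risk with a trade-off weight; when two "domains" are identical the gradients coincide and the penalty is non-positive.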
- ICML Oral: Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? (a lottery-ticket-style search for subnetworks that generalize better)
- ICML Oral: Domain Generalization using Causal Matching (contrastive-loss feature alignment plus a feature-invariance constraint)
- ICML Oral: Just Train Twice: Improving Group Robustness without Training Group Information
- ICML Spotlight: Environment Inference for Invariant Learning (how to learn domain-invariant features without domain labels?)
- ICLR Poster: Understanding the failure modes of out-of-distribution generalization (two causes of OOD failure)
- ICLR Poster: An Empirical Study of Invariant Risk Minimization (an empirical study of IRM, e.g., how the diversity of the observed domains affects IRM's performance)
- ICLR Poster In Search of Lost Domain Generalization (a method without model selection is not a good method; how to select models on a validation set?)
- ICLR Poster Modeling the Second Player in Distributionally Robust Optimization (models the DRO uncertainty set with adversarial learning)
- ICLR Poster Learning perturbation sets for robust machine learning (learns perturbation sets with generative models)
- ICLR Spotlight (Yoshua Bengio) Systematic generalisation with group invariant predictions (splits each class into different domains via environment inference, then constrains features to be consistent across domains to avoid spurious dependencies)
- CVPR Oral: Reducing Domain Gap by Reducing Style Bias (treats channel-wise means as image style and reduces CNNs' reliance on it)
- AISTATS Linear Regression Games: Convergence Guarantees to Approximate Out-of-Distribution Solutions
- AISTATS Oral Does Invariant Risk Minimization Capture Invariance (IRM truly captures invariant features only under specific conditions)
- NeurIPS Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests (uses causal tools to design a practical algorithm that connects counterfactual reasoning with OOD generalization for effective "stress tests", e.g., flipping the gender information in a sentence and checking whether the sentiment prediction changes)
- NeurIPS Adaptive Risk Minimization: Learning to Adapt to Domain Shift (leverages unlabeled data to better handle the distribution shift caused by new domains)
- NeurIPS An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers (measures based on domain-adaptation theory fail to accurately capture OOD generalization behavior)
- NeurIPS Spotlight On Inductive Biases for Heterogeneous Treatment Effect Estimation (uses causal tools to design a practical algorithm that connects counterfactual reasoning with OOD generalization)
- NeurIPS Spotlight Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization (still updates the model's linear head at test time)
- NeurIPS Why Do Better Loss Functions Lead to Less Transferable Features? (studies how the choice of training objective affects the transferability of CNNs trained on ImageNet)
- Arxiv I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models (treats domain adaptation as a causal-graph reasoning problem)
- Arxiv (Stanford) Distributionally Robust Losses for Latent Covariate Mixtures.
- NeurIPS Energy-based Out-of-distribution Detection (detects OOD samples with energy-based models)
- NeurIPS Fairness without demographics through adversarially reweighted learning (uses adversarial learning to reweight hard examples so that the reweighted samples incur a larger classifier loss)
- NeurIPS Self-training Avoids Using Spurious Features Under Domain Shift (training on unlabeled target-domain data helps avoid spurious features)
- NeurIPS What shapes feature representations? Exploring datasets, architectures, and training (simplicity bias: neural networks tend to fit "easy" features)
- Arxiv Invariant Risk Minimization (the seminal work: moving beyond empirical risk minimization to invariant risk minimization)
- ICLR Poster The Risks of Invariant Risk Minimization (a flaw of invariant risk minimization: IRM fails when there are too few domains)
- ICLR Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization (GroupDRO: DRO with strong regularization)
- ICML An investigation of why overparameterization exacerbates spurious correlations (overparameterization is a key reason that networks exploit spurious correlations)
- ICML UDA workshop Learning Robust Representations with Score Invariant Learning (non-normalized statistical models: OOD via energy-based learning)
- ICML 2018 Oral (Stanford) Fairness Without Demographics in Repeated Loss Minimization.
- ICCV 2017 CCSA--Unified Deep Supervised Domain Adaptation and Generalization (a contrastive loss aligns the source- and target-domain sample spaces)
- JSTOR (Peters) Causal inference by using invariant prediction: identification and confidence intervals.
- ICML 2015 Towards a Learning Theory of Cause-Effect Inference (causal inference with kernel mean embeddings and classifiers)
- IJCAI 2020 (CMU) Causal Discovery from Heterogeneous/Nonstationary Data
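The Invariant Risk Minimization line of work above centers on the IRMv1 penalty: the squared gradient of each environment's risk with respect to a fixed scalar "dummy" classifier evaluated at w = 1. A minimal numpy sketch under a squared loss (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def irm_penalty(logits, y):
    """IRMv1 penalty for squared loss: squared gradient of the risk
    w.r.t. a scalar dummy classifier w, evaluated at w = 1.
    d/dw 0.5*mean((w*logits - y)^2) at w=1 equals mean(logits*(logits - y))."""
    grad_w = np.mean(logits * (logits - y))
    return grad_w ** 2

def irm_objective(envs, lam=1.0):
    """Average empirical risk plus the invariance penalty over environments."""
    risk = np.mean([0.5 * np.mean((f - y) ** 2) for f, y in envs])
    pen = np.mean([irm_penalty(f, y) for f, y in envs])
    return risk + lam * pen

rng = np.random.default_rng(0)
envs = [(rng.normal(size=50), rng.normal(size=50)) for _ in range(2)]
loss = irm_objective(envs, lam=10.0)
```

The penalty vanishes exactly when the classifier is simultaneously optimal in every environment, which is the invariance condition the paper argues for.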
- ICLR Poster Learning perturbation sets for robust machine learning (learns perturbation sets with generative models)
- ICCV Generalized Source-free Domain Adaptation (how to adapt with only a source-pretrained model and no source data, while preserving source-domain performance)
- ICCV Adaptive Adversarial Network for Source-free Domain Adaptation (can we find a new target-specific classifier during optimization and adapt it to the target features?)
- ICCV Gradient Distribution Alignment Certificates Better Adversarial Domain Adaptation (reduces the cross-domain discrepancy of feature-gradient distributions via adversarial learning between the feature extractor and a discriminator)
- FAccT Algorithmic recourse: from counterfactual explanations to interventions (introduces the concept of causal recourse)
- ICML WorkShop On the Fairness of Causal Algorithmic Recourse (builds on group recourse by considering the mutual influence, i.e., causal relationships, among variables)
- NeurIPS Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? (why does DA need two encoders?)
- NeurIPS Gradual Domain Adaptation without Indexed Intermediate Domains (gradual domain adaptation (GDA) without domain labels)
- NeurIPS Implicit Semantic Response Alignment for Partial Domain Adaptation (how PDA can exploit the extra classes)
- NeurIPS The balancing principle for parameter choice in distance-regularized domain adaptation (how to choose the trade-off parameter between the classification loss and the regularizer)
- Available at Optimization Online Kullback-Leibler Divergence Constrained Distributionally Robust Optimization (the seminal work: constructs the DRO uncertainty set with KL divergence)
- ICLR 2018 Oral Certifying Some Distributional Robustness with Principled Adversarial Training (constructs the uncertainty set with a Wasserstein ball, for adversarial robustness)
- ICML 2018 Oral Does Distributionally Robust Supervised Learning Give Robust Classifiers? (is DRO necessarily better than ERM? Not necessarily: extra information must be introduced)
- NeurIPS 2019 Distributionally Robust Optimization and Generalization in Kernel Methods (models the uncertainty set with MMD (maximum mean discrepancy), yielding MMD DRO)
- EMNLP 2019 Distributionally Robust Language Modeling (a classic application of coarse-grained mixture models in NLP)
- Arxiv 2019 Equalizing recourse across groups (basic recourse is measured per sample; this paper gives a group-level recourse metric)
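The KL-constrained DRO entries above share one computational core: within a KL ball around the empirical distribution, the worst-case distribution exponentially tilts each sample's weight by its loss. A small numpy sketch of that tilting, with the temperature t treated as a given hyperparameter rather than optimized via the dual as in the papers (illustrative names, not from any paper's code):

```python
import numpy as np

def kl_dro_weights(losses, t=1.0):
    """Worst-case sample weights within a KL ball: w_i proportional to
    exp(loss_i / t). Smaller t concentrates weight on the hardest samples."""
    z = np.exp((losses - losses.max()) / t)  # shift by max for numerical stability
    return z / z.sum()

losses = np.array([0.1, 0.5, 2.0, 0.3])
w = kl_dro_weights(losses, t=0.5)
robust_loss = w @ losses  # reweighted (worst-case) empirical loss
```

Because high-loss samples are upweighted, the tilted loss is never below the plain average; as t grows the weights flatten back toward ERM.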
- ICML 2020 Oral Continuously indexed domain adaptation (continuously varying domains)
- ICML 2017 Estimating individual treatment effect: generalization bounds and algorithms (first introduces the notion of ITE, bounds it with domain-adaptation theory, and accordingly designs an effective algorithm)
- NeurIPS 2019 Adapting Neural Networks for the Estimation of Treatment Effects (core idea: we need not use all the covariates X for adjustment)
- PNAS 2019 Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning (proposes a new framework, X-learner, which is highly effective when the treatment groups are very imbalanced)
- AAAI 2020 Learning Counterfactual Representations for Estimating Individual Dose-Response Curves (proposes a new metric, a new dataset, and a training strategy that allow estimating outcomes for an arbitrary number of treatments)
- ICLR 2021 Oral: VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments (based on a varying-coefficient model: each treatment's branch is a function of the treatment instead of a separately designed branch, thus achieving true continuity)
- Arxiv 2021 Neural Counterfactual Representation Learning for Combinations of Treatments (considers the more complex case where multiple treatments act jointly)
- NeurIPS 2021 Spotlight On Inductive Biases for Heterogeneous Treatment Effect Estimation (proposes a new framework, FlexTENet, which directly estimates the conditional causal effect τ instead of estimating μ1 and μ2 separately)
- NeurIPS 2021 Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms (analyzes recent algorithmic paradigms for individual treatment effect estimation)
- Arxiv 2021 Cycle-Balanced Representation Learning For Counterfactual Inference
- AISTATS 2019 Towards Optimal Transport with Global Invariances (how to align two datasets?)
- NeurIPS 2020 Geometric Dataset Distances via Optimal Transport (how to define the distance between two datasets?)
- ICML 2021 Dataset Dynamics via Gradient Flows in Probability Space (how to optimize a dataset so that two datasets become as similar as possible?)
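The three optimal-transport entries above all rest on the same primitive: a transport cost between two empirical samples. In 1D with equal sample sizes, the optimal coupling is simply the sorted matching, which gives a tiny self-contained illustration (a drastic simplification of the dataset distances in these papers, which operate on labeled, high-dimensional data):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D samples: the optimal
    transport plan matches sorted values to sorted values."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])
d = wasserstein_1d(a, b)  # every point shifts by exactly 1.0
```

The papers replace this sorted matching with a full coupling matrix over feature (and label) space, but the interpretation as "minimal average movement of mass" is the same.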
- ACL 2021 WARP: Word-level Adversarial ReProgramming (the seminal work on continuous prompts)
- Arxiv 2021 (Stanford) Prefix-Tuning: Optimizing Continuous Prompts for Generation (applies continuous prompts to various NLG tasks)
- Arxiv 2021 (Google) The Power of Scale for Parameter-Efficient Prompt Tuning (currently the simplest prefix tuning: only prepends a prefix to the input)
- Arxiv 2021 (DeepMind) Multimodal Few-Shot Learning with Frozen Language Models (uses an image encoder to turn images into a dynamic prefix fed into the LM together with the text)
- ICML 2021 An End-to-End Framework for Molecular Conformation Generation via Bilevel Programming
- NeurIPS 2021 Deep Structural Causal Models for Tractable Counterfactual Inference
- ICML 2018 Bilevel Programming for Hyperparameter Optimization and Meta-Learning (models hyperparameter search and meta-learning as bi-level programming)
- NeurIPS 2021 Energy-based Out-of-distribution Detection
- NeurIPS 2020: The Lottery Ticket Hypothesis for Pre-trained BERT Networks (the lottery ticket hypothesis applied to BERT fine-tuning)
- ICML 2021 Oral: Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? (the lottery ticket hypothesis applied to OOD generalization)
- CVPR 2021: The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models (the lottery ticket hypothesis applied to pretraining vision models)
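The lottery-ticket entries above share one mechanical step: prune the smallest-magnitude weights after training, then rewind the surviving weights to their initial values. A minimal numpy sketch of one prune-and-rewind round (illustrative only; real implementations prune per layer, iterate, and retrain between rounds):

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask keeping the (1 - sparsity) fraction of largest-|w| entries."""
    k = int(round(sparsity * weights.size))
    flat = np.sort(np.abs(weights), axis=None)
    threshold = flat[k] if k < weights.size else np.inf
    return (np.abs(weights) >= threshold).astype(float)

rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))                     # weights at initialization
w_trained = w_init + 0.5 * rng.normal(size=(4, 4))   # after (mock) training
mask = magnitude_mask(w_trained, sparsity=0.5)       # prune half by magnitude
ticket = mask * w_init                               # rewind survivors to init
```

The "ticket" is the sparse subnetwork (mask plus initial values) that the hypothesis claims can be retrained in isolation to match the full network.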
- Estimation of Non-Normalized Statistical Models by Score Matching (estimates non-normalized distributions via score matching, based on integration by parts)
- UAI 2019 Sliced Score Matching: A Scalable Approach to Density and Score Estimation (projects the high-dimensional gradient field onto one-dimensional scalar fields along random directions before score matching)
- NeurIPS 2019 Oral Generative Modeling by Estimating Gradients of the Data Distribution (adds noise to strengthen Langevin MCMC's ability to model low-density regions)
- NeurIPS 2020 improved techniques for training score-based generative models (analyzes and fixes failure cases of score-based generative models; generation quality starts to rival GANs)
- NeurIPS 2020 Denoising Diffusion Probabilistic Models (another generative paradigm besides VAE, GAN, and Flow)
- ICLR 2021 Outstanding Paper Award Score-Based Generative Modeling through Stochastic Differential Equations
- Arxiv 2021 Diffusion Models Beat GANs on Image Synthesis (diffusion models surpass GANs on image synthesis)
- Arxiv 2021 Variational Diffusion Models
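For the diffusion entries above, the forward (noising) process has a closed form: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1 − β_s) over the noise schedule. A minimal numpy sketch of sampling from that closed form (schedule values follow the common linear choice; names are illustrative):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM forward process."""
    alpha_bar = np.cumprod(1.0 - betas)[t]           # cumulative signal retention
    eps = rng.normal(size=x0.shape)                  # standard Gaussian noise
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, alpha_bar

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # linear schedule, 1000 steps
x0 = rng.normal(size=8)
xT, alpha_bar_T = forward_diffuse(x0, 999, betas, rng)  # near-pure noise at t = T
```

Training then amounts to regressing a network's prediction of ε from x_t and t; the closed form lets any timestep be sampled directly without simulating the chain.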