This repository provides a curated collection of research papers focused on few-shot learning on graphs. It is derived from our survey paper: A Survey of Few-Shot Learning on Graphs: From Meta-Learning to Pre-Training and Prompting. We will update this list regularly. If you notice any errors or missing papers, please feel free to open an issue or submit a pull request.
- Awesome Few-Shot Learning on Graphs
- Graph Prototypical Networks for Few-shot Learning on Attributed Networks. In CIKM'2020, Paper, Code.
- Adaptive Attentional Network for Few-Shot Knowledge Graph Completion. In EMNLP'2020, Paper, Code.
- HMNet: Hybrid Matching Network for Few-Shot Link Prediction. In DASFAA'2021, Paper.
- Tackling Long-Tailed Relations and Uncommon Entities in Knowledge Graph Completion. In EMNLP'2019, Paper, Code.
- Relative and absolute location embedding for few-shot node classification on graph. In AAAI'2021, Paper, Code.
- Meta-learning on heterogeneous information networks for cold-start recommendation. In KDD'2020, Paper, Code.
- Graph meta learning via local subgraphs. In NeurIPS'2020, Paper, Code.
- Learning to extrapolate knowledge: Transductive few-shot out-of-graph link prediction. In NeurIPS'2020, Paper, Code.
- Towards locality-aware meta-learning of tail node embeddings on networks. In CIKM'2020, Paper, Code.
- Graph few-shot learning via knowledge transfer. In AAAI'2020, Paper.
- Meta-Inductive Node Classification across Graphs. In SIGIR'2021, Paper, Code.
- Prototypical networks for few-shot learning. In NeurIPS'2017, Paper, Code.
- Graph Few-shot Learning with Attribute Matching. In CIKM'2020, Paper.
- Adaptive-Step Graph Meta-Learner for Few-Shot Graph Classification. In CIKM'2020, Paper.
- Few-shot link prediction in dynamic networks. In WSDM'2022, Paper.
- Deep Graph Contrastive Representation Learning. Preprint, Paper, Code.
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training. In KDD'2020, Paper, Code.
- Graph Contrastive Learning with Augmentations. In NeurIPS'2020, Paper, Code.
- SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation. In WWW'2022, Paper, Code.
- Self-supervised Graph-level Representation Learning with Local and Global Structure. In ICML'2021, Paper, Code.
- InfoGraph: Unsupervised Representation Learning on Graphs. In ICLR'2020, Paper, Code.
- Subgraph Contrast for Scalable Self-Supervised Graph Representation Learning. Preprint, Paper, Code.
- Contrastive Multi-View Representation Learning on Graphs. In ICML'2020, Paper, Code.
- Automated Graph Contrastive Learning. In NeurIPS'2021, Paper, Code.
- Contrastive General Graph Matching with Adaptive Augmentation Sampling. In IJCAI'2024, Paper, Code.
- Bringing Your Own View: Graph Contrastive Learning with Generated Views. In WSDM'2022, Paper, Code.
- Graph Contrastive Learning with Adaptive Augmentation. In WWW'2021, Paper, Code.
- Self-supervised Heterogeneous Graph Neural Network with Co-contrastive Learning. In KDD'2021, Paper, Code.
- Contrastive Pre-Training of GNNs on Heterogeneous Graphs. In CIKM'2021, Paper, Code.
- Pre-training on Large-scale Heterogeneous Graph. In KDD'2021, Paper, Code.
- A Self-supervised Riemannian GNN with Time Varying Curvature for Temporal Graph Learning. In CIKM'2022, Paper.
- Self-supervised Representation Learning on Dynamic Graphs. In CIKM'2021, Paper.
- CPDG: A Contrastive Pre-training Method for Dynamic Graph Neural Networks. In ICDE'2024, Paper, Code.
- Protein representation learning by geometric structure pretraining. In ICLR'2023, Paper, Code.
- GPT-GNN: Generative Pre-Training of Graph Neural Networks. In KDD'2020, Paper, Code.
- What's Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders. In KDD'2023, Paper, Code.
- Graph Auto-encoder via Neighborhood Wasserstein Reconstruction. In ICLR'2022, Paper, Code.
- Self-supervised Representation Learning via Latent Graph Prediction. In NeurIPS'2022, Paper.
- GraphMAE: Self-Supervised Masked Graph Autoencoders. In KDD'2022, Paper, Code.
- GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner. In WWW'2023, Paper, Code.
- Mask and Reason: Pre-Training Knowledge Graph Transformers for Complex Logical Queries. In KDD'2022, Paper, Code.
- Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer. In WWW'2023, Paper, Code.
- Zero-shot Item-based Recommendation via Multi-task Product Knowledge Graph Pre-Training. In CIKM'2023, Paper.
- Pre-training on Dynamic Graph Neural Networks. In Neurocomputing'2022, Paper, Code.
- Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting. In KDD'2022, Paper, Code.
- Pre-training Graph Transformer with Multimodal Side Information for Recommendation. In MM'2021, Paper, Code.
- Multi-task Item-attribute Graph Pre-training for Strict Cold-start Item Recommendation. In RecSys'2023, Paper, Code.
- GPPT: Graph pre-training and prompt tuning to generalize graph neural networks. In KDD'2022, Paper, Code.
- Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural Networks. In CIKM'2023, Paper, Code.
- GraphPrompt: Unifying pre-training and downstream tasks for graph neural networks. In WWW'2023, Paper, Code.
- Motif-based prompt learning for universal cross-domain recommendation. In WSDM'2024, Paper.
- Generalized graph prompt: Toward a unification of pre-training and downstream tasks on graphs. In TKDE'2024, Paper, Code.
- Non-Homophilic Graph Pre-Training and Prompt Learning. Preprint, Paper.
- Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation Models. Preprint, Paper.
- MultiGPrompt for multi-task pre-training and prompting on graphs. In WWW'2024, Paper, Code.
- HetGPT: Harnessing the power of prompt tuning in pre-trained heterogeneous graph neural networks. In WWW'2024, Paper.
- Universal prompt tuning for graph neural networks. In NeurIPS'2023, Paper, Code.
- Inductive Graph Alignment Prompt: Bridging the Gap between Graph Pre-training and Inductive Fine-tuning From Spectral Perspective. In WWW'2024, Paper.
- SGL-PT: A strong graph learner with graph prompt tuning. Preprint, Paper.
- HGPrompt: Bridging homogeneous and heterogeneous graphs for few-shot prompt learning. In AAAI'2024, Paper, Code.
- PSP: Pre-training and structure prompt tuning for graph neural networks. Preprint, Paper, Code.
- ULTRA-DP: Unifying graph pre-training with multi-task graph dual prompt. Preprint, Paper, Code.
- Virtual node tuning for few-shot node classification. In KDD'2023, Paper.
- All in one: Multi-task prompting for graph neural networks. In KDD'2023, Paper, Code.
- DyGPrompt: Learning Feature and Time Prompts on Dynamic Graphs. Preprint, Paper.
- Prompt learning on temporal interaction graphs. Preprint, Paper.
- Augmenting low-resource text classification with graph-grounded pre-training and prompting. In SIGIR'2023, Paper, Code.
- Prompt tuning on graph-augmented low-resource text classification. In TKDE'2024, Paper, Code.
- GraphGPT: Graph instruction tuning for large language models. In SIGIR'2024, Paper, Code.
- Natural language is all a graph needs. In EACL'2024, Paper, Code.
- GIMLET: A unified graph-text model for instruction-based molecule zero-shot learning. In NeurIPS'2023, Paper, Code.
- One for all: Towards training one graph model for all classification tasks. In ICLR'2024, Paper, Code.
- HiGPT: Heterogeneous graph language model. In KDD'2024, Paper, Code.
Contributions to this repository are highly encouraged! If you have any relevant resources to share, please open an issue or submit a pull request.
If you find this repository useful, please consider citing the following works:
@article{yu2024few,
title={Few-Shot Learning on Graphs: from Meta-learning to Pre-training and Prompting},
author={Yu, Xingtong and Fang, Yuan and Liu, Zemin and Wu, Yuxia and Wen, Zhihao and Bo, Jianyuan and Zhang, Xinming and Hoi, Steven CH},
journal={arXiv preprint arXiv:2402.01440},
year={2024}
}
GraphPrompt: A Representative Prompt Learning Method on Graphs. One of the Most Influential Papers in WWW'23 by Paper Digest (2023-09 version).
@inproceedings{liu2023graphprompt,
title={Graphprompt: Unifying pre-training and downstream tasks for graph neural networks},
author={Liu, Zemin and Yu, Xingtong and Fang, Yuan and Zhang, Xinming},
booktitle={WWW},
pages={417--428},
year={2023}
}
GraphPrompt+: A Generalized Graph Prompt Method.
@article{yu2023generalized,
title={Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs},
author={Yu, Xingtong and Liu, Zhenghao and Fang, Yuan and Liu, Zemin and Chen, Sihong and Zhang, Xinming},
journal={IEEE Transactions on Knowledge and Data Engineering},
year={2024}
}
HGPrompt: A Heterogeneous Graph Prompt Method.
@inproceedings{yu2023hgprompt,
title={HGPROMPT: Bridging Homogeneous and Heterogeneous Graphs for Few-shot Prompt Learning},
author={Yu, Xingtong and Liu, Zemin and Fang, Yuan and Zhang, Xinming},
booktitle={AAAI},
pages={16578--16586},
year={2024}
}
DyGPrompt: A Dynamic Graph Prompt Method.
@article{yu2024dygprompt,
title={DyGPrompt: Learning Feature and Time Prompts on Dynamic Graphs},
author={Yu, Xingtong and Liu, Zhenghao and Fang, Yuan and Zhang, Xinming},
journal={arXiv preprint arXiv:2405.13937},
year={2024}
}
ProNoG: A Non-Homophilic Graph Prompt Method.
@article{yu2024non,
title={Non-Homophilic Graph Pre-Training and Prompt Learning},
author={Yu, Xingtong and Zhang, Jie and Fang, Yuan and Jiang, Renhe},
journal={arXiv preprint arXiv:2408.12594},
year={2024}
}
MultiGPrompt: A Multi-Task Pre-Training and Graph Prompt Method.
@inproceedings{yu2024multigprompt,
title={MultiGPrompt for Multi-Task Pre-Training and Prompting on Graphs},
author={Yu, Xingtong and Zhou, Chang and Fang, Yuan and Zhang, Xinming},
booktitle={WWW},
pages={515--526},
year={2024}
}
MDGPT: A Multi-Domain Pre-Training and Graph Prompt Method (see Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation Models in the list above).
Methods for the Structure-Scarcity Problem.
@inproceedings{liu2021tail,
title={Tail-GNN: Tail-node graph neural networks},
author={Liu, Zemin and Nguyen, Trung-Kien and Fang, Yuan},
booktitle={KDD},
pages={1109--1119},
year={2021}
}
@inproceedings{lu2020meta,
title={Meta-learning on heterogeneous information networks for cold-start recommendation},
author={Lu, Yuanfu and Fang, Yuan and Shi, Chuan},
booktitle={KDD},
pages={1563--1573},
year={2020}
}
@article{liu2023locality,
title={Locality-aware tail node embeddings on homogeneous and heterogeneous networks},
author={Liu, Zemin and Fang, Yuan and Zhang, Wentao and Zhang, Xinming and Hoi, Steven CH},
journal={IEEE Transactions on Knowledge and Data Engineering},
volume={36},
number={6},
pages={2517--2532},
year={2023},
publisher={IEEE}
}
@inproceedings{liu2020towards,
title={Towards locality-aware meta-learning of tail node embeddings on networks},
author={Liu, Zemin and Zhang, Wentao and Fang, Yuan and Zhang, Xinming and Hoi, Steven CH},
booktitle={CIKM},
pages={975--984},
year={2020}
}