Code and datasets for our paper "Continual Relation Learning via Episodic Memory Activation and Reconsolidation" (ACL 2020).
If you use the code, please cite the following paper:
@inproceedings{han2020continual,
title={Continual Relation Learning via Episodic Memory Activation and Reconsolidation},
author={Han, Xu and Dai, Yi and Gao, Tianyu and Lin, Yankai and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie},
booktitle={Proceedings of ACL},
year={2020}
}
The model is implemented in PyTorch. The package versions used in our experiments are listed below.
- numpy==1.18.0
- scikit-learn==0.22.1
- scipy==1.4.1
- torch==1.3.0
- tqdm==4.41.1
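For convenience, these pinned versions can be installed in one step, for example with pip (assuming a Python 3 environment; adapt the command to your setup):

```bash
pip install numpy==1.18.0 scikit-learn==0.22.1 scipy==1.4.1 torch==1.3.0 tqdm==4.41.1
```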
The main experimental settings follow the project [Lifelong Relation Detection](https://github.com/hongwang600/Lifelong_Relation_Detection).
We adapt several typical lifelong learning methods to continual relation learning, including EMR, A-GEM, and EWC. The code for these baselines can be found in the folder `./baseline/`.
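As a rough illustration of one of these baselines (this is a generic sketch, not the code in `./baseline/`), the snippet below shows the core idea of EWC: a quadratic penalty, weighted by an estimate of the Fisher information, keeps parameters close to the values learned on previous tasks. The names `ewc_penalty`, `fisher`, `old_params`, and `ewc_lambda` are illustrative assumptions, not the repository's API.

```python
import torch


def ewc_penalty(model, fisher, old_params, ewc_lambda=100.0):
    """Generic EWC regularizer: (lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    `fisher` and `old_params` are dicts keyed by parameter name, holding the
    estimated Fisher information and the parameters saved after training on
    previous tasks (illustrative names, not the repository's actual code).
    """
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * ewc_lambda * penalty


# Usage sketch: total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```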
We provide all the datasets and word embeddings used in our experiments. First, unzip them in the repository root:

```bash
unzip data.zip -d data/
unzip glove.zip -d glove/
```
To run experiments on FewRel:

```bash
cp -r data/ fewrel/
cp -r glove/ fewrel/
cd fewrel
python run_multi_proto.py
cd ..
```
To run experiments on SimpleQuestions (simque):

```bash
cp -r data/ simque/
cp -r glove/ simque/
cd simque
python run_multi_proto.py
cd ..
```
To run experiments on TACRED:

```bash
cp -r data/ tacred/
cp -r glove/ tacred/
cd tacred
python run_multi_proto.py
cd ..
```
All the config files can be found in `./fewrel/config/`, `./tacred/config/`, and `./simque/config/`. By changing the config file name in `run_multi_proto.py`, you can run experiments with different settings. These directories also contain code for generating customized settings.
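As a purely hypothetical illustration (the actual file names and keys under the `config/` directories may differ), switching settings amounts to loading a different config file before launching training:

```python
import json

# Hypothetical sketch: load a different config file to change the experimental
# setting used by run_multi_proto.py. The file name below is illustrative,
# not necessarily one of the repository's actual config files.
with open("config/fewrel_default.json") as f:
    config = json.load(f)

print(config)  # inspect the chosen setting before training
```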