PyTorch implementation of a deep learning approach to the relation extraction challenge (SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations Between Pairs of Nominals) using a convolutional neural network with multi-size convolution kernels.
Welcome to watch, star or fork.
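At a high level, the model is a classic multi-kernel text CNN: embed the tokens, apply parallel 1-D convolutions with several kernel sizes, max-pool each feature map over time, concatenate the pooled features, and classify into the 19 relation labels. Below is a minimal sketch under assumed hyperparameters; the names and sizes are illustrative, and the repo's actual model may differ (e.g., by adding position features for the two nominals).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelCNN(nn.Module):
    """Text CNN with several convolution kernel sizes (illustrative sketch)."""

    def __init__(self, vocab_size, embed_dim=50, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=19, dropout=0.5):
        super(MultiKernelCNN, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One Conv1d per kernel size; each slides over the token dimension.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embedding(tokens).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Convolve, apply ReLU, then max-pool over time for each kernel size.
        pooled = [F.relu(conv(x)).max(dim=2)[0] for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(features)                    # (batch, num_classes)
```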
This repo was tested on Python 3.5+ and PyTorch 0.4.1/1.0.0. The requirements are:
- torch >= 0.4.1
- numpy
- scikit-learn
- tqdm
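Assuming a standard pip environment, the dependencies can be installed with:

```
pip install torch numpy scikit-learn tqdm
```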
- Given: a sentence marked with a pair of nominals
- Goal: recognize the semantic relation between these nominals.
- Example:
- "There were apples, pears and oranges in the bowl." => Content-Container(e1,e2)
- "The cup contained tea from dried ginseng." => Entity-Origin(e1,e2)
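  In the dataset files, the nominal pair is marked inline with `<e1>`/`<e2>` tags; an entry in `TRAIN_FILE.TXT` looks like this (whitespace approximate):

  ```
  1   "The system as described above has its greatest application in an arrayed <e1>configuration</e1> of antenna <e2>elements</e2>."
  Component-Whole(e2,e1)
  Comment:
  ```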
- Cause-Effect: An event or object leads to an effect (those cancers were caused by radiation exposures)
- Instrument-Agency: An agent uses an instrument (phone operator)
- Product-Producer: A producer causes a product to exist (a factory manufactures suits)
- Content-Container: An object is physically stored in a delineated area of space (a bottle full of honey was weighed)
- Entity-Origin: An entity comes from or is derived from an origin, e.g., a position or material (letters from foreign countries)
- Entity-Destination: An entity is moving towards a destination (the boy went to bed)
- Component-Whole: An object is a component of a larger whole (my apartment has a large kitchen)
- Member-Collection: A member forms a nonfunctional part of a collection (there are many trees in the forest)
- Message-Topic: An act of communication, written or spoken, is about a topic (the lecture was about semantics)
- Other: used when none of the above nine relations is suitable.
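Every relation except Other is directed, so the nine relations above yield 18 directed labels plus Other, i.e. 19 classes in total. A quick illustration:

```python
RELATIONS = ["Cause-Effect", "Instrument-Agency", "Product-Producer",
             "Content-Container", "Entity-Origin", "Entity-Destination",
             "Component-Whole", "Member-Collection", "Message-Topic"]

# Each relation is directed: (e1,e2) and (e2,e1) are distinct labels.
LABELS = ["{}({},{})".format(r, a, b)
          for r in RELATIONS
          for a, b in [("e1", "e2"), ("e2", "e1")]]
LABELS.append("Other")   # 9 * 2 + 1 = 19 classes
```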
Relation | Train Data | Test Data | Total Data |
---|---|---|---|
Cause-Effect | 1,003 (12.54%) | 328 (12.07%) | 1,331 (12.42%) |
Instrument-Agency | 504 (6.30%) | 156 (5.74%) | 660 (6.16%) |
Product-Producer | 717 (8.96%) | 231 (8.50%) | 948 (8.85%) |
Content-Container | 540 (6.75%) | 192 (7.07%) | 732 (6.83%) |
Entity-Origin | 716 (8.95%) | 258 (9.50%) | 974 (9.09%) |
Entity-Destination | 845 (10.56%) | 292 (10.75%) | 1,137 (10.61%) |
Component-Whole | 941 (11.76%) | 312 (11.48%) | 1,253 (11.69%) |
Member-Collection | 690 (8.63%) | 233 (8.58%) | 923 (8.61%) |
Message-Topic | 634 (7.92%) | 261 (9.61%) | 895 (8.35%) |
Other | 1,410 (17.63%) | 454 (16.71%) | 1,864 (17.39%) |
Total | 8,000 (100.00%) | 2,717 (100.00%) | 10,717 (100.00%) |
- Train data is located in `data/SemEval2010_task8/TRAIN_FILE.TXT`.
- `Vector_50d.txt` is used as the pre-trained word2vec model.
- We use the micro-averaged F1-score over the 18 relation labels (all relations except Other) as the evaluation criterion; see the sketch below.
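A minimal version of that metric with scikit-learn, assuming the 19 labels are encoded as integers with Other last (an assumption of this sketch, not necessarily the repo's encoding):

```python
from sklearn.metrics import f1_score

def relation_f1(y_true, y_pred, other_index=18):
    """Micro-averaged F1 over the 18 directed relation labels, ignoring Other.
    Assumes integer labels 0-18 with Other at index 18 (illustrative only)."""
    relation_labels = [i for i in range(19) if i != other_index]
    return f1_score(y_true, y_pred, labels=relation_labels, average="micro")
```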
- Build vocabularies and parameters for your dataset by running

  `python build_vocab.py --data_dir data/SemEval2010_task8`

  It will write vocabulary files `words.txt` and `labels.txt` containing the words and labels in the dataset. It will also save a `dataset_params.json` with some extra information.
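  As a rough sketch of what this step produces (the function name and file layout here are assumptions, not necessarily the repo's actual code):

  ```python
  import json
  from collections import Counter

  def build_vocab(sentences, labels, data_dir="data/SemEval2010_task8"):
      """Write one word per line to words.txt and one label per line to labels.txt."""
      words = Counter(w for s in sentences for w in s.split())
      with open(data_dir + "/words.txt", "w") as f:
          f.write("\n".join(sorted(words)))
      with open(data_dir + "/labels.txt", "w") as f:
          f.write("\n".join(sorted(set(labels))))
      with open(data_dir + "/dataset_params.json", "w") as f:
          json.dump({"vocab_size": len(words),
                     "number_of_classes": len(set(labels))}, f, indent=4)
  ```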
- Your experiment: we created a `base_model` directory for you under the `experiments` directory. It contains a file `params.json` which sets the hyperparameters for the experiment. It looks like

  `{ "learning_rate": 1e-3, "batch_size": 50, "num_epochs": 100 }`

  For every new experiment, you will need to create a new directory under `experiments` with a `params.json` file.
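  This layout follows the common pattern of loading `params.json` into an attribute-style object; a minimal sketch, assuming such a helper exists (the class name `Params` is illustrative):

  ```python
  import json

  class Params:
      """Load hyperparameters from a JSON file into attributes (sketch)."""
      def __init__(self, json_path):
          with open(json_path) as f:
              self.__dict__.update(json.load(f))

  params = Params("experiments/base_model/params.json")
  print(params.learning_rate, params.batch_size, params.num_epochs)
  ```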
- Train your experiment by simply running

  `python train.py --data_dir data/SemEval2010_task8 --model_dir experiments/base_model`

  It will instantiate a model and train it on the training set following the hyperparameters specified in `params.json`. It will also evaluate some metrics on the development set.
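  Under the hood, the training step is presumably a standard supervised loop; a condensed, hypothetical sketch (the actual `train.py` may differ), where `model` is a network like the CNN above and `params` carries the values from `params.json`:

  ```python
  import torch.nn as nn
  import torch.optim as optim

  def train(model, train_loader, params):
      optimizer = optim.Adam(model.parameters(), lr=params.learning_rate)
      loss_fn = nn.CrossEntropyLoss()
      for epoch in range(params.num_epochs):
          model.train()
          for tokens, labels in train_loader:  # batches of token ids and label ids
              optimizer.zero_grad()
              loss = loss_fn(model(tokens), labels)
              loss.backward()
              optimizer.step()
  ```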
- Evaluation on the test set: once you've run many experiments and selected your best model and hyperparameters based on performance on the development set, you can finally evaluate your model on the test set by running

  `python evaluate.py --data_dir data/SemEval2010_task8 --model_dir experiments/base_model`
Precision (%) | Recall (%) | F1 (%) |
---|---|---|
77.74 | 84.79 | 81.11 |