1. Data
This data set contains tweets annotated with their universal part-of-speech tags: 379 tweets for training, 112 for dev, and 12 possible part-of-speech labels. The test corpus will contain 295 tweets.
The format of the data files is straightforward: each line contains a token and its label separated by whitespace, and sentences are separated by an empty line. See the example below, and examine the text files yourself (always a good idea).
------------------------------
@paulwalk X
It PRON
's VERB
the DET
view NOUN
from ADP
where ADV
I PRON
'm VERB
living VERB
for ADP
two NUM
weeks NOUN
. .

Empire NOUN
State NOUN
Building NOUN
= X
ESB NOUN
. .

Pretty ADV
bad ADJ
storm NOUN
here ADV
last ADJ
evening NOUN
------------------------------
2. Files
- data.py: The primary entry point that reads the data, and trains and evaluates the tagger implementation.
usage: python data.py [-h] [-m MODEL] [--test]
optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        'LR'/'lr' for logistic regression tagger
                        'CRF'/'crf' for conditional random field tagger
  --test                Make predictions for test dataset
- tagger.py: Code for two sequence taggers, logistic regression and CRF. Both of these taggers rely on 'feats.py' and 'feat_gen.py' to compute the features for each token. The CRF tagger also relies on 'viterbi.py' to decode (which is currently incorrect), and on 'struct_perceptron.py' for the training algorithm (which also needs Viterbi to be working).
- 'feats.py' & 'feat_gen.py': Code to compute, index, and maintain the token features. The primary purpose of 'feats.py' is to map the boolean features computed in 'feat_gen.py' to integers, and to do the reverse mapping (if you want to know the name of a feature from its index). 'feat_gen.py' computes the features of a token in a sentence, and is the file you will be extending. The method there returns the computed features for a token as a list of strings (so it does not have to worry about indices, etc.).
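As a hypothetical illustration of the kind of function 'feat_gen.py' contains, a token-level feature extractor might look like the sketch below. The feature names and the exact feature set here are invented for illustration; the assignment's own code will differ.

```python
def token_features(sent, i):
    """Return the features of the token at position i in sentence sent
    (a list of word strings) as a list of feature-name strings."""
    word = sent[i]
    feats = [
        "word=" + word.lower(),
        "is_capitalized=" + str(word[:1].isupper()),
        "is_digit=" + str(word.isdigit()),
        "suffix3=" + word[-3:].lower(),
    ]
    # context features, with sentence-boundary placeholders
    feats.append("prev_word=" + (sent[i - 1].lower() if i > 0 else "<s>"))
    feats.append("next_word=" + (sent[i + 1].lower() if i + 1 < len(sent) else "</s>"))
    return feats
```

Because each feature is just a string, 'feats.py' can then assign every distinct string an integer index without the extractor needing to know about it.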
- 'struct_perceptron.py': A direct port (with negligible changes) of the structured perceptron trainer from the 'pystruct' project. Only used for the CRF tagger. The descriptions of the trainer's various hyperparameters are available in the 'pystruct' documentation, but you should change them via the constructor in 'tagger.py'.
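For intuition, one epoch of structured perceptron training can be sketched as below. This is an illustrative simplification, not the actual 'struct_perceptron.py' port: 'decode' stands in for the Viterbi decoder and 'phi' for a joint feature map from (sentence, tag sequence) to feature counts, both assumed to be supplied by the caller.

```python
def perceptron_epoch(sentences, gold_tags, weights, decode, phi, lr=1.0):
    """Run one pass of structured perceptron updates over the data.
    weights is a dict mapping feature name -> weight, updated in place.
    Returns the number of sentences whose decoded tags were wrong."""
    mistakes = 0
    for sent, gold in zip(sentences, gold_tags):
        pred = decode(sent, weights)  # best tag sequence under current weights
        if pred != gold:
            mistakes += 1
            # reward the gold sequence's features, penalize the predicted one's
            for f, v in phi(sent, gold).items():
                weights[f] = weights.get(f, 0.0) + lr * v
            for f, v in phi(sent, pred).items():
                weights[f] = weights.get(f, 0.0) - lr * v
    return mistakes
```

This is also why the trainer needs a working Viterbi: the update is driven entirely by the decoder's predictions.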
- 'viterbi.py' (and 'viterbi_test.py'): A general-purpose interface to a sequence Viterbi decoder in 'viterbi.py', which currently has an incorrect implementation. Once you have implemented Viterbi correctly, running 'python viterbi_test.py' should execute successfully, without any exceptions.
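For reference, a generic Viterbi decoder over per-position emission scores and tag-to-tag transition scores can be sketched as follows. The interface in the assignment's 'viterbi.py' may differ (e.g., it may include start/end transition scores), so treat this as a sketch of the algorithm, not of the required signature.

```python
def viterbi(emission_scores, trans_scores):
    """Decode the best tag sequence.
    emission_scores[i][t]: score of tag t at position i.
    trans_scores[s][t]: score of the transition s -> t.
    Returns a list of tag indices, one per position."""
    n = len(emission_scores)
    T = len(emission_scores[0])
    # score[i][t]: best score of any sequence ending in tag t at position i
    score = [[0.0] * T for _ in range(n)]
    back = [[0] * T for _ in range(n)]
    score[0] = list(emission_scores[0])
    for i in range(1, n):
        for t in range(T):
            best_s = max(range(T), key=lambda s: score[i - 1][s] + trans_scores[s][t])
            back[i][t] = best_s
            score[i][t] = (score[i - 1][best_s] + trans_scores[best_s][t]
                           + emission_scores[i][t])
    # follow back-pointers from the best final tag
    tags = [max(range(T), key=lambda t: score[n - 1][t])]
    for i in range(n - 1, 0, -1):
        tags.append(back[i][tags[-1]])
    tags.reverse()
    return tags
```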
- conlleval.pl: This is the official evaluation script for the CoNLL evaluation.
Although it computes the same metrics as the Python code does, it supports a number of extra features, such as:
(a) LaTeX-formatted tables, by using -l,
(b) BIO annotation by default, turned off using -r.
In particular, when evaluating the output prediction files (~.pred) for POS tagging:
$ ./conlleval.pl -r -d \\t < ./predictions/twitter_dev.pos.pred