This paper contains three basic modules: Retrieve, FastRerank, and Bi-selective Encoding. The usage of each module is described below.
The Retrieve module is based on Apache Lucene, an open-source search library. You should first download the Lucene core library from its website and then build the Java project. After that, you can index and search the dataset with the following steps:
- Change the path in `Constants.java` to your directory.
- Run `Indexer.java` to build the index of the training set. (This process may take several days, but only needs to be done once.)
- Run `Searcher.java` to search for the candidates and generate the template index files.
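Before moving on to FastRerank, it can be worth checking that the Retrieve step produced output that lines up with the raw dataset. The short Python sketch below does such a sanity check; the file names and the one-line-per-article layout are purely hypothetical, so substitute the paths you configured in `Constants.java`.

```python
import os

# Hypothetical paths: replace with the locations configured in Constants.java.
RAW_TRAIN_SRC = "data/train.article.txt"      # raw training articles (assumed name)
TEMPLATE_INDEX = "data/train.template.index"  # template indices written by Searcher.java (assumed name)

def count_lines(path):
    """Count the lines of a text file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)

for path in (RAW_TRAIN_SRC, TEMPLATE_INDEX):
    if not os.path.exists(path):
        raise FileNotFoundError(f"{path} is missing; re-check Constants.java and rerun the Retrieve step.")

# Assuming one line of retrieved template indices per article, the line counts should match.
if count_lines(RAW_TRAIN_SRC) != count_lines(TEMPLATE_INDEX):
    raise ValueError("Article and template-index files have different line counts.")
print("Retrieve output looks consistent; ready for FastRerank preprocessing.")
```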
The FastRerank module is implemented with PyTorch. Before running it, you should first prepare all the data (the template index files retrieved by the Retrieve module and the raw dataset).
- Run `python config.py --mode preprocess` to preprocess the data.
- Run `python config.py --mode train` to train the model, or `python config.py --mode train --model modelname` to fine-tune a model (e.g. `python config.py --mode train --model model_final.pkl`).
- Run `python config.py --mode dev --model modelname` to evaluate or test the model; the template with the highest score will be stored.
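For convenience, the three FastRerank stages can also be chained from a small driver script. The commands below simply mirror the steps above (with `model_final.pkl` as the example checkpoint name); run it from the FastRerank directory.

```python
import subprocess

# Run the FastRerank pipeline end to end; each command mirrors a step above.
steps = [
    ["python", "config.py", "--mode", "preprocess"],
    ["python", "config.py", "--mode", "train"],
    ["python", "config.py", "--mode", "dev", "--model", "model_final.pkl"],
]

for cmd in steps:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop immediately if a stage fails
```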
The Bi-selective Encoding module is integrated with OpenNMT. At the moment it only includes the bi-selective encoding layer; the other three interaction methods (concatenation, multi-head attention, and DCN attention) will be added later. You can train it end to end on the data with the following steps:
- Run `python preprocess.py` to prepare the data.
- Run `python train.py` to train the model.
- Run `python translate.py` to generate the summaries.
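To give a feel for what the bi-selective encoding layer does, here is a minimal, illustrative PyTorch sketch of a bidirectional selective gate between article and template encoder states. The layer names, mean pooling, and tensor shapes are assumptions made for illustration only; they are not the module's actual implementation.

```python
import torch
import torch.nn as nn

class BiSelectiveGate(nn.Module):
    """Illustrative bidirectional selective gate (not the repository's exact code).

    Each side's hidden states are gated by a summary of the other side, so the
    article representation is filtered by the template and vice versa.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.article_gate = nn.Linear(2 * hidden_size, hidden_size)
        self.template_gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, article: torch.Tensor, template: torch.Tensor):
        # article:  (batch, src_len, hidden); template: (batch, tpl_len, hidden)
        # Mean pooling is an assumed, simple way to summarize the other sequence.
        tpl_summary = template.mean(dim=1, keepdim=True).expand_as(article)
        art_summary = article.mean(dim=1, keepdim=True).expand_as(template)

        g_article = torch.sigmoid(self.article_gate(torch.cat([article, tpl_summary], dim=-1)))
        g_template = torch.sigmoid(self.template_gate(torch.cat([template, art_summary], dim=-1)))
        return article * g_article, template * g_template

if __name__ == "__main__":
    # Tiny smoke test with random tensors.
    layer = BiSelectiveGate(hidden_size=8)
    art, tpl = torch.randn(2, 5, 8), torch.randn(2, 3, 8)
    new_art, new_tpl = layer(art, tpl)
    print(new_art.shape, new_tpl.shape)
```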
- If you are not familiar with Java, or find the first two steps too time-consuming, you can directly train the Bi-selective Encoding module with the retrieved and reranked templates and the data on Google Drive.
- I refactored my code for clarity and conciseness (renaming variables and classes), but I did not have enough time to do a thorough test. If the code has any problems or you have any questions, please raise an issue and I will look into it whenever I'm available.
- For personal communication related to BiSET, please contact me ([email protected]).