IDEAL: Leveraging Infinite and Dynamic Characterizations of Large Language Models for Query-focused Summarization
- Download datasets from their respective official repositories.
- Preprocess the datasets using the provided Jupyter notebook `data_process.ipynb` (an illustrative sketch follows this list).
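The actual preprocessing lives in `data_process.ipynb`; as a rough, hedged sketch only, query-focused summarization data is commonly flattened into (query, document, summary) triples. The file paths and field names below (`questions`, `question`, `document`, `summary`) are assumptions for illustration, not the repository's real schema.

```python
# Hypothetical preprocessing sketch; the real logic is in data_process.ipynb.
# Field names and paths are assumptions, not the repository's actual schema.
import json

def flatten_examples(raw_path: str, out_path: str) -> None:
    """Flatten raw records into (query, document, summary) training triples."""
    with open(raw_path) as f, open(out_path, "w") as out:
        for line in f:
            record = json.loads(line)
            for qa in record.get("questions", []):  # assumed structure
                example = {
                    "query": qa["question"],
                    "document": record["document"],
                    "summary": qa["summary"],
                }
                out.write(json.dumps(example) + "\n")

if __name__ == "__main__":
    flatten_examples("data/raw/train.jsonl", "data/processed/train.jsonl")
```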
To train, run inference, and evaluate the model, execute the following script:

```bash
bash exps/finetuning_*_generate_evaluate.sh
```
For multi-reference ROUGE and BARTScore evaluations on the SQuALITY dataset, use the notebook `multi_reference_evaluation_SQuAlITY.ipynb`; an illustrative sketch of multi-reference ROUGE scoring follows.
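The notebook implements the full evaluation; as a minimal sketch, multi-reference ROUGE is often computed by scoring a prediction against each reference and keeping the best score per metric. The `rouge-score` package calls below are standard, but the metric set, max aggregation, and data layout are assumptions about the notebook, not its actual contents.

```python
# Minimal multi-reference ROUGE sketch (not the notebook's exact logic).
# Assumes `pip install rouge-score`; taking the max over references is one
# common convention and may differ from the repository's choice.
from rouge_score import rouge_scorer

def multi_reference_rouge(prediction: str, references: list[str]) -> dict:
    """Return the best F1 per ROUGE metric over all references."""
    metrics = ["rouge1", "rouge2", "rougeLsum"]
    scorer = rouge_scorer.RougeScorer(metrics, use_stemmer=True)
    best = {m: 0.0 for m in metrics}
    for ref in references:
        scores = scorer.score(ref, prediction)  # (reference, prediction) order
        for metric, value in scores.items():
            best[metric] = max(best[metric], value.fmeasure)
    return best

if __name__ == "__main__":
    refs = ["The model summarizes the plot.", "It gives a plot summary."]
    print(multi_reference_rouge("The model gives a plot summary.", refs))
```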