[arXiv] | [Paper]
The International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2023
- EndoVis-18-VQA [EndoVis-18-VQA Q&A pair annotation]
- Cholec80-VQA [Cholec80-VQA Q&A pair annotation]
- PSI-AVA-VQA [PSI-AVA-VQA Q&A pair annotation]
- Train LV-GPT (Swin) on EndoVis18-VQA with early word tokens, no VisualBert vision embedding, and zero vision position embedding:
- model_subver:
- 'v0' : Vision tokens are further embedded using VisualBert vision embedding
- "v1' :Vision tokens are directly used as vision embedding
- dataset_type:
- 'm18' : EndoVis18-VQA
- 'c80' : Cholec80-VQA
- 'psi' : PSI-AVA-VQA
- vis_pos_emb:
- None : no vision position embedding
- 'pos' : vision token positions = 0, 1, 2, 3, ..., n
- 'zeroes' : all vision token positions = 0
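The effect of these flags can be summarized with a minimal sketch (illustrative only; the function names, the tensor arguments, and the `visualbert_embedding` callable are assumptions for exposition, not the repository's actual API):

```python
import torch

def vision_position_ids(num_tokens: int, vis_pos_emb):
    # vis_pos_emb='pos'    -> sequential positions 0, 1, 2, ..., n-1
    # vis_pos_emb='zeroes' -> every vision token is assigned position 0
    # vis_pos_emb=None     -> no position embedding is added to the vision tokens
    if vis_pos_emb == 'pos':
        return torch.arange(num_tokens)
    if vis_pos_emb == 'zeroes':
        return torch.zeros(num_tokens, dtype=torch.long)
    return None

def embed_vision_tokens(vision_tokens, model_subver, visualbert_embedding):
    # model_subver='v0' -> pass the Swin vision tokens through a
    #                      VisualBert-style vision embedding layer
    # model_subver='v1' -> use the Swin vision tokens directly
    if model_subver == 'v0':
        return visualbert_embedding(vision_tokens)
    return vision_tokens
```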
Sample command (training)
python train.py --lr=0.00001 --checkpoint_dir='checkpoints/efvlegpt2Swin/m18_v1_z_qf_' --dataset_type='m18' --tokenizer_ver='gpt2v1' --model_ver='efvlegpt2Swin' --model_subver='v1' --vis_pos_emb='zeroes'
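Other dataset/flag combinations follow the same pattern. For instance, a run on Cholec80-VQA with the 'v0' sub-version and sequential position embedding might look like the command below (the checkpoint directory name is an assumption mirroring the naming convention above):

python train.py --lr=0.00001 --checkpoint_dir='checkpoints/efvlegpt2Swin/c80_v0_p_qf_' --dataset_type='c80' --tokenizer_ver='gpt2v1' --model_ver='efvlegpt2Swin' --model_subver='v0' --vis_pos_emb='pos'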
Sample command (evaluation)
python Evaluation.py --model_ver efvlegpt2Swin --dataset_type m18 --checkpoint checkpoints/efvlegpt2Swin/m18_v1_z_qf_Best.pth.tar
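Evaluating a model trained on another dataset only requires matching --dataset_type and --checkpoint; for example (the checkpoint path is assumed to follow the training command's naming):

python Evaluation.py --model_ver efvlegpt2Swin --dataset_type c80 --checkpoint checkpoints/efvlegpt2Swin/c80_v1_z_qf_Best.pth.tar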
Sample command (question-type evaluation)
python typewise_evaluation.py --model_ver efvlegpt2Swin --dataset_type m18 --checkpoint checkpoints/efvlegpt2Swin/m18_2/m18_v1_z_qf_Best.pth.tar --class_file "dataset/EndoVis-18-VQA/Val/endovis_C1.txt"
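To break the results down by another question category, point --class_file at the corresponding class list; for example, assuming a second category file endovis_C2.txt exists alongside endovis_C1.txt:

python typewise_evaluation.py --model_ver efvlegpt2Swin --dataset_type m18 --checkpoint checkpoints/efvlegpt2Swin/m18_2/m18_v1_z_qf_Best.pth.tar --class_file "dataset/EndoVis-18-VQA/Val/endovis_C2.txt"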