Question about reproducing results from paper #22
Closed
maverick-harmonic started this conversation in General
Replies: 2 comments · 4 replies
-
Hi, if I'm understanding correctly, the model is trained for only one step, so it's essentially no different from the pretrained ByT5? Can you run evaluate.py in debug mode (environment variable …)?
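A minimal sketch of one way to launch the script with a debug environment variable set, assuming the variable is `VERBOSE` (the exact name is cut off above, so treat it as an assumption):

```python
# Minimal sketch: launch evaluate.py with a debug environment variable set.
# VERBOSE=1 is an assumption (the exact variable name is truncated in the
# comment above); the paths mirror the command quoted later in the thread.
import os
import subprocess

env = dict(os.environ, VERBOSE="1")  # assumed debug flag; confirm the real name
subprocess.run(
    [
        "python", "prover/evaluate.py",
        "--data-path", "data/leandojo_benchmark/novel_premises/",
        "--ckpt_path", "lightning_checkpoints/last.ckpt/",
        "--split", "test",
    ],
    check=True,
    env=env,
)
```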
4 replies
-
That’s fantastic. Thanks!
On Thu, Jul 27, 2023 at 4:09 PM Kaiyu Yang wrote:
I just uploaded the checkpoints in PyTorch Lightning format so that you no longer need the 1-step training trick: https://huggingface.co/kaiyuy/leandojo-pl-ckpts/tree/main
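A minimal sketch of one way to fetch those checkpoints locally, assuming the `huggingface_hub` Python package is installed (the repository's file layout is not spelled out in this thread, so the sketch just lists whatever .ckpt files it finds):

```python
# Minimal sketch: download the PyTorch Lightning checkpoints from the
# Hugging Face Hub and list the .ckpt files that were fetched.
# Assumes `pip install huggingface_hub`; the repository's file layout is
# not documented here, hence the glob instead of a hard-coded filename.
import glob
import os

from huggingface_hub import snapshot_download

# Download every file in kaiyuy/leandojo-pl-ckpts into the local cache
# and return the directory path.
ckpt_dir = snapshot_download(repo_id="kaiyuy/leandojo-pl-ckpts")

for path in glob.glob(os.path.join(ckpt_dir, "**", "*.ckpt"), recursive=True):
    print(path)
```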
0 replies
-
Hello,
My goal was to reproduce the results of tactic generation without retrieval on the novel premises split. Here is what I did: I trained with ReProver/generator/confs/cli_novel_premises.yaml [1] to create a checkpoint that I can use with the evaluation script, then ran [2]:
PYTHONPATH=$PYTHONPATH:~/w/lean-dojo/LeanDojo/src python prover/evaluate.py --data-path data/leandojo_benchmark/novel_premises/ --ckpt_path lightning_checkpoints/last.ckpt/ --split test --num-cpus 4 --with-gpus --timeout 60
Any ideas/tips on what I'm doing wrong?
[1] Here's the diff I used against commit 762c32fc6e8c6e29fa446dada8aac27f678fe4c8.
[2] See below:
[3]
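As a sketch of how this evaluation could be wired up programmatically, here is the same command with the PYTHONPATH handling made explicit; the checkpoint path is kept as a variable so it can be swapped for one of the Lightning checkpoints mentioned in the replies (no specific filename from that repository is assumed here):

```python
# Minimal sketch: the evaluation command above, launched via subprocess with
# PYTHONPATH extended the same way. ckpt_path can point at the locally trained
# checkpoint or at a .ckpt file downloaded from the repository mentioned in
# the replies (its filenames are not listed in this thread).
import os
import subprocess

ckpt_path = "lightning_checkpoints/last.ckpt/"  # or a downloaded .ckpt file

env = dict(os.environ)
env["PYTHONPATH"] = os.pathsep.join(
    p for p in (env.get("PYTHONPATH"), os.path.expanduser("~/w/lean-dojo/LeanDojo/src")) if p
)

subprocess.run(
    [
        "python", "prover/evaluate.py",
        "--data-path", "data/leandojo_benchmark/novel_premises/",
        "--ckpt_path", ckpt_path,
        "--split", "test",
        "--num-cpus", "4",
        "--with-gpus",
        "--timeout", "60",
    ],
    check=True,
    env=env,
)
```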