Releases · s-JoL/Open-Llama
v2
Refactored the code: data loading now uses the `datasets` library, padding has been reduced from 30% to 5%, language model training logic is unified in a single trainer to remove redundant code, and a more reasonable config format is supported.
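
As a rough illustration of the padding reduction mentioned above, here is a minimal, hypothetical sketch (not code from this repo) of loading text with the `datasets` library and packing tokenized samples into fixed-length blocks, so almost no positions are wasted on padding. The tokenizer path, data file pattern, and block size are placeholder assumptions.

```python
# Sketch: load data with `datasets` and pack tokens into full blocks to cut padding.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/tokenizer")  # placeholder tokenizer
block_size = 2048  # assumed context length

raw = load_dataset("json", data_files={"train": "data/pretrain/*.jsonl"}, split="train")

def tokenize(batch):
    # Tokenize without padding; padding is avoided by packing below.
    return tokenizer(batch["text"], add_special_tokens=False)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

def pack(batch):
    # Concatenate all token ids, then slice into full fixed-size blocks.
    ids = [tok for seq in batch["input_ids"] for tok in seq]
    total = (len(ids) // block_size) * block_size
    chunks = [ids[i : i + block_size] for i in range(0, total, block_size)]
    return {"input_ids": chunks, "labels": [c[:] for c in chunks]}

packed = tokenized.map(pack, batched=True, remove_columns=tokenized.column_names)
```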
v1.0-alpha
Includes pretraining and instruction tuning.