Remove the fixed eot_token mechanism for SFT (#927)
Not all pretrained LLMs use `<|endoftext|>` as the `eot_token`, so it is inappropriate to hard-code it.

Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
3 people authored Oct 30, 2024
1 parent aa4459f commit eefb0ef
Showing 1 changed file with 8 additions and 3 deletions.
@@ -191,7 +191,13 @@ def parse_args():
     parser.add_argument(
         "--add_eot_token",
         action='store_true',
-        help="Add <|endoftext|> as additional special token to tokenizer")
+        help="Add `eot_token` as additional special token to tokenizer")
+    parser.add_argument(
+        "--eot_token",
+        type=str,
+        default="<|endoftext|>",
+        help="Specify the format of the `eot_token`",
+    )
     ## Print loss
     parser.add_argument('--print_loss',
                         action='store_true',
@@ -234,8 +240,7 @@ def main():
     torch.distributed.barrier()
 
     # load_hf_tokenizer will get the correct tokenizer and set padding tokens based on the model family
-    args.end_of_conversation_token = "<|endoftext|>"
-    additional_special_tokens = args.end_of_conversation_token if args.add_eot_token else None
+    additional_special_tokens = args.eot_token if args.add_eot_token else None
     tokenizer = load_hf_tokenizer(args.model_name_or_path,
                                   fast_tokenizer=True,
                                   add_special_tokens=additional_special_tokens)
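To make the change concrete, here is a minimal, hypothetical sketch of how the new arguments flow into a tokenizer. This is not the repository's code: AutoTokenizer stands in for the repo's load_hf_tokenizer helper, and the example token name in the comment is a placeholder.

# Minimal sketch (assumption: transformers.AutoTokenizer is used here as a
# simplified stand-in for the repository's load_hf_tokenizer helper; the
# argument names mirror the commit above).
import argparse
from transformers import AutoTokenizer

parser = argparse.ArgumentParser()
parser.add_argument("--model_name_or_path", type=str, required=True)
parser.add_argument("--add_eot_token",
                    action="store_true",
                    help="Add `eot_token` as additional special token to tokenizer")
parser.add_argument("--eot_token",
                    type=str,
                    default="<|endoftext|>",
                    help="Specify the format of the `eot_token`")
args = parser.parse_args()

tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=True)
if args.add_eot_token:
    # Register the model-specific end-of-text token, e.g. "<|im_end|>"
    # for chat-style models that do not use "<|endoftext|>".
    tokenizer.add_special_tokens(
        {"additional_special_tokens": [args.eot_token]})

Note that if the token is genuinely new to the vocabulary, the model's embedding matrix would also need resizing (model.resize_token_embeddings(len(tokenizer)) in Hugging Face Transformers) before training.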
