Remove workaround for llama 3 because it is now supported by llama.cpp
countzero committed Apr 21, 2024
1 parent 9b7ccb0 commit cc88b06
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion quantize_weights_for_llama.cpp.ps1
@@ -65,7 +65,7 @@ ForEach ($repositoryName in $repositoryDirectories) {
     $convertParameters = "--outfile `"${unquantizedModelPath}`" `"${sourceDirectoryPath}`""

     # Some models have a Byte Pair Encoding (BPE) vocabulary type.
-    if (@("Smaug-72B-v0.1", "Meta-Llama-3-8B-Instruct", "Meta-Llama-3-70B-Instruct").Contains($repositoryName)) {
+    if (@("Smaug-72B-v0.1").Contains($repositoryName)) {
         $convertParameters = "--vocab-type `"bpe`" --pad-vocab $convertParameters"
     }
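The logic in the diff prepends extra converter flags for models whose vocabulary needs special handling, while leaving the base parameters untouched for everything else. A minimal sketch of that flag-prepending pattern, translated from PowerShell to POSIX shell for illustration (the `repositoryName` value and `model.gguf`/`source/` paths are placeholder assumptions, not the script's real values):

```shell
# Hypothetical sketch: conditionally prepend flags for BPE-vocab models.
repositoryName="Smaug-72B-v0.1"
convertParameters="--outfile \"model.gguf\" \"source/\""

# After this change, only Smaug-72B-v0.1 gets the BPE workaround;
# the Llama 3 models no longer need it.
case "$repositoryName" in
  "Smaug-72B-v0.1")
    convertParameters="--vocab-type \"bpe\" --pad-vocab $convertParameters"
    ;;
esac

echo "$convertParameters"
# prints: --vocab-type "bpe" --pad-vocab --outfile "model.gguf" "source/"
```

Prepending (rather than appending) keeps the positional source-path argument at the end of the command line, which is why the original script builds the string in this order.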
