[Minor] Fix small typo in llama.py: QKVParallelLinear -> QuantizationConfig (vllm-project#4991)
pcmoritz authored May 22, 2024
1 parent f3ce39d commit 0cbf251
Showing 1 changed file with 1 addition and 1 deletion.
vllm/model_executor/models/llama.py (2 changes: 1 addition & 1 deletion)
@@ -57,7 +57,7 @@ def __init__(
         hidden_size: int,
         intermediate_size: int,
         hidden_act: str,
-        quant_config: Optional[QKVParallelLinear] = None,
+        quant_config: Optional[QuantizationConfig] = None,
         bias: bool = False,
     ) -> None:
         super().__init__()
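
For reference, below is a minimal sketch of how the corrected signature reads in context. The import path and the class body are assumptions based on vLLM's layout around this revision, not copied from the commit; only the quant_config annotation reflects the actual change.

from typing import Optional

import torch.nn as nn

# Assumed import path for QuantizationConfig in vLLM around May 2024.
from vllm.model_executor.layers.quantization.base_config import (
    QuantizationConfig)


class LlamaMLP(nn.Module):
    """Illustrative stand-in for the class whose __init__ is patched."""

    def __init__(
        self,
        hidden_size: int,
        intermediate_size: int,
        hidden_act: str,
        # Fixed annotation: a quantization *config*, not the
        # QKVParallelLinear *layer* class the old hint named.
        quant_config: Optional[QuantizationConfig] = None,
        bias: bool = False,
    ) -> None:
        super().__init__()
        # Layer construction elided; the annotation is a type hint only,
        # so runtime behavior is unchanged by this commit.

Because Python does not enforce annotations at runtime, the old hint never caused a failure; the fix matters for readers and for static type checkers such as mypy.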
