Commit a7e9918
add heuristic logic for weight padding
charlifu committed Nov 14, 2024
1 parent 04aa1a7 commit a7e9918
Showing 1 changed file with 4 additions and 2 deletions.
6 changes: 4 additions & 2 deletions vllm/model_executor/layers/quantization/fp8.py
@@ -248,8 +248,10 @@ def process_weights_after_loading(self, layer: Module) -> None:
             )
 
             # Pad the weight
-            if envs.VLLM_FP8_PADDING:
-                weight = F.pad(weight, (0, 256), "constant", 0)[..., :-256]
+            if envs.VLLM_FP8_PADDING and weight.stride(-1) == 1 \
+                    and (weight.stride(-2) * weight.element_size()) % 512 == 0:
+                num_pad = 256 // weight.element_size()
+                weight = F.pad(weight, (0, num_pad), "constant", 0)[..., :-num_pad]
                 torch.cuda.empty_cache()
 
             # Update layer with new values.
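
In words, the new heuristic: padding only fires when the weight's innermost dimension is contiguous (stride(-1) == 1) and each row starts on a 512-byte boundary (stride(-2) * element_size() divisible by 512), and the pad length is now 256 bytes' worth of elements rather than a hard-coded 256 elements, which keeps the padding constant in bytes across dtypes. The F.pad(...)[..., :-num_pad] idiom allocates a wider buffer and immediately slices the pad back off: the logical shape is unchanged, but the row stride grows by 256 bytes. The sketch below reproduces the heuristic so it can be run standalone; maybe_pad_weight is a hypothetical name (not a vLLM function), the VLLM_FP8_PADDING gate is omitted, and fp16 stands in for the fp8 dtype so the example runs on CPU.

```python
import torch
import torch.nn.functional as F


def maybe_pad_weight(weight: torch.Tensor) -> torch.Tensor:
    """Pad 256 bytes onto each row's storage, keeping the logical shape.

    Hypothetical standalone version of the heuristic in the commit above.
    """
    if (weight.stride(-1) == 1  # innermost dim must be contiguous
            and (weight.stride(-2) * weight.element_size()) % 512 == 0):
        # 256 bytes expressed in elements: 256 for 1-byte fp8, 128 for fp16.
        num_pad = 256 // weight.element_size()
        # F.pad allocates a wider tensor; slicing the pad off restores the
        # original shape while keeping the enlarged row stride.
        weight = F.pad(weight, (0, num_pad), "constant", 0)[..., :-num_pad]
    return weight


# A 4096 x 4096 fp16 weight: row stride is 4096 * 2 = 8192 bytes, a multiple
# of 512, so the heuristic pads each row by 128 elements of storage.
w = torch.zeros(4096, 4096, dtype=torch.float16)
padded = maybe_pad_weight(w)
print(padded.shape)     # torch.Size([4096, 4096]) -- logical shape unchanged
print(padded.stride())  # (4224, 1) -- 4096 + 128 elements between rows
```

In the actual change this branch stays behind the VLLM_FP8_PADDING environment flag, and torch.cuda.empty_cache() runs afterwards, presumably to return the memory of the replaced, un-padded weight to the allocator.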
