[V1] Do not use inductor for piecewise CUDA graphs (vllm-project#10225)
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: OmerD <[email protected]>
WoosukKwon authored and omer-dayan committed Nov 14, 2024
1 parent 34e1ae3 commit c0899de
1 changed file: vllm/v1/worker/gpu_model_runner.py (3 additions, 4 deletions)
@@ -404,15 +404,14 @@ def execute_model(
 
     def load_model(self) -> None:
         if self.use_cuda_graph:
-            # FIXME(woosuk): Currently, the custom ops are not supported
-            # in the piecewise compilation mode. We rely on TorchInductor
-            # to optimize the model.
+            # FIXME(woosuk): Currently, we do not use inductor, in order to
+            # reduce compilation time and avoid potential inductor issues.
             os.environ["VLLM_CUSTOM_OPS"] = "none"
             set_compilation_config(
                 CompilationConfig(
                     use_cudagraph=True,
                     non_cudagraph_ops=["vllm.unified_v1_flash_attention"],
-                    use_inductor=True,
+                    use_inductor=False,
                 ))
 
         logger.info("Starting to load model %s...", self.model_config.model)
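For context: the configuration above enables piecewise CUDA graphs, splitting the model around vllm.unified_v1_flash_attention so attention runs eagerly between captured pieces, and (after this commit) skips TorchInductor codegen to reduce compilation time. Below is a minimal, generic sketch of the piecewise-capture idea using plain PyTorch CUDA graph APIs. It is not vLLM's implementation; piece_before, attention, and piece_after are hypothetical stand-ins for model segments.

import torch

# Hypothetical stand-ins for graph-safe model pieces and the attention op
# that stays outside CUDA graph capture (illustrative only, not vLLM code).
def piece_before(x: torch.Tensor) -> torch.Tensor:
    return torch.relu(x @ x)

def attention(x: torch.Tensor) -> torch.Tensor:
    return torch.softmax(x, dim=-1) @ x

def piece_after(x: torch.Tensor) -> torch.Tensor:
    return x * 2.0

# CUDA graphs replay fixed memory addresses, so inputs and outputs live in
# static buffers that are copied into and out of on every call.
static_in = torch.zeros(8, 8, device="cuda")
static_attn_out = torch.zeros(8, 8, device="cuda")

# Warm up on a side stream before capture, as the PyTorch docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        piece_after(attention(piece_before(static_in)))
torch.cuda.current_stream().wait_stream(s)

# Capture the two graph-safe pieces; attention stays outside the capture.
g1, g2 = torch.cuda.CUDAGraph(), torch.cuda.CUDAGraph()
with torch.cuda.graph(g1):
    static_mid = piece_before(static_in)
with torch.cuda.graph(g2):
    static_out = piece_after(static_attn_out)

def run(x: torch.Tensor) -> torch.Tensor:
    static_in.copy_(x)
    g1.replay()                                   # replay captured piece 1
    static_attn_out.copy_(attention(static_mid))  # eager attention in between
    g2.replay()                                   # replay captured piece 2
    return static_out.clone()

The design point mirrored here is that each captured piece replays against static buffers, while the op excluded from capture (attention, in vLLM's case) executes eagerly and its output is copied into the next piece's static input.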
