[RFC]: Make device agnostic for diverse hardware support #9268
Comments
we can do it step by step. this can be the first step, and should be easy to do. the rest might need some case-by-case discussion.

JFYI, refactoring of the neuron backend checking is done by #9374

can you elaborate on that?

Hey, I think the idea is very interesting and the problem surely must've been tackled many times across many projects. Bringing one example to the table, ort (https://onnxruntime.ai/docs/execution-providers/) has the concept of an "ExecutionProvider", but the interface is simple enough to group common operations into higher-level framework-specific abstractions, so you don't have to implement dozens of functions. TFLite had delegates, but I think that example isn't as good. Some pain points off the top of my head: execution on CPU will likely implement only a small subset of all the ops, so the executor/worker/interface logic has to have good defaults; calling into a closed-source accelerator lib may mean not all the functions are implemented (not applicable here, but e.g. CoreML), same point.

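(For reference, the provider-priority-and-fallback behaviour mentioned above shows up directly in standard onnxruntime usage; a minimal example, with the model path as a placeholder:)

```python
import onnxruntime as ort

# Providers are tried in priority order; ops the accelerator EP does not
# implement fall back to the next provider in the list, typically ending
# at the CPU execution provider.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```
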
Each backend in ort will implement its own

Of course, a full discussion is necessary.

@youkaichao, could you give some advice on this?

we should start by sorting by the number of if-else branches. if there are more than 3 branches, it means at least 3 backends support this feature, and we can move it inside `current_platform`. if not, we can just keep them as they are right now.

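To illustrate the heuristic with the `is_pin_memory_available` case from the list below (a hypothetical before/after sketch: `is_cpu`/`is_neuron`/`is_xpu` are the existing helpers named in this RFC, while `is_xpu_pinned_memory_usable` is an illustrative stand-in, not real vLLM code):

```python
# "Before": the same backend dispatch is repeated at several call sites.
if is_cpu() or is_neuron():
    pin_memory = False  # illustrative: assume no pinned host memory here
elif is_xpu():
    pin_memory = is_xpu_pinned_memory_usable()  # illustrative helper
else:
    pin_memory = True  # CUDA/ROCm default

# "After": the branching lives once inside each platform backend, and
# every call site becomes backend-agnostic.
pin_memory = current_platform.is_pin_memory_available()
```
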
Thanks! I got what you mean; I'll do this work step by step.

@youkaichao I've listed the remaining methods involving multiple backend branches, and will implement them one by one in the following PRs. If you have any suggestions, please let me know.

please don't directly change it; let's do it step by step. the others need further discussion.

I added #10402 as a first step to absorb some config checking and updating code into

Sure, I'll start this work at the xpu executor :-)

Motivation.
vLLM has already been adapted to many hardware devices, such as `GPU`, `TPU`, and `XPU`. However, adapting these backends requires implementing separate `Worker`/`Executor`/`Model Runner` frameworks for each, which leads to code redundancy and maintenance difficulties.

In fact, this hardware framework code can be abstracted at the device layer, forming a unified framework. This way, only one set of code would need to be maintained, and different backends would only need to implement the device-layer interfaces plus any device-specific logic where necessary.

I also found that some new features are only added to the GPU-related code. These changes are often applicable to other hardware as well, but it is difficult for other backends to notice and follow such updates.
Proposed Change.
This RFC is intended to establish a unified framework. Integrating each hardware framework into a common one may be difficult, but it makes sense to work in this direction; the diagram below shows a proposed solution.
Taking `Executor` as an example: for third-party hardware devices based on the `pytorch` ecosystem, the basic torch interfaces have already been well adapted, so after abstracting the device-related hard coding such as `torch.cuda` and `torch.xpu`, the `GPU Executor` could be used as the `Common Executor` for all third-party devices.

Following #6080, different hardware backends can put their own device-specific code in `NewBackendPlatform`, so that the framework can be device-agnostic through `current_platform`. For example, `torch.cuda.synchronize` could use `current_platform.synchronize`.
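As a minimal sketch of what such a device layer could look like (hypothetical class and method names loosely following the `current_platform` idea; not vLLM's actual implementation):

```python
from abc import ABC, abstractmethod

import torch


class Platform(ABC):
    """Hypothetical device-layer interface; one subclass per backend."""

    @abstractmethod
    def synchronize(self) -> None: ...

    @abstractmethod
    def device_count(self) -> int: ...


class CudaPlatform(Platform):
    def synchronize(self) -> None:
        torch.cuda.synchronize()

    def device_count(self) -> int:
        return torch.cuda.device_count()


class CpuPlatform(Platform):
    def synchronize(self) -> None:
        pass  # nothing to synchronize on CPU

    def device_count(self) -> int:
        return 1


# Resolved once at startup based on the detected hardware (sketch only).
current_platform: Platform = (
    CudaPlatform() if torch.cuda.is_available() else CpuPlatform()
)
```

With this in place, framework code calls `current_platform.synchronize()` and never touches `torch.cuda` directly; adding a new backend means adding one `Platform` subclass rather than forking the executor.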
Feedback Period.
Realizing this idea will involve many files, so the following steps have been sorted out to achieve the above goal (a rough sketch of the resulting device-agnostic worker appears after the list):
- `is_cpu` -> `current_platform.is_cpu`
- `is_xpu` -> `current_platform.is_xpu`
- `is_openvino` -> `current_platform.is_openvino`
- `is_neuron` -> `current_platform.is_neuron`
- `is_hip` -> `current_platform.is_rocm`
- `seed_everything` -> `current_platform.seed_everything`
- `is_pin_memory_available` -> `current_platform.is_pin_memory_available`
- `DeviceMemoryProfiler` -> `current_platform.memory_profiler`
- `wrap_device` -> `current_platform.wrap_device`
- `torch.xxx.get_device_name` -> `current_platform.get_device_name`
- `torch.xxx.Event` -> `current_platform.Event`
- `torch.xxx.synchronize` -> `current_platform.synchronize`
- `torch.xxx.Stream` -> `current_platform.Stream`
- `torch.xxx.stream` -> `current_platform.stream`
- `torch.xxx.empty_cache` -> `current_platform.empty_cache`
- `torch.xxx.device_count` -> `current_platform.device_count`
- `torch.xxx.memory_allocated` -> `current_platform.memory_allocated`
- `torch.xxx.set_device` -> `current_platform.set_device`
- `torch.xxx.current_device` -> `current_platform.current_device`
- `torch.xxx.get_device_capability` -> `current_platform.get_device_capability`
- `gpu(neuron,openvino,tpu,xpu,..)_executor` -> `common_backend_executor`
- `gpu(neuron,openvino,tpu,xpu,..)_worker` -> `common_backend_worker`
- `gpu(neuron,openvino,tpu,xpu,..)_model_runner` -> `common_backend_model_runner`
There are surely omissions here, and difficulties will appear during actual implementation; this list will keep being updated.
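To make the last three items concrete, here is a rough sketch of the direction (a hypothetical `CommonWorker`; it assumes a resolved `current_platform` object exposing the methods from the mapping list above, as in the earlier sketch):

```python
class CommonWorker:
    """Hypothetical device-agnostic worker: every device-specific call goes
    through current_platform instead of torch.cuda / torch.xpu / etc., so a
    single worker can serve all backends."""

    def __init__(self, local_rank: int):
        self.local_rank = local_rank

    def init_device(self) -> None:
        # Replaces torch.xxx.set_device plus the per-backend seeding helpers.
        current_platform.set_device(self.local_rank)
        current_platform.seed_everything(0)

    def profile_memory(self) -> int:
        # Replaces torch.xxx.synchronize / torch.xxx.memory_allocated.
        current_platform.synchronize()
        return current_platform.memory_allocated()
```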
CC List.
@youkaichao @WoosukKwon
Any Other Things.
No response