Model exchange #709
Comments
Do you mean the model is exchanged between TF, PyTorch, and so on?
Yes, exactly: to exchange models across Kubeflow-supported frameworks.
@bhack what's your use case for exchanging models between different frameworks? Is https://github.com/onnx/onnx a viable solution for you? Personally, I think it's out of scope for tf-operator.
@ddysher I think the Google/TF position of not officially supporting ONNX is particularly relevant for Kubeflow, since Kubeflow wants to be a cross-framework tool. As I said, I opened the issue here because pytorch-operator and mxnet-operator support ONNX natively.
/cc @tjingrant
Kubeflow wants to be multi-framework, but I don't know that we want to be cross-framework. I think it's perfectly fine if each framework has its own ecosystem and there isn't necessarily overlap between them (e.g. if you can't serve PyTorch using TFServing). I think it's really up to the different tools (e.g. ONNX, TFServing) to determine whether they want to be cross-framework or not. Is there a lot of demand for model exchange? My assumption is that once you pick a framework (e.g. TF vs. PyTorch), most users are probably fine using tools within that ecosystem.
@jlewi Generally, people want ONNX because they want to work with a single framework, but since the model portfolio differs across frameworks, they want to use as many models as possible without switching frameworks. For example, have you seen the TensorRT approach to serving with respect to supported model formats?
@bhack No, I haven't looked at TensorRT. Are you saying that people want ONNX because they want to try out a bunch of different models, and those models could be in different frameworks (e.g. TF vs. PyTorch)? By "framework", are you referring to serving infrastructure in this case (e.g. a model server as in ONNX or TensorRT), as opposed to a DL framework?
On the first point, yes. Users can find published models from paper authors who use different frameworks, or models engineered by different teams, and they don't want to change frameworks just to use that specific model/research, e.g. for fine-tuning. On the serving side, TensorRT 4 has ONNX support, but also TensorFlow. There is an example of how to run a TensorRT inferencing service. The specific thing is that ONNX is not officially supported by TensorFlow, but something is community-maintained by @tjingrant's IBM team.
I do not have much to add since my experience with Kubeflow is limited, but it occurs to me that having an onnx-operator repository in Kubeflow, where users can specify backends, might make more sense? Then users can just have ONNX models in their model portfolio that are ready to be deployed to multiple backends for *training* (italicized because there's not much going on with supporting training in ONNX [or maybe even a bit of reverse progress, as they are removing training-related flags]). This way it looks more like an issue suited for the ONNX spec group.
I think this issue is out of the scope of tf-operator. There are some candidates, though:
I agree with @gaocegege. |
What is the position of tf-operator, and thus Kubeflow, on model exchange between Kubeflow-supported frameworks?
I opened the issue here because it seems to me that the TF position is still not clearly resolved. See tensorflow/tensorflow#1288.