llava support #88
Thank you for reaching out about using Neuron for LLaVA. We recommend that you enable this model using neuronx-distributed (NxD). NxD is a PyTorch-based library intended to give developers the ability to apply model parallelism/sharding techniques themselves. In order to apply tensor parallelism to a model, the weights and compute must be split across NeuronCores, and intermediary calculations are synchronized across NeuronCores using collective operations. Typically this is done by applying parallel layers to each portion of the network, in the form of RowParallelLinear, ColumnParallelLinear, and other layers (see the Parallel Layers Documentation). Please check the Llama example inference model for a reference on applying tensor parallelism to a model using NxD. You are welcome to submit a PR with your contribution when you implement the model.
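A minimal sketch of that column-parallel/row-parallel pattern with NxD's parallel layers, in the style of the Llama example; the MLP structure and the hidden sizes here are illustrative assumptions, not LLaVA's actual configuration:

```python
import torch
import torch.nn as nn
from neuronx_distributed.parallel_layers.layers import (
    ColumnParallelLinear,
    RowParallelLinear,
)


class ParallelMLP(nn.Module):
    """Illustrative MLP block whose weights are sharded across NeuronCores."""

    def __init__(self, hidden_size: int = 4096, intermediate_size: int = 11008):
        super().__init__()
        # Column-parallel: the output dimension is split, so each core
        # holds only a slice of the up-projection weight matrix.
        self.up_proj = ColumnParallelLinear(
            hidden_size, intermediate_size, bias=False, gather_output=False
        )
        self.act = nn.GELU()
        # Row-parallel: the input dimension is split; the layer performs an
        # all-reduce (a collective operation) so every core ends up with the
        # full output activation.
        self.down_proj = RowParallelLinear(
            intermediate_size, hidden_size, bias=False, input_is_parallel=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.act(self.up_proj(x)))
```

Pairing `gather_output=False` with `input_is_parallel=True` keeps the intermediate activation sharded between the two matmuls, so the single all-reduce inside RowParallelLinear is the only collective per block. Note that NxD's model-parallel state must be initialized first (e.g. via `neuronx_distributed.parallel_layers.parallel_state.initialize_model_parallel`), as in the Llama example.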
Hi @hannanjgaws Thanks for the response 👍 just a few comments: it seems that the LLaVA model is a combination of an LLM model + a vision model + an intermediate projector module (like any of these models). In the case of llava-1.6-mistral-*, as its name says, it uses the Mistral model, and looking at the implementation and the config, it also uses "clip_model" for the vision encoder part. So, a couple of questions:
Another interesting approach could be to have the "clip_model" in transformers-neuronx and to load it in conjunction with https://github.com/aws-neuron/transformers-neuronx/tree/main/src/transformers_neuronx/mistral plus the intermediate projector module (a rough sketch of this split follows).
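A rough sketch of that hybrid split, assuming the llava-hf checkpoint linked in this issue (a CLIP ViT-L/14-336 vision tower paired with a Mistral decoder) and the attribute names used by the transformers LLaVA-NeXT implementation; the `VisionTower` wrapper is a hypothetical helper, and the Mistral decoder itself would still come from transformers-neuronx:

```python
import torch
import torch_neuronx
from transformers import AutoConfig, LlavaNextForConditionalGeneration

MODEL_ID = "llava-hf/llava-v1.6-mistral-7b-hf"

# The config reflects the LLM + vision + projector split described above:
cfg = AutoConfig.from_pretrained(MODEL_ID)
print(cfg.vision_config.model_type)  # clip_vision_model
print(cfg.text_config.model_type)    # mistral

model = LlavaNextForConditionalGeneration.from_pretrained(MODEL_ID).eval()


class VisionTower(torch.nn.Module):
    """Hypothetical wrapper so the traced graph returns a plain tensor."""

    def __init__(self, tower):
        super().__init__()
        self.tower = tower

    def forward(self, pixel_values):
        # The real LLaVA code selects a specific hidden layer and drops the
        # CLS token; using last_hidden_state keeps this sketch simple.
        return self.tower(pixel_values).last_hidden_state


# Compile only the CLIP vision encoder for Neuron. The projector is small
# enough to stay on CPU, and the Mistral decoder would come from
# transformers-neuronx as suggested above.
example = torch.zeros(1, 3, 336, 336)  # CLIP ViT-L/14-336 input resolution
neuron_vision = torch_neuronx.trace(VisionTower(model.vision_tower), example)

image_features = neuron_vision(example)
image_embeds = model.multi_modal_projector(image_features)  # intermediate module
```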
Hi, I would like to follow up on this: are there any pointers on how to compile and run inference for a multi-modal model like this on Inf2?
Hi, I stopped this attempt on my side because it wasn't needed anymore (we are just waiting for someone to do it, and for now we just use GPU). The best guide I had at that moment was the comment from @hannanjgaws.
LLaVA multimodal support would be huge for AWS Neuron chips.
https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf
This one in particular is trending.
I'm not sure if this is the correct repo to file this issue in.