llava support #88

Open
sonic182 opened this issue Jun 20, 2024 · 4 comments

Comments

@sonic182

Support for the LLaVA multimodal model on AWS Neuron chips would be huge.

https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf

This in particular is trending

I'm not sure if this is the correct repo for this issue.

@hannanjgaws
Contributor

Thank you for reaching out about using Neuron for LLaVa.

We recommend that you enable this model using neuronx-distributed (NxD). NxD is a PyTorch-based library intended to give developers the ability to apply model parallelism/sharding techniques themselves. To apply tensor parallelism to a model, the weights and compute must be split across NeuronCores, and intermediate results are synchronized across NeuronCores using collective operations. Typically this is done by replacing portions of the network with parallel layers such as RowParallelLinear and ColumnParallelLinear (see the Parallel Layers documentation).

Please check the Llama inference example for a reference on applying tensor parallelism to a model using NxD. You are welcome to submit a PR with your contribution once you implement the model.
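
To make the idea concrete, here is a minimal sketch (mine, not an official Neuron sample) of what swapping a transformer MLP's dense projections for NxD parallel layers can look like. The import path and keyword arguments follow the Parallel Layers documentation, but please verify them against the neuronx-distributed version you install, and note that model-parallel state has to be initialized (e.g. via parallel_state.initialize_model_parallel) before these layers are constructed:

```python
# Minimal sketch, not an official Neuron sample: swapping the dense
# projections of one transformer MLP block for NxD parallel layers.
# Verify the import path and keyword arguments against your installed
# neuronx-distributed version; model-parallel state must be initialized
# first (e.g. parallel_state.initialize_model_parallel(tensor_model_parallel_size=2)).
import torch
import torch.nn as nn
from neuronx_distributed.parallel_layers.layers import (
    ColumnParallelLinear,
    RowParallelLinear,
)


class ParallelMLP(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # Column-parallel: the output features are sharded across NeuronCores,
        # so each core computes only a slice of the intermediate activation.
        self.up_proj = ColumnParallelLinear(
            hidden_size, intermediate_size, bias=False, gather_output=False
        )
        self.act = nn.GELU()
        # Row-parallel: the input features are sharded; an all-reduce collective
        # sums the partial results so every core ends up with the full output.
        self.down_proj = RowParallelLinear(
            intermediate_size, hidden_size, bias=False, input_is_parallel=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.act(self.up_proj(x)))
```

The same kind of substitution would need to be applied to the attention projections of the language backbone and, where appropriate, to the vision encoder before tracing the model.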

@sonic182
Author

sonic182 commented Jun 24, 2024

Hi @hannanjgaws

Thanks for the response 👍 just a few comments:

It seems that the LLaVA model is a combination of an LLM + a vision model + an intermediate projection module (like the other models in this family).

In the case of llava-1.6-mistral-*, as the name says, it uses the Mistral model, and looking at the implementation and the config, it also uses a "clip_model" for the vision encoder part.

So, a couple of questions:

  • To compile it for Inf2 instances, the approach should be to replace the inner layers with the corresponding parallel layers before doing a trace with a sample input, shouldn't it?

  • After it is compiled, when loading the .pt file, should I use model.load_state_dict(hf_model_state_dict) to load the weights? (See the rough sketch below.)
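
For reference, here is the rough (untested) ordering I have in mind. build_llava_with_parallel_layers and shard_for_this_rank are hypothetical helpers, and the parallel_model_trace / parallel_model_save / parallel_model_load entry points are the ones described in the NxD tracing guide, so their exact signatures should be double-checked against the installed version:

```python
# Rough, untested sketch of the flow discussed above, just to show the ordering:
# build the sharded model, load (partitioned) weights, then trace with sample inputs.
import torch
import neuronx_distributed as nxd

def get_sharded_llava():
    # Hypothetical helper: LLaVA with its inner nn.Linear layers swapped for
    # NxD parallel layers, as in the earlier comment.
    model = build_llava_with_parallel_layers()
    # With tensor parallelism each rank only holds a slice of every sharded
    # weight, so the full HF state dict (hf_model_state_dict) generally has to
    # be partitioned per rank before calling load_state_dict.
    # shard_for_this_rank is a hypothetical helper for that step.
    model.load_state_dict(shard_for_this_rank(hf_model_state_dict), strict=False)
    return model.eval()

sample_inputs = (
    torch.zeros(1, 32, dtype=torch.long),  # input_ids (placeholder shape)
    torch.zeros(1, 3, 336, 336),           # pixel_values (placeholder shape)
)
traced = nxd.trace.parallel_model_trace(get_sharded_llava, sample_inputs, tp_degree=2)
nxd.trace.parallel_model_save(traced, "llava_neuron_tp2")
# At inference time the compiled artifacts are loaded back, rather than the
# original .pt state dict:
restored = nxd.trace.parallel_model_load("llava_neuron_tp2")
```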

Another interesting approach could be to have the "clip_model" in transformers-neuronx and load it in conjunction with https://github.com/aws-neuron/transformers-neuronx/tree/main/src/transformers_neuronx/mistral plus the intermediate projection module.
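
If that hybrid route were explored, a very rough (untested) sketch of the two pieces might look like the following. CLIPVisionModel is from Hugging Face transformers, torch_neuronx.trace from torch-neuronx, and MistralForSampling from transformers-neuronx, so the exact class and entry-point names should be checked against the installed versions; the missing part (not shown) is the projector and how to feed the image embeddings into the transformers-neuronx generation loop:

```python
# Very rough sketch of the hybrid idea above; untested, only showing the pieces.
import torch
import torch_neuronx
from transformers import CLIPVisionModel
from transformers_neuronx.mistral.model import MistralForSampling


class VisionTower(torch.nn.Module):
    """Thin wrapper so tracing returns a plain tensor instead of a ModelOutput."""
    def __init__(self, name: str):
        super().__init__()
        self.clip = CLIPVisionModel.from_pretrained(name)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        return self.clip(pixel_values).last_hidden_state


# 1) Compile the CLIP vision encoder for Inf2 with torch-neuronx.
vision = VisionTower("openai/clip-vit-large-patch14-336").eval()
pixels = torch.zeros(1, 3, 336, 336)
vision_neuron = torch_neuronx.trace(vision, pixels)

# 2) Load the Mistral backbone with transformers-neuronx (tensor-parallel).
llm = MistralForSampling.from_pretrained("mistralai/Mistral-7B-v0.1", tp_degree=2, amp="f16")
llm.to_neuron()
# The projector and the image-embedding injection into generation are the
# open questions this issue is about.
```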

@minhtcai

minhtcai commented Jul 1, 2024

Hi, I would like to follow up on this. Do we have any pointers on how to compile and run inference for a multi-modal model like this on Inf2?

@sonic182
Author


Hi,

I stopped this attempt on my side because it wasn't needed anymore (we're just waiting for someone to do it, and for now we just use GPUs).

The best guide I had at that point was the comment from @hannanjgaws above.
