
Regarding embedding returned by the model #153

Open · divyag11 opened this issue Mar 11, 2020 · 4 comments
@divyag11

I am loading the bert-base-nli-mean-tokens model to get sentence embeddings. I have a question: from which layer of the BERT-base model do you take the embedding that the bert-base-nli-mean-tokens model returns? Please let me know.
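
For context, a minimal usage sketch with the sentence-transformers API (the sentence text is illustrative):

```python
# Minimal sketch: load the model and encode one sentence.
# Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-nli-mean-tokens")
embedding = model.encode("This is a sample sentence.")
print(embedding.shape)  # (768,) -- one vector per sentence for a BERT-base backbone
```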

@nreimers
Member

Hi @divyag11
The last layer (the output layer) is used. These outputs are then mean-pooled.

Best
Nils Reimers
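
A sketch of the mean pooling described here: average the last-layer token embeddings, weighting by the attention mask so padding tokens are ignored. This uses the Hugging Face transformers API with an illustrative backbone name, not the exact NLI-fine-tuned weights:

```python
# Mean pooling over the last hidden layer, masking out padding tokens.
# Assumes transformers >= 4.x; "bert-base-uncased" is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer(["This is a sample sentence."], return_tensors="pt", padding=True)
with torch.no_grad():
    last_hidden = bert(**inputs).last_hidden_state    # (batch, seq_len, 768)

mask = inputs["attention_mask"].unsqueeze(-1).float() # (batch, seq_len, 1)
sentence_emb = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_emb.shape)                             # torch.Size([1, 768])
```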

@divyag11
Author

Okay, but is the last layer of the pretrained BERT-base model taken, or the last layer of the BERT model fine-tuned on the NLI task?

@nreimers
Member

The last layer of BERT, fine-tuned on NLI data, is used. The classifier head used for NLI training is discarded.
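
A quick way to see this is to print the model's module list: the saved pipeline contains only the NLI-fine-tuned transformer plus a mean-pooling layer, with no classification head. A sketch (printed output abbreviated and illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-nli-mean-tokens")
print(model)
# Prints roughly:
# SentenceTransformer(
#   (0): Transformer(...)                             # BERT fine-tuned on NLI
#   (1): Pooling(..., pooling_mode_mean_tokens=True)  # mean pooling; no NLI classifier
# )
```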

@divyag11
Author

Okay, thanks a lot @nreimers!
