
Why use dot as a measure of similarity? #11

Open
celia01 opened this issue Jun 6, 2018 · 5 comments

Comments

celia01 commented Jun 6, 2018

In utils.py, why is the dot product of the embeddings used in the getSimilarity function?
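
(For reference, a minimal sketch of the computation I am asking about, assuming the embeddings are an n × d NumPy array; this is a paraphrase for illustration, not the exact repository code:)

```python
import numpy as np

def get_similarity(embeddings):
    # Pairwise similarity as inner products of embedding vectors:
    # S[i, j] = <e_i, e_j>
    return np.dot(embeddings, embeddings.T)

emb = np.random.rand(5, 16)   # e.g. 5 nodes, 16-dimensional embeddings
S = get_similarity(emb)       # S[i, j] is the dot-product score for nodes i and j
```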

suanrong (Owner) commented Jun 9, 2018

The paper uses the dot product to measure similarity, while the loss function optimizes the Euclidean distance. That confuses me as well.
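
(For comparison, a minimal sketch of what a Euclidean-distance-based similarity could look like instead, negated so that a larger value still means "more similar", in line with the proximity term the loss optimizes; this is an illustration, not code from the repository:)

```python
import numpy as np

def euclidean_similarity(embeddings):
    # Pairwise negative Euclidean distance: larger value = more similar.
    sq_norms = np.sum(embeddings ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (embeddings @ embeddings.T)
    return -np.sqrt(np.maximum(sq_dists, 0.0))
```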

@lijunweiyhn

Sorry, but where does the original paper specify how similarity is calculated? I saw that the code uses the dot product, too. Thanks.

@suanrong (Owner)

Sorry, the author did not mention it in the paper. I once asked him, and he told me he had tried using Euclidean distance to measure similarity but got worse performance.

@lijunweiyhn

Thanks, same here. Judging from the loss function, Euclidean distance seems like the most reasonable choice, but when I used the embeddings for a link prediction task (on my own dataset), the results were embarrassingly poor.
