
voyage-ai/results


benchmark: mteb
type: evaluation
submission_name: MTEB

Note

Previously, it was possible to submit model results to MTEB by adding them to the model metadata. This is no longer an option, as we want to ensure high-quality metadata.

This repository contains the results of the embedding benchmark, evaluated using the mteb package.
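
For context, result files of this kind are typically produced by running an evaluation with the mteb Python package and pointing it at an output folder. The following is a minimal sketch, assuming a recent mteb release where mteb.get_model, mteb.get_tasks, and MTEB.run are available; the model and task names are placeholders, not a prescribed configuration.

    # Minimal sketch: run an mteb evaluation and write per-task score files to an output folder.
    # Assumes a recent mteb release; the model and task names below are only examples.
    import mteb

    # Load an embedding model by name (placeholder model).
    model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

    # Select one or more benchmark tasks to evaluate on (placeholder task).
    tasks = mteb.get_tasks(tasks=["Banking77Classification"])

    # Run the evaluation; scores are written as JSON files under the output folder.
    evaluation = mteb.MTEB(tasks=tasks)
    results = evaluation.run(model, output_folder="results")

The exact file layout and the submission workflow are described in the mteb guides referenced below.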

Reference

  • 🦾 Leaderboard: an up-to-date leaderboard of embedding models
  • 📚 mteb: guides and instructions on how to use mteb, including running evaluations, submitting scores, etc.
  • 🙋 Questions: questions about the results
  • 🙋 Issues: issues or bugs you have found

