The LLM Evaluation guidebook ⚖️

If you've ever wondered how to make sure an LLM performs well on your specific task, this guide is for you! It covers the different ways you can evaluate a model, guides on designing your own evaluations, and tips and tricks from practical experience.

Whether you're working with models in production, a researcher, or a hobbyist, I hope you'll find what you need; if not, open an issue (to suggest improvements or missing resources) and I'll complete the guide!

How to read this guide

  • Beginner user: If you don't know anything about evaluation, start with the Basics sections of each chapter before diving deeper. You'll also find explanations of important LLM topics in General knowledge: for example, how model inference works and what tokenization is.
  • Advanced user: The most practical sections are the Tips and tricks ones and the Troubleshooting chapter. You'll also find interesting material in the Designing sections.

In the text, links prefixed by ⭐ are ones I particularly enjoyed and recommend reading.

Table of contents

If you want an introduction to the topic, you can read this blog post on how and why we do evaluation!

Automatic benchmarks

Human evaluation

LLM-as-a-judge

Troubleshooting

The most densely practical part of this guide.

General knowledge

These are mostly beginner guides to LLM basics, but they still contain some tips and cool references! If you're an advanced user, I suggest skipping ahead to the Going further sections.

Planned next articles

  • contents/automated-benchmarks/Metrics: Description of automatic metrics
  • contents/Introduction: Why do we need to do evaluation?
  • contents/Thinking about evaluation: What are the high-level things you always need to consider when building your task?
  • contents/Troubleshooting/Troubleshooting ranking: Why comparing models is hard

Resources

Links I like

Thanks

This guide has been heavily inspired by the ML Engineering Guidebook by Stas Bekman! Thanks for this cool resource!

Many thanks also to all the people who inspired this guide through discussions at events or online, notably but not limited to:

  • 🤝 Luca Soldaini, Kyle Lo and Ian Magnusson (Allen AI), Max Bartolo (Cohere), Kai Wu (Meta), Swyx and Alessio Fanelli (Latent Space Podcast), Hailey Schoelkopf (EleutherAI), Martin Signoux (OpenAI), Moritz Hardt (Max Planck Institute), Ludwig Schmidt (Anthropic)
  • 🔥 community users of the Open LLM Leaderboard and lighteval, who often raised very interesting points in discussions
  • 🤗 people at Hugging Face, like Lewis Tunstall, Omar Sanseviero, Arthur Zucker, Hynek Kydlíček, Guilherme Penedo and Thom Wolf
  • and of course my team ❤️, who work on evaluation and leaderboards: Nathan Habib and Alina Lozovskaya.

Citation

CC BY-NC-SA 4.0

@misc{fourrier2024evaluation,
  author = {Clémentine Fourrier and The Hugging Face Community},
  title = {LLM Evaluation Guidebook},
  year = {2024},
  journal = {GitHub repository},
  url = {https://github.com/huggingface/evaluation-guidebook}
}
