hannahxchen/allocational-harm-eval

Code for our paper "The Mismeasure of Man and Models: Evaluating Allocational Harms in Large Language Models".
About
Code for evaluating allocational harms in machine learning and large language models.