Return feedback #9629
Conversation
@@ -62,13 +63,13 @@ class EvaluatorCallbackHandler(BaseTracer):
        The LangSmith project name to organize eval chain runs under.
    """

    name: str = "evaluator_callback_handler"
Out of curiosity, why?
Heh, wasn't intentional.
        except Exception as e:
            logger.error(
                f"Error evaluating run {run.id} with "
                f"{evaluator.__class__.__name__}: {e}",
                exc_info=True,
            )
            raise e
        example_id = str(run.reference_example_id)
        self.logged_feedback.setdefault(example_id, []).append(feedback)
Maybe a noob question, but why can't the feedback just be returned?
The callbacks are all called via the run manager in the depths of the call stack, so a returned value would just be ignored, sadly.
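To make that design concrete, here is a minimal, self-contained sketch of the pattern being described; it is not the PR's actual code, and every class and variable name other than logged_feedback is illustrative. The point it shows: the component that invokes the callbacks drops their return values, so the handler has to accumulate feedback on itself and the caller reads it back after the runs finish.

```python
# Sketch only: demonstrates why feedback is stored on the handler instead of
# returned. Names other than `logged_feedback` are hypothetical.
from collections import defaultdict


class RunManager:
    """Stand-in for the run manager deep in the call stack."""

    def __init__(self, handlers):
        self.handlers = handlers

    def on_run_end(self, run):
        for handler in self.handlers:
            handler.on_run_end(run)  # any return value is dropped here


class FeedbackCollectingHandler:
    """Accumulates feedback per example id, since returning it is not possible."""

    def __init__(self):
        self.logged_feedback = defaultdict(list)

    def on_run_end(self, run):
        feedback = {"key": "correctness", "score": 1}  # placeholder evaluation
        self.logged_feedback[str(run["reference_example_id"])].append(feedback)


handler = FeedbackCollectingHandler()
manager = RunManager([handler])
manager.on_run_end({"reference_example_id": "example-1"})

# The caller retrieves the accumulated feedback after the runs complete.
print(handler.logged_feedback)
```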
Compare: 2b88733 to f6dc4fb
Return the feedback values in an eval run result
Also made a helper method to display the feedback as a dataframe, but it may be overkill.
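For illustration, a hedged sketch of what such a dataframe helper could look like; the function name and the feedback fields below are assumptions, not the PR's actual API. It just flattens the per-example feedback mapping into one row per feedback entry with pandas.

```python
# Hypothetical helper: flatten {example_id: [feedback, ...]} into a DataFrame.
import pandas as pd


def feedback_to_dataframe(logged_feedback: dict) -> pd.DataFrame:
    """Return one row per feedback entry, keyed by example_id."""
    rows = []
    for example_id, feedback_list in logged_feedback.items():
        for feedback in feedback_list:
            rows.append({"example_id": example_id, **feedback})
    return pd.DataFrame(rows)


df = feedback_to_dataframe(
    {"example-1": [{"key": "correctness", "score": 1, "comment": "looks right"}]}
)
print(df)
```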