.Net: Tool filters #4922
Conversation
dotnet/samples/KernelSyntaxExamples/Example66_FunctionCallingStepwisePlanner.cs (outdated; resolved)
dotnet/src/Planners/Planners.OpenAI/Stepwise/FunctionCallingStepwisePlanner.cs (outdated; resolved)
Could you extract a common abstraction for ToolFilters and move it to SemanticKernel.Abstractions? Then I could use this abstraction in the Gemini connector.
private void UpdateChatHistory(ChatHistory chatHistory, ChatCompletionsOptions options, OpenAIPromptExecutionSettings executionSettings)
This is being called every time a filter is called, right? Is that to handle the case where the filter handler changes the history in some way and we need to sync it? Do we do any updates to the history to clean up if the tool call is canceled?
This is being called every time a filter is called, right? Is that to handle the case where the filter handler changes the history in some way and we need to sync it?

Yes and yes. If the filter changes the chat history, we would want that updated chat history to be included in the next request to the model. This helper function updates the chat history stored in chatOptions (which is what is sent to the model) to match what is in the chatHistory object.
Do we do any updates to the history to clean up if the tool call is canceled?
If the tool call is cancelled before invocation, we add an error message to the chat history instead of a response message. If the tool call is cancelled after invocation, I just have it keep the regular tool response message in the chat history but bail out of future tool calls. (I am not necessarily opposed to a different behavior here, but this made the most sense to me.) In both these cases, AddResponseMessage handles updating both the chatHistory and chatOptions objects.
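As a rough illustration of the sync described above, here is a minimal sketch. The ChatMessage record and list-based "options" are simplified stand-ins for the real connector types (ChatHistory and ChatCompletionsOptions), which this sketch does not attempt to reproduce:

```csharp
using System.Collections.Generic;

// Simplified, hypothetical sketch: after a filter may have mutated the chat
// history, rebuild the outgoing request payload so the next model call sees
// the updated history.
public sealed record ChatMessage(string Role, string Content);

public static class ChatHistorySync
{
    public static void UpdateChatHistory(IReadOnlyList<ChatMessage> chatHistory, List<ChatMessage> optionsMessages)
    {
        // Replace the outgoing messages wholesale with the (possibly modified) history.
        optionsMessages.Clear();
        optionsMessages.AddRange(chatHistory);
    }
}
```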
@@ -35,6 +37,12 @@ public abstract class ToolCallBehavior
/// </remarks>
private const int DefaultMaximumAutoInvokeAttempts = 5;
Are we going to expose this configuration?
I would argue that should be in a separate PR, since it's unrelated to the tool filters or the planner updates that this PR covers. (I don't mind creating that PR though; it should be fairly quick.) Also, if we expose this field, do we want to expose ToolCallBehavior.MaximumUseAttempts as well?
@Krzysztof318 Tool filters take a dependency on the tool call construct, which is specific to OpenAI. We don't currently have a generic abstraction for tool calls, because we haven't seen many (any?) other model providers that support it. When we start to see that, I think we can revisit moving this abstraction out of the connector. For your Gemini work, have you looked at
Does Gemini support function calling?
Yes, Gemini has support for function calling. I will come back to the topic of a common abstraction for tools when I finish the implementation.
/// Gets the collection of filters that will be applied to tool calls.
/// </summary>
[Experimental("SKEXP0016")]
public IList<IToolFilter> Filters { get; } = new List<IToolFilter>();
I don't think we want this here, at least not as a public mutable list. It means any code can just do ToolCallBehavior.AutoInvokeKernelFunctions.Filters.Add(myFilter), and it'll be added to the singleton that will apply to everyone, which also means you need to remember to remove filters after you're done with them, even in the case of an exception.
I think instead we should add overloads to the existing factories below, e.g.
public static ToolCallBehavior EnableFunctions(
    IEnumerable<OpenAIFunction>? functions, // if null, functions are sourced from the Kernel a la AutoInvokeKernelFunctions
    EnableFunctionsOptions options);
...
public sealed class EnableFunctionsOptions
{
    public bool AutoInvoke { get; set; }
    public IList<IToolFilter> Filters { get; }
    ... // any other customization desired
}
or something along those lines. That's just a sketch; names and overall shape are debatable.
try
{
    // Invoke the pre-invocation filter.
    var invokingContext = chatExecutionSettings.ToolCallBehavior?.OnToolInvokingFilter(openAIFunctionToolCall, chat, iteration);
There's a lot that happens between here and the actual function invocation, including actually getting the corresponding KernelFunction object and creating the KernelArguments to pass to it. I see the value of a callback that's really early in the process, so that a callback can see the raw request from the model, but in that case, should it actually be even earlier and be passed just the raw string name and string arguments? And then have a separate callback that's invoked with the Kernel, KernelFunction, KernelArguments, etc. just before function.InvokeAsync is actually called?
I think it'd be helpful to enumerate all the extensibility scenarios we're trying to enable here, i.e. the different things we expect folks will want to do with this, and then write out the example code for each, showing both that it's possible and what the code would look like. Those can all become samples, too.
For example:
- Want to limit the number of back and forths with the model, to avoid runaway costs or infinite loops, disabling additional function calling after some number of iterations
- Want to update what functions are available based on the interactions with the model
- Want to limit the number of recursive function invocations that can be made (e.g. agents talking back and forth to each other via function calling)
- Want to screen the arguments being passed to a function and replace the argument with something else
- Want to screen the results of a function and replace it with something else (it's possible this and the above would already be handled by normal function filters)
- Want to stop iterating if a particular function is requested, returning that function's result as the result of the operation (basically the eventual invocation of that function was the ultimate goal)
- ...
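The first scenario above can be sketched with a hypothetical filter. Note the IToolFilter interface, ToolInvokingContext shape, and Cancel flag here are assumptions based on this PR's discussion, not the final Semantic Kernel API:

```csharp
// Hypothetical sketch: a tool filter that disables further auto-invocation
// after a fixed number of model iterations, guarding against runaway costs
// or infinite loops. Interface and context are simplified stand-ins.
public interface IToolFilter
{
    void OnToolInvoking(ToolInvokingContext context);
}

public sealed class ToolInvokingContext
{
    public int ModelIterations { get; init; }

    // Set by a filter to request early exit from further tool calls.
    public bool Cancel { get; set; }
}

public sealed class MaxIterationsFilter : IToolFilter
{
    private readonly int _maxIterations;

    public MaxIterationsFilter(int maxIterations) => _maxIterations = maxIterations;

    public void OnToolInvoking(ToolInvokingContext context)
    {
        // Once the iteration budget is spent, stop auto-invoking functions.
        if (context.ModelIterations >= _maxIterations)
        {
            context.Cancel = true;
        }
    }
}
```

A flag-based context like this is also one way to express early exit without throwing OperationCanceledException, which is relevant to the catch-block discussion below.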
    var invokingContext = chatExecutionSettings.ToolCallBehavior?.OnToolInvokingFilter(openAIFunctionToolCall, chat, iteration);
    this.ApplyToolFilterContextChanges(invokingContext, chatOptions, chat, chatExecutionSettings, ref autoInvoke);
}
catch (OperationCanceledException)
This feels a bit icky to me. Does this mean that we're saying the way you early-exit non-exceptionally is to throw an OperationCanceledException?
}
catch (OperationCanceledException)
{
    // This tool call already happened so we can't cancel it, but bail out of any remaining tool calls
We're adding a message in the case of early-exit before tool invocation but not after tool invocation?
/// <param name="result">The result of the tool's invocation.</param>
/// <param name="chatHistory">The chat history associated with the operation.</param>
/// <param name="modelIterations">The number of model iterations completed thus far for the request.</param>
public ToolInvokedContext(OpenAIFunctionToolCall toolCall, object? result, ChatHistory chatHistory, int modelIterations)
FunctionResult instead of object for the result? Also, for the function invoked filters, there's more information supplied, e.g. Kernel, KernelFunction, KernelArguments, etc. Should this include all that, too? Can/should it derive from the same base classes as the other filters?
@dmytrostruk FYI - Historical
Motivation and Context
Resolves #4300
Associated ADR: #4686
Description
Contribution Checklist