ChatBedrock should support default_headers
#283
Comments
cc @guilt
@efriis Thank you for filing this ticket. Since I am just getting started here, I wanted to clarify the request: if we were to add `default_headers: Optional[Mapping[str, str]] = None`, what would be the expected behavior? Would you want these headers forwarded to Boto, or are we expecting something different? Could I also ask you to provide sample usage code that we could use for testing this behavior?
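As a sketch of what the expected behavior might be (hypothetical, since `ChatBedrock` has no such field yet), a `default_headers` mapping would typically be merged under any per-request headers:

```python
from typing import Mapping, Optional

# Hypothetical sketch: how a default_headers field on ChatBedrock could
# combine with per-request headers. Nothing here exists in langchain-aws yet;
# the function name and precedence rule are assumptions for illustration.
def merge_headers(
    default_headers: Optional[Mapping[str, str]],
    request_headers: Optional[Mapping[str, str]] = None,
) -> dict:
    """Per-request headers take precedence over configured defaults."""
    merged = dict(default_headers or {})
    merged.update(request_headers or {})
    return merged
```

The precedence choice (request-level overrides defaults) mirrors how other chat model integrations usually treat constructor-level defaults.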
I was filing this on behalf of a community member - let me see if I can get them to respond with their needs! I think that behavior on invoke is quite confusing, so it might change what they're looking for.
I believe the ask is just to have the ability to attach custom headers to the request (potentially for tagging on an outbound proxy type thing) - is there not a concept of attaching headers in boto?
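boto does expose an event system through which a handler can mutate outgoing request headers. A minimal sketch of that mechanism (the header name is illustrative, and the registration part requires boto3 and AWS credentials, so it is shown without executing):

```python
# Sketch: botocore passes the outgoing prepared request to handlers
# registered for "before-send" events, and its headers can be mutated there.
def attach_custom_headers(request, **kwargs):
    """Event handler that tags an outgoing request with a custom header."""
    request.headers["x-outbound-proxy-tag"] = "my-team"  # illustrative value

# With a real client (not run here):
#
# import boto3
# client = boto3.client("bedrock-runtime")
# client.meta.events.register(
#     "before-send.bedrock-runtime.InvokeModel", attach_custom_headers
# )
```

This is the kind of hook a `default_headers` field on `ChatBedrock` could plug into internally, rather than asking users to wire up event handlers themselves.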
One of the good use cases, yeah - we can add headers in boto.
Yeah so for instance I want to be able to use prompt caching. https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching. To turn on that beta feature you need to include a special header: |
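For reference, on the direct-Anthropic side that beta header can already be attached via `ChatAnthropic`'s documented `default_headers` parameter. A hedged sketch (the header value is the one documented in Anthropic's prompt-caching guide at the time; the model name is illustrative):

```python
# The beta header described in Anthropic's prompt-caching guide.
prompt_caching_headers = {"anthropic-beta": "prompt-caching-2024-07-31"}

# With langchain-anthropic installed, this maps onto the documented
# default_headers parameter (shown without executing, since it needs an
# API key):
#
# from langchain_anthropic import ChatAnthropic
# llm = ChatAnthropic(
#     model="claude-3-5-sonnet-20240620",
#     default_headers=prompt_caching_headers,
# )
```

The ask in this issue is that `ChatBedrock` accept the same kind of mapping, so switching providers doesn't require provider-specific plumbing.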
@mdfederici Thank you for replying. Bedrock does support prompt caching, and it works slightly differently, as explained in the prompt-caching guide. Is that the use case you would like supported?
We frequently change models and providers across all the parts of our app that interact with LLMs. So for instance I may have a piece of code which uses Sonnet, and one time I run from the UI I'll select Sonnet on Bedrock, and another time I'll select Sonnet on Anthropic. We use LangChain to take care of the routing (among many other things). Ideally we wouldn't have to change code or maintain a bunch of different ways to do this depending on whether I am going to use Bedrock, Anthropic, OpenAI, whatever. What I'd like from LangChain is one consistent way to specify that I want prompt caching turned on, regardless of whether I'm using ChatBedrock or ChatAnthropic or whatever else. Prompt caching is just an example of that, not a particularly important feature that I need enabled in my app.
Thank you for explaining this. What does your current usage of headers look like for Anthropic vs. OpenAI? I would like to understand that. We will run this through @3coins to see if there's a nice way to specify unified configs across the various provider implementations, if that ends up making prompt caching easy across different providers. And if there isn't a long-term solution, we can for now work on enabling it in our two implementations.
similar to ChatAnthropic: https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html#langchain_anthropic.chat_models.ChatAnthropic.default_headers