
Unable to use non-Gemini models through ChatVertexAI #5196

Closed
5 tasks done
jarib opened this issue Apr 24, 2024 · 20 comments · Fixed by #6615
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@jarib
Contributor

jarib commented Apr 24, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { ChatVertexAI } from '@langchain/google-vertexai'

const chat = new ChatVertexAI({
    model: 'claude-3-opus@20240229',
    temperature: 0.0,
    maxOutputTokens: 1200,
})

const result = await chat.invoke('Tell me a story')

console.log(result)

Error Message and Stack Trace (if applicable)

Error: Unable to verify model params: {"lc":1,"type":"constructor","id":["langchain","chat_models","chat_integration","ChatVertexAI"],"kwargs":{"model":"claude-3-opus","temperature":0,"max_output_tokens":1200,"platform_type":"gcp"}}
    at validateModelParams (file:///Users/redacted/src/langchain-repro/node_modules/@langchain/google-common/dist/utils/common.js:100:19)
    at copyAndValidateModelParamsInto (file:///Users/redacted/src/langchain-repro/node_modules/@langchain/google-common/dist/utils/common.js:105:5)
    at new ChatGoogleBase (file:///Users/redacted/src/langchain-repro/node_modules/@langchain/google-common/dist/chat_models.js:191:9)
    at new ChatGoogle (file:///Users/redacted/src/langchain-repro/node_modules/@langchain/google-gauth/dist/chat_models.js:12:9)
    at new ChatVertexAI (file:///Users/redacted/src/langchain-repro/node_modules/@langchain/google-vertexai/dist/chat_models.js:10:9)
    at main (file:///Users/redacted/src/langchain-repro/test.js:5:22)
    at file:///Users/redacted/src/langchain-repro/test.js:22:1
    at ModuleJob.run (node:internal/modules/esm/module_job:218:25)
    at async ModuleLoader.import (node:internal/modules/esm/loader:329:24)
    at async loadESM (node:internal/process/esm_loader:28:7)

Description

I am trying to use ChatVertexAI with Anthropic Claude 3, but it seems this class only supports Gemini models and returns the above error message.

This appears to be a deliberate choice in the code:

switch (modelToFamily(model)) {
  case "gemini":
    return validateGeminiParams(testParams);
  default:
    throw new Error(
      `Unable to verify model params: ${JSON.stringify(params)}`
    );
}

I've verified that using Claude on Vertex AI via Anthropic's Vertex SDK works fine:

import { AnthropicVertex } from '@anthropic-ai/vertex-sdk'

const projectId = 'my-project-id'
const region = 'us-east5'

// Goes through the standard `google-auth-library` flow.
const client = new AnthropicVertex({
    projectId,
    region,
})

async function main() {
    const result = await client.messages.create({
        model: 'claude-3-opus@20240229',
        max_tokens: 100,
        messages: [
            {
                role: 'user',
                content: 'Hey Claude!',
            },
        ],
    })
    console.log(JSON.stringify(result, null, 2))
}

main()

Output:

{
  "id": "msg_vrtx_01M1yGR5LiteHznRmyK2MaPG",
  "type": "message",
  "role": "assistant",
  "model": "claude-3-opus-20240229",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 10,
    "output_tokens": 12
  },
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I assist you today?"
    }
  ],
  "stop_reason": "end_turn"
}

System Info

    "@anthropic-ai/vertex-sdk": "^0.3.5",
    "@google-cloud/vertexai": "^1.1.0",
    "@langchain/google-vertexai": "^0.0.11",
    "langchain": "^0.1.35"

Platform: Mac
Node: 20.11.0


dosubot bot commented Apr 24, 2024

To address the issue of using Anthropic Claude 3 with LangChain.js, particularly with the ChatVertexAI class, it's important to note that the current implementation indeed focuses on Gemini models, leading to the error you encountered. However, you can integrate Claude 3 by adapting the approach used for Anthropic models in LangChain.js, specifically through a custom implementation similar to the ChatAnthropic class.

Here's a concise guide to get you started:

  1. Ensure Dependencies: Make sure you have both LangChain and the Anthropic SDK installed in your project.

  2. Use the ChatAnthropic Class: Import and configure the ChatAnthropic class from LangChain to work with Claude 3. You'll need to specify the model name, API key, and other relevant settings like temperature and max tokens.

  3. Invoke the Model: Utilize the invoke method for sending prompts to Claude 3 and receiving responses. For interactive applications, consider exploring streaming responses.

  4. Customize as Needed: Tailor the integration to fit your application's needs, adjusting prompt formatting, response handling, or integrating with other services as required.

This approach allows you to leverage Claude 3's capabilities within LangChain.js applications. For detailed implementation and examples, refer to the ChatAnthropic class in the LangChain.js repository and the associated documentation.
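As a rough sketch of steps 2 and 3 (an illustration only; it talks to Anthropic's API directly using an ANTHROPIC_API_KEY, not through Vertex AI):

import { ChatAnthropic } from "@langchain/anthropic";

// Sketch: uses Anthropic's own API, not Vertex AI.
const chat = new ChatAnthropic({
  model: "claude-3-opus-20240229",
  temperature: 0,
  maxTokens: 1200,
  // apiKey defaults to process.env.ANTHROPIC_API_KEY
});

const result = await chat.invoke("Tell me a story");
console.log(result.content);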

Keep in mind, this solution circumvents the ChatVertexAI class's current limitations with non-Gemini models. The LangChain.js team is working on expanding support for a broader range of models, so staying updated with the repository may provide more streamlined solutions in the future.


@dosubot added the auto:bug label Apr 24, 2024
@afirstenberg
Contributor

Similar to #2562

The issue is that Claude in Vertex AI is neither the same protocol as Gemini, nor quite the same as what is offered through Anthropic. So we need something that combines the two.

Do-able, but hasn't risen in priority.

My general thoughts about how to do this would be

  • The Anthropic library (and possibly the @langchain/anthropic library) would be a build dependency and an optional dependency to use it. Probably use this for the types, mostly, and for parsing the results.
  • Add a utility class for anthropic.
    • The things that call baseMessageToContent, safeResponseToChatGeneration, and safeResponseToChatResult need to call their equivalents in this class
  • Changes in GoogleAIConnection
    • modelFamily() should add "claude" or something along those lines. And GoogleLLMModelFamily should be adjusted as well (probably).
    • buildUrlVertex() needs a change to handle non-Google models (possibly determine that based on the model family?)
  • Probably make a subclass of GoogleAIConnection or AbstractGoogleLLMConnection to handle the Claude layout.
    But I haven't dug into the details.

The good news is that the authentication part is handled by default with the Connection classes.
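To make the modelFamily() piece concrete, here is a hypothetical sketch (the names and string checks below are assumptions for illustration, not the actual @langchain/google-common internals):

// Hypothetical sketch only; not the actual @langchain/google-common code.
type ModelFamily = "gemini" | "claude" | null;

function modelToFamily(modelName?: string): ModelFamily {
  if (!modelName) return null;
  if (modelName.startsWith("gemini")) return "gemini";
  if (modelName.startsWith("claude")) return "claude";
  return null;
}

// validateModelParams could then dispatch on the family, e.g.:
// case "claude": return validateClaudeParams(testParams);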

@ianwoodfill

+1, would be a very useful capability. Seems like it's already supported in BedrockChat, which is great.

@yermie

yermie commented Jun 24, 2024

+1, especially given the renewed interest in Claude 3.5 Sonnet

@DanKupisz

+1 for Claude 3.5 Sonnet

@jeloi

jeloi commented Jun 25, 2024

+1 another vote - Claude 3.5 support through Vertex AI is a key integration for companies on GCP to adopt Langchain.

@afirstenberg
Contributor

I see the +1s - this is now next on my priority list!

I'm aiming to get #5835 into a stable state ASAP so it can be merged, since it contains some updates that would be useful for this effort as well. The approach I outlined above still mostly holds.

Sorry I couldn't get this in place for I/O Connect, but I hope to have good news soon!

@MJCyto

MJCyto commented Jul 3, 2024

Could you extend this to other LLMs in the Model Garden, such as Llama 3?

@afirstenberg
Contributor

Yes... but...

In the Vertex AI Model Garden, Llama 3 requires you to deploy the model to an endpoint you control, and the details of that API are poorly documented. Claude has fuller API support, with "model-as-a-service" and a standard endpoint.

My current thinking is that I'll be able to provide direct support through the current classes we have (i.e. you'll just be able to specify a Claude model and it will work), while you may need to do some work for other models in the Model Garden, including deploying an endpoint.

@DanKupisz

Hi @afirstenberg, any update on the progress? Do you have an ETA?
Many thanks!!

@Aliceeeee825

+1, are there any updates on this?

@Zhuqi1108

+1 to Claude 3.5 Sonnet via Vertex

@afirstenberg
Contributor

It is still next on my list and in progress - but no updates to share.

@afirstenberg
Contributor

Looks like I'll be adding this for Mistral at the same time.

https://cloud.google.com/blog/products/ai-machine-learning/codestral-and-mistral-large-v2-on-vertex-ai

@afirstenberg
Contributor

Also Llama 3.1

https://console.cloud.google.com/vertex-ai/publishers/meta/model-garden/llama3-405b-instruct-maas

@d-liya

d-liya commented Jul 28, 2024

Here's a temporary solution for using Anthropic with Vertex AI and LangChain.

You need to install the following additional packages:

  • google-auth-library
  • @anthropic-ai/vertex-sdk

Your usage should look like this:

import { GoogleAuth } from "google-auth-library";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatVertexAntropicAI } from "./ChatVertexAntropicAI";

const llm = new ChatVertexAntropicAI({
  googleAuth: new GoogleAuth({
    scopes: "https://www.googleapis.com/auth/cloud-platform",
    // your credentials
  }),
  region: <region>,
  projectId: <project_id>,
  model: "claude-3-5-sonnet@20240620",
});

// Streaming
const stream = await llm
  .pipe(new StringOutputParser())
  .stream([["human", "Hi, who are you?"]]);
for await (const chunk of stream) {
  console.log(chunk);
}

// Non-streaming
const response = await llm.invoke("Hi, who are you?");

And here is the implementation for ChatVertexAntropicAI.ts:

import {
  SimpleChatModel,
  type BaseChatModelParams,
} from "@langchain/core/language_models/chat_models";
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import { AIMessageChunk, type BaseMessage } from "@langchain/core/messages";
import { ChatGenerationChunk } from "@langchain/core/outputs";
import { AnthropicVertex } from "@anthropic-ai/vertex-sdk";
import { GoogleAuth } from "google-auth-library";

export interface ChatVertexAntropicAIInput extends BaseChatModelParams {
  googleAuth: GoogleAuth;
  projectId: string;
  region: string;
  model: string;
  maxTokens?: number;
  temperature?: number;
}

export class ChatVertexAntropicAI extends SimpleChatModel {
  client: AnthropicVertex;
  model: string;
  maxTokens: number;
  temperature: number;

  constructor(fields: ChatVertexAntropicAIInput) {
    super(fields);
    this.model = fields.model;
    this.maxTokens = fields.maxTokens ?? 1000;
    this.temperature = fields.temperature ?? 0;
    this.client = new AnthropicVertex({
      googleAuth: fields.googleAuth,
      projectId: fields.projectId,
      region: fields.region,
    });
  }

  _llmType() {
    return "vertex-ai-custom";
  }

  lg_to_anthropic(messages: BaseMessage[]) {
    return messages.map((message) => {
      // Map LangChain message types onto Anthropic roles.
      const isUser = message._getType() === "human";
      return {
        role: isUser ? "user" : "assistant",
        content: [
          {
            type: "text",
            text: message.content,
          },
        ],
      };
    }) as any;
  }

  async invoke_llm(messages: BaseMessage[]) {
    const response = await this.client.messages.create({
      model: this.model,
      max_tokens: this.maxTokens,
      temperature: this.temperature,
      messages: this.lg_to_anthropic(messages),
    });
    return response.content[0].type === "text" ? response.content[0].text : "";
  }

  async stream_llm(messages: BaseMessage[]) {
    const response = this.client.messages.stream({
      model: this.model,
      max_tokens: this.maxTokens,
      temperature: this.temperature,
      messages: this.lg_to_anthropic(messages),
    });
    return response;
  }

  async _call(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): Promise<string> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }

    return await this.invoke_llm(messages);
  }

  async *_streamResponseChunks(
    messages: BaseMessage[],
    options: this["ParsedCallOptions"],
    runManager?: CallbackManagerForLLMRun
  ): AsyncGenerator<ChatGenerationChunk> {
    if (!messages.length) {
      throw new Error("No messages provided.");
    }
    if (typeof messages[0].content !== "string") {
      throw new Error("Multimodal messages are not supported.");
    }
    // Pass `runManager?.getChild()` when invoking internal runnables to enable tracing
    // await subRunnable.invoke(params, runManager?.getChild());
    const response = await this.stream_llm(messages);
    for await (const chunk of response) {
      if (chunk.type === "content_block_delta") {
        const delta = chunk.delta.type === "text_delta" ? chunk.delta.text : "";
        yield new ChatGenerationChunk({
          message: new AIMessageChunk({
            content: delta,
          }),
          text: delta,
        });
        // Trigger the appropriate callback for new chunks
        await runManager?.handleLLMNewToken(delta);
      }
    }
  }
}

Important Note

This implementation is currently limited to handling text-only input and output. Other modalities and tool calling are not implemented.

@obround
Contributor

obround commented Aug 14, 2024

Edit: This is also easy to patch in @langchain/anthropic, a far better solution.

@d-liya -- Nice approach, here is a more general one (should work with tools, only tested with ReAct):

import {BaseChatModel, type BaseChatModelParams, BindToolsInput} from "@langchain/core/language_models/chat_models";
import {AnthropicVertex} from "@anthropic-ai/vertex-sdk";
import {AIMessage, BaseMessage, BaseMessageChunk} from "@langchain/core/messages";
import {CallbackManagerForLLMRun} from "@langchain/core/callbacks/manager";
import {Runnable} from "@langchain/core/runnables";
import {BaseLanguageModelInput} from "@langchain/core/language_models/base";
import {convertToOpenAITool} from "@langchain/core/utils/function_calling";
import type {Tool as AnthropicTool} from "@anthropic-ai/sdk/resources/index.mjs";


export interface ChatAnthropicVertexInput extends BaseChatModelParams {
    projectId: string;
    region: string;
    model: string;
    maxTokens?: number;
    temperature?: number;
    topP?: number;
    topK?: number;
}


export function convertToAnthropicTool(tool: BindToolsInput): AnthropicTool {
    const formatted = convertToOpenAITool(tool).function;
    return {
        name: formatted.name,
        description: formatted.description,
        input_schema: formatted.parameters as AnthropicTool.InputSchema
    }
}

function formatMessagesAnthropic(
    messages: BaseMessage[]
): [string | null, { role: "user" | "assistant"; content: string }[]] {
    let system_message: string | null = null;
    const formatted_messages: { role: "user" | "assistant"; content: string }[] = [];

    for (const [i, message] of messages.entries()) {
        if (message._getType() === "system") {
            if (i !== 0) throw new Error("system message must be first");
            if (typeof message.content !== "string") throw new Error("system message must be a string");
            system_message = message.content;
            continue;
        }

        if (typeof message.content !== "string") {
            throw new Error("multimodal message content is not implemented");
        } else if (message instanceof AIMessage) {
            throw new Error("AI messages are not implemented");
        }
        const role = message._getType() === "human" ? "user" : "assistant";
        const content = message.content;

        formatted_messages.push({ role, content });
    }
    return [system_message, formatted_messages];
}


export class ChatAnthropicVertex extends BaseChatModel<ChatAnthropicVertexInput> {
    client: AnthropicVertex;
    model: string;
    maxTokens: number;
    temperature: number;
    topP: number;
    topK: number;

    static lc_name(): string {
        return "ChatAnthropicVertex";
    }

    get lc_secrets(): { [key: string]: string } {
        return {
            "authOptions.credentials": "GOOGLE_VERTEX_AI_WEB_CREDENTIALS",
        };
    }

    _llmType(): string {
        return "vertex-ai-custom";
    }

    _identifyingParams(): Record<string, any> {
        return {
            modelName: this.model
        }
    }

    bindTools(tools: BindToolsInput[], kwargs?: Partial<ChatAnthropicVertexInput>): Runnable<BaseLanguageModelInput, BaseMessageChunk, ChatAnthropicVertexInput> {
        const formattedTools = tools.map(convertToAnthropicTool);
        return this.bind({
            tools: formattedTools,
            ...kwargs
        });
    }

    bind(kwargs: Partial<ChatAnthropicVertexInput>): Runnable<BaseLanguageModelInput, BaseMessageChunk, ChatAnthropicVertexInput> {
        return super.bind(kwargs);
    }

    constructor(fields: ChatAnthropicVertexInput) {
        super(fields);
        this.model = fields.model;
        this.maxTokens = fields.maxTokens ?? 1024;
        this.temperature = fields.temperature ?? 0;
        this.topP = fields.topP ?? 0.95;
        this.topK = fields.topK ?? 40;
        this.client = new AnthropicVertex({
            projectId: fields.projectId,
            region: fields.region,
        });
    }

    async invokeLLM(messages: BaseMessage[], stop?: string[]) {
        const [systemMessage, formattedMessages] = formatMessagesAnthropic(messages);
        const response = await this.client.messages.create({
            model: this.model,
            max_tokens: this.maxTokens,
            temperature: this.temperature,
            top_p: this.topP,
            top_k: this.topK,
            messages: formattedMessages,
            ...systemMessage && {system: systemMessage},
            ...stop && {stop_sequences: stop}
        })
        return response.content[0].type === "text" ? response.content[0].text : "";
    }

    async _generate(
        messages: BaseMessage[],
        options: this["ParsedCallOptions"],
        runManager?: CallbackManagerForLLMRun
    ) {
        if (!messages.length) {
            throw new Error("messages array cannot be empty");
        }
        if (typeof messages[0].content !== "string") {
            throw new Error("multimodal messages are not supported yet");
        }
        const result = await this.invokeLLM(messages, options.stop);
        return {
            generations: [{message: new AIMessage(result), text: result}]
        }
    }
}
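Usage would then look roughly like this (a sketch only; the project ID, region, and model name below are placeholders):

import { ChatAnthropicVertex } from "./ChatAnthropicVertex";

const llm = new ChatAnthropicVertex({
  projectId: "my-project-id", // placeholder
  region: "us-east5",         // placeholder
  model: "claude-3-5-sonnet@20240620",
  topK: 40,
});

const result = await llm.invoke("Tell me a story");
console.log(result.content);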

@rxliuli
Contributor

rxliuli commented Aug 23, 2024

Is it possible to modify @langchain/anthropic to pass in a custom Anthropic client?

import { expect, it } from 'vitest'
import { authenticate } from '../authenticate'
import { ChatAnthropic } from 'langchain-anthropic-edge'
import { AnthropicVertex } from '@anthropic-ai/vertex-sdk'
import { GoogleAuth } from 'google-auth-library'
import { AnthropicVertexWeb } from '../AnthropicVertex'

it('Call Anthropic Vertex on nodejs', async () => {
  const client = new AnthropicVertex({
    region: import.meta.env.VITE_VERTEX_ANTROPIC_REGION,
    projectId: import.meta.env.VITE_VERTEX_ANTROPIC_PROJECTID,
    googleAuth: new GoogleAuth({
      credentials: {
        client_email: import.meta.env
          .VITE_VERTEX_ANTROPIC_GOOGLE_SA_CLIENT_EMAIL!,
        private_key: import.meta.env
          .VITE_VERTEX_ANTROPIC_GOOGLE_SA_PRIVATE_KEY!,
      },
      scopes: ['https://www.googleapis.com/auth/cloud-platform'],
    }),
  })
  const chat = new ChatAnthropic({
    apiKey: 'test',
    model: import.meta.env.VITE_VERTEX_ANTROPIC_MODEL,
    createClient: (() => client) as any,
  })
  const response = await chat.invoke([['human', 'Hello!']])
  console.log(response)
  expect(response).not.undefined
})
it('Call Anthropic Vertex on edge runtime', async () => {
  const token = await authenticate({
    clientEmail: import.meta.env.VITE_VERTEX_ANTROPIC_GOOGLE_SA_CLIENT_EMAIL!,
    privateKey: import.meta.env.VITE_VERTEX_ANTROPIC_GOOGLE_SA_PRIVATE_KEY!,
  })
  const client = new AnthropicVertexWeb({
    region: import.meta.env.VITE_VERTEX_ANTROPIC_REGION,
    projectId: import.meta.env.VITE_VERTEX_ANTROPIC_PROJECTID,
    accessToken: token.access_token,
  })
  const chat = new ChatAnthropic({
    apiKey: 'test',
    model: import.meta.env.VITE_VERTEX_ANTROPIC_MODEL,
    createClient: (() => client) as any,
  })
  const response = await chat.invoke([['human', 'Hello!']])
  console.log(response)
  expect(response).not.undefined
}, 10_000)

@obround
Contributor

obround commented Aug 23, 2024

@rxliuli -- Yep, here is the diff we use to switch langchain js to AnthropicVertex.

diff --git a/node_modules/@langchain/anthropic/dist/chat_models.js b/node_modules/@langchain/anthropic/dist/chat_models.js
index 14bbafc..46047e1 100644
--- a/node_modules/@langchain/anthropic/dist/chat_models.js
+++ b/node_modules/@langchain/anthropic/dist/chat_models.js
@@ -1,4 +1,3 @@
-import { Anthropic } from "@anthropic-ai/sdk";
 import { AIMessageChunk } from "@langchain/core/messages";
 import { ChatGenerationChunk } from "@langchain/core/outputs";
 import { getEnvironmentVariable } from "@langchain/core/utils/env";
@@ -8,6 +7,7 @@ import { zodToJsonSchema } from "zod-to-json-schema";
 import { RunnablePassthrough, RunnableSequence, } from "@langchain/core/runnables";
 import { isZodSchema } from "@langchain/core/utils/types";
 import { isLangChainTool } from "@langchain/core/utils/function_calling";
+import { AnthropicVertex } from "@anthropic-ai/vertex-sdk";
 import { AnthropicToolsOutputParser } from "./output_parsers.js";
 import { extractToolCallChunk, handleToolChoice } from "./utils/tools.js";
 import { _formatMessagesForAnthropic } from "./utils/message_inputs.js";
@@ -422,6 +422,7 @@ export class ChatAnthropicMessages extends BaseChatModel {
         return {
             anthropicApiKey: "ANTHROPIC_API_KEY",
             apiKey: "ANTHROPIC_API_KEY",
+            "authOptions.credentials": "GOOGLE_VERTEX_AI_WEB_CREDENTIALS",
         };
     }
     get lc_aliases() {
@@ -537,11 +538,11 @@ export class ChatAnthropicMessages extends BaseChatModel {
         });
         this.anthropicApiKey =
             fields?.apiKey ??
-                fields?.anthropicApiKey ??
-                getEnvironmentVariable("ANTHROPIC_API_KEY");
-        if (!this.anthropicApiKey) {
-            throw new Error("Anthropic API key not found");
-        }
+            fields?.anthropicApiKey ??
+            getEnvironmentVariable("ANTHROPIC_API_KEY");
+        // if (!this.anthropicApiKey) {
+        //     throw new Error("Anthropic API key not found");
+        // }
         this.clientOptions = fields?.clientOptions ?? {};
         /** Keep anthropicApiKey for backwards compatibility */
         this.apiKey = this.anthropicApiKey;
@@ -755,10 +756,10 @@ export class ChatAnthropicMessages extends BaseChatModel {
     async createStreamWithRetry(request, options) {
         if (!this.streamingClient) {
             const options_ = this.apiUrl ? { baseURL: this.apiUrl } : undefined;
-            this.streamingClient = new Anthropic({
+            this.streamingClient = new AnthropicVertex({
                 ...this.clientOptions,
                 ...options_,
-                apiKey: this.apiKey,
+                // apiKey: this.apiKey,
                 // Prefer LangChain built-in retries
                 maxRetries: 0,
             });
@@ -774,13 +775,13 @@ export class ChatAnthropicMessages extends BaseChatModel {
     async completionWithRetry(request, options) {
         if (!this.batchClient) {
             const options = this.apiUrl ? { baseURL: this.apiUrl } : undefined;
-            if (!this.apiKey) {
-                throw new Error("Missing Anthropic API key.");
-            }
-            this.batchClient = new Anthropic({
+            // if (!this.apiKey) {
+            //     throw new Error("Missinxg Anthropic API key.");
+            // }
+            this.batchClient = new AnthropicVertex({
                 ...this.clientOptions,
                 ...options,
-                apiKey: this.apiKey,
+                // apiKey: this.apiKey,
                 maxRetries: 0,
             });
         }

@rxliuli
Contributor

rxliuli commented Aug 23, 2024

@obround I just made a similar modification to @langchain/anthropic. Would the langchainjs team be willing to accept this PR?

PR: #6615
