ChatBot
The IntelliNode module provides various language models, including OpenAI's ChatGPT and the Llama V2 model from Replicate or AWS SageMaker.
We will demonstrate the setup for OpenAI's ChatGPT, followed by the two ways to integrate the Llama V2 model: through Replicate's API or a dedicated AWS SageMaker deployment. All models share a unified chatbot interface, so switching between them requires minimal code changes.
- Import the necessary modules from IntelliNode. This includes the `Chatbot`, `ChatGPTInput`, and `ChatGPTMessage` classes.
const { Chatbot, ChatGPTInput, ChatGPTMessage } = require('intellinode');
- To use OpenAI, you'll need a valid API key. Create a `Chatbot` instance, providing the OpenAI API key and 'openai' as the provider.
const chatbot = new Chatbot(OPENAI_API_KEY, 'openai');
- Construct a chat input instance and add user messages:
const system = 'You are a helpful assistant.';
const input = new ChatGPTInput(system);
input.addUserMessage('Explain the plot of the Inception movie in one line.');
- Use the `chatbot` instance to send the chat input:
const responses = await chatbot.chat(input);
responses.forEach(response => console.log('- ', response));
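Putting the steps above together, a minimal end-to-end sketch might look like the following. It assumes `intellinode` is installed and `OPENAI_API_KEY` is set in the environment; the `askOnce` helper name is illustrative, not part of the library.

```javascript
// Minimal end-to-end sketch of the OpenAI steps above.
// Assumes `intellinode` is installed and OPENAI_API_KEY is set.
async function askOnce(question) {
  // required inside the function so the sketch stays self-contained
  const { Chatbot, ChatGPTInput } = require('intellinode');
  const chatbot = new Chatbot(process.env.OPENAI_API_KEY, 'openai');
  const input = new ChatGPTInput('You are a helpful assistant.');
  input.addUserMessage(question);
  // resolves to an array of response strings
  return chatbot.chat(input);
}

// only call the API when a key is actually available
if (process.env.OPENAI_API_KEY) {
  askOnce('Explain the plot of the Inception movie in one line.')
    .then(responses => responses.forEach(r => console.log('- ', r)))
    .catch(console.error);
}
```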
Integration with Llama V2 is attainable via two alternatives:
- Replicate's API: simple integration.
- AWS SageMaker, hosted in your own account (see the steps below).
npm i intellinode
- Import the necessary classes.
const { Chatbot, LLamaReplicateInput, SupportedChatModels } = require('intellinode');
- You'll need a valid API key. This time, it should be for replicate.com.
const chatbot = new Chatbot(REPLICATE_API_KEY, SupportedChatModels.REPLICATE);
- Create the chat input with `LLamaReplicateInput`:
const system = 'You are a helpful assistant.';
const input = new LLamaReplicateInput(system);
input.addUserMessage('Explain the plot of the Inception movie in one line.');
- Use the `chatbot` instance to send the chat input:
const response = await chatbot.chat(input);
console.log('- ', response);
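As noted in the introduction, switching providers requires minimal code changes. The sketch below shows the same chat flow against OpenAI or Replicate, differing only in the API key, the provider constant, and the input class. It assumes `intellinode` is installed and the relevant key is set in the environment; the `chatWith` helper is illustrative, not part of the library.

```javascript
// Sketch: one chat flow, two providers, using the unified interface.
async function chatWith(provider, question) {
  // required inside the function so the sketch stays self-contained
  const { Chatbot, ChatGPTInput, LLamaReplicateInput, SupportedChatModels } =
    require('intellinode');
  const system = 'You are a helpful assistant.';
  let chatbot, input;
  if (provider === 'openai') {
    chatbot = new Chatbot(process.env.OPENAI_API_KEY, 'openai');
    input = new ChatGPTInput(system);
  } else {
    chatbot = new Chatbot(process.env.REPLICATE_API_KEY, SupportedChatModels.REPLICATE);
    input = new LLamaReplicateInput(system);
  }
  input.addUserMessage(question);
  return chatbot.chat(input);
}
```

Only the construction differs; `addUserMessage` and `chat` are the same calls for both providers.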
Advanced Settings
You can create the input with the desired model name:
// import the config loader
const { Config2 } = require('intellinode');
// llama 13B model (default)
const input = new LLamaReplicateInput(system, { model: Config2.getInstance().getProperty('models.replicate.llama.13b') });
// or the llama 70B model
const input = new LLamaReplicateInput(system, { model: Config2.getInstance().getProperty('models.replicate.llama.70b') });
Integration with the Llama V2 model via AWS SageMaker, which provides an additional layer of control, is also achievable through IntelliNode.
- Import the necessary classes:
const { Chatbot, LLamaSageInput, SupportedChatModels } = require('intellinode');
- With AWS SageMaker, you'll be providing the URL of your API gateway:
const chatbot = new Chatbot(null /* replace with an API key if the model is not deployed behind an open gateway */,
                            SupportedChatModels.SAGEMAKER,
                            { url: process.env.AWS_API_URL /* replace with your API gateway URL */ });
- Create the chat input with `LLamaSageInput`:
const system = 'You are a helpful assistant.';
const input = new LLamaSageInput(system);
input.addUserMessage('Explain the plot of the Inception movie in one line.');
- Use the `chatbot` instance to send the chat input:
const response = await chatbot.chat(input);
console.log('Chatbot response: ' + response);
The steps to leverage AWS SageMaker for hosting the Llama V2 model:
- Create a SageMaker Domain: begin by setting up a domain in AWS SageMaker. This step establishes a controlled space for your SageMaker operations.
- Deploy the Llama Model: use SageMaker JumpStart to deploy the Llama model you plan to integrate.
- Copy the Endpoint Name: once the model is deployed, note the endpoint name; it is needed in later steps.
- Create a Node.js Lambda Function: AWS Lambda runs back-end code without managing servers. Create a Node.js Lambda function to integrate the deployed model.
- Set Up an Environment Variable: create an environment variable named `llama_endpoint` with the value of the SageMaker endpoint name.
- Import the IntelliNode Lambda: import the prepared Lambda zip file that establishes a connection to your SageMaker Llama deployment. This export is a zip file located in the `lambda_llama_sagemaker` directory.
- Configure API Gateway: click the "Add trigger" option on the Lambda function page and select "API Gateway" from the list of available triggers.
- Adjust Lambda Function Settings: update the Lambda role to grant the permissions needed to access SageMaker endpoints, and extend the function's timeout to accommodate the model's processing time. Make these adjustments in the "Configuration" tab of your Lambda function.
Once you complete these steps, your AWS SageMaker deployment will be ready to host and run the Llama V2 model, and you can easily integrate it with IntelliNode.