
ChatBot

Albarqawi edited this page Jul 25, 2023 · 27 revisions

The IntelliNode module provides access to various language models, including OpenAI's ChatGPT and the Llama V2 model from Replicate or AWS SageMaker.

We will demonstrate the setup for OpenAI's ChatGPT, followed by the two methods of integrating the Llama V2 model: through Replicate's API or a dedicated AWS SageMaker deployment. All models share a unified chatbot interface, so switching between them requires minimal code changes.

ChatGPT Model

  1. Import the necessary modules from IntelliNode. This will include the Chatbot, ChatGPTInput, and ChatGPTMessage classes.
const { Chatbot, ChatGPTInput, ChatGPTMessage } = require('intellinode');
  2. To use OpenAI, you'll need a valid API key. Create a Chatbot instance, providing the OpenAI API key and 'openai' as the provider.
const chatbot = new Chatbot(OPENAI_API_KEY, 'openai');
  3. Construct a chat input instance and add user messages:
const system = 'You are a helpful assistant.';
const input = new ChatGPTInput(system);
input.addUserMessage('Explain the plot of the Inception movie in one line.');
  4. Use the chatbot instance to send the chat input:
const responses = await chatbot.chat(input);

responses.forEach(response => console.log('- ', response));

Llama V2 Model

Llama V2 can be integrated in two ways, using:

  1. Replicate's API: simple integration.
  2. AWS SageMaker hosted in your account (SageMaker steps).

Installation

npm i intellinode

Replicate's Llama Integration

  1. Import the necessary classes.
const { Chatbot, LLamaReplicateInput, SupportedChatModels } = require('intellinode');
  2. You'll need a valid API key. This time, it should be for replicate.com.
const chatbot = new Chatbot(REPLICATE_API_KEY, SupportedChatModels.REPLICATE);
  3. Create the chat input with LLamaReplicateInput:
const system = 'You are a helpful assistant.';
const input = new LLamaReplicateInput(system);
input.addUserMessage('Explain the plot of the Inception movie in one line.');
  4. Use the chatbot instance to send the chat input:
const response = await chatbot.chat(input);

console.log('- ', response);

Advanced Settings

You can create the input with the desired model name:

// import the config loader
const {Config2} = require('intellinode');

// llama 13B model (default)
const input = new LLamaReplicateInput(system, {model: Config2.getInstance().getProperty('models.replicate.llama.13b')});

// or the llama 70B model
// const input = new LLamaReplicateInput(system, {model: Config2.getInstance().getProperty('models.replicate.llama.70b')});

AWS SageMaker Integration

IntelliNode also supports integrating the Llama V2 model via AWS SageMaker, which adds a layer of control by hosting the model in your own account.

IntelliNode Integration
  1. Import the necessary classes:
const { Chatbot, LLamaSageInput, SupportedChatModels } = require('intellinode');
  2. With AWS SageMaker, you provide the URL of your API gateway; the steps to deploy the model and obtain the URL are covered in the next section:
const chatbot = new Chatbot(null /*replace with api key, if the model not deployed in open gateway*/, 
                            SupportedChatModels.SAGEMAKER, 
                            {url: process.env.AWS_API_URL /*replace with your API gateway url*/});
  3. Create the chat input with LLamaSageInput:
const system = 'You are a helpful assistant.';
const input = new LLamaSageInput(system);
input.addUserMessage('Explain the plot of the Inception movie in one line.');
  4. Use the chatbot instance to send the chat input:
const response = await chatbot.chat(input);

console.log('Chatbot response: ' + response);

Prerequisites to Integrate AWS SageMaker and IntelliNode

Follow these steps to host the Llama V2 model on AWS SageMaker:

  1. Create a SageMaker Domain: Begin by setting up a domain on your AWS SageMaker. This step establishes a controlled space for your SageMaker operations.

  2. Deploy the Llama Model: Utilize SageMaker JumpStart to deploy the Llama model you plan to integrate.

  3. Copy the Endpoint Name: Once you have a model deployed, make sure to note the endpoint name, which is crucial for future steps.

  4. Create a Node.js Lambda Function: AWS Lambda lets you run back-end code without managing servers. Create a Node.js Lambda function to integrate the deployed model.

  5. Set Up an Environment Variable: In the Lambda function, create an environment variable named llama_endpoint with the value of the SageMaker endpoint name.

  6. IntelliNode Lambda Import: Import the prepared Lambda zip file, which establishes the connection to your SageMaker Llama deployment. The zip file can be found in the lambda_llama_sagemaker directory.

  7. API Gateway Configuration: Click on the "Add trigger" option on the Lambda function page, and select "API Gateway" from the list of available triggers.

  8. Lambda Function Settings: Update the lambda role to grant necessary permissions to access SageMaker endpoints. Additionally, the function's timeout period should be extended to accommodate the processing time. Make these adjustments in the "Configuration" tab of your Lambda function.

Once you complete these steps, your AWS SageMaker will be ready to host and run the Llama V2 model, and you can easily integrate it with IntelliNode.
