docs: add contact page, improve tables and examples #65

Closed
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -1,4 +1,4 @@
# Token.js

## 0.0.1

203 changes: 19 additions & 184 deletions README.md
@@ -1,192 +1,29 @@
# Token.js

Integrate 60+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.

### [Documentation](http://tokenjs.ai)

## Features

* Define prompts in OpenAI's format and have them translated automatically for each LLM provider.
* Support for tools, JSON output, image inputs, streaming, and more.
* Support for 9 popular LLM providers: AI21, Anthropic, AWS Bedrock, Cohere, Gemini, Groq, Mistral, OpenAI, and Perplexity, with more coming soon.
* Free and open source under GPLv3.
* No proxy server required.

## Setup

### Installation

```bash
npm install token.js
```

### Environment Variables

```env
OPENAI_API_KEY=<your openai api key>
GEMINI_API_KEY=<your gemini api key>
ANTHROPIC_API_KEY=<your anthropic api key>
```

### Usage

```ts
import { TokenJS, ChatCompletionMessageParam } from 'token.js'

const tokenjs = new TokenJS()

const messages: ChatCompletionMessageParam[] = [
  {
    role: 'user',
    content: `How are you?`,
  },
]

// Call OpenAI
const openaiResult = await tokenjs.chat.completions.create({
  provider: 'openai',
  model: 'gpt-4o',
  messages,
})

// Call Gemini
const geminiResult = await tokenjs.chat.completions.create({
  provider: 'gemini',
  model: 'gemini-1.5-pro',
  messages,
})

// Call Anthropic
const anthropicResult = await tokenjs.chat.completions.create({
  provider: 'anthropic',
  model: 'claude-2.0',
  messages,
})
```

## Access Credential Configuration

Token.js uses environment variables to configure access to the different LLM providers. Configure your API keys using the following environment variables:

```
# OpenAI
OPENAI_API_KEY=

# AI21
AI21_API_KEY=

# Anthropic
ANTHROPIC_API_KEY=

# Cohere
COHERE_API_KEY=

# Gemini
GEMINI_API_KEY=

# Groq
GROQ_API_KEY=

# Mistral
MISTRAL_API_KEY=

# Perplexity
PERPLEXITY_API_KEY=

# AWS Bedrock
AWS_REGION_NAME=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
```

Then select the `provider` and `model` you would like to use when calling the `create` function, and Token.js will use the correct access credentials for that provider.

## Streaming

Token.js supports streaming for all providers that support it.

```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()
const result = await tokenjs.chat.completions.create({
  stream: true,
  provider: 'gemini',
  model: 'gemini-1.5-pro',
  messages: [
    {
      role: 'user',
      content: `How are you?`,
    },
  ],
})

for await (const part of result) {
  process.stdout.write(part.choices[0]?.delta?.content || '')
}
```
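Each streamed chunk carries only a delta, so assembling the full reply is plain concatenation. Below is a self-contained sketch of that accumulation step; the mock stream stands in for a real `create({ stream: true, ... })` result and is not part of the library:

```typescript
// Simplified shape of an OpenAI-format streaming chunk.
type Chunk = { choices: { delta: { content?: string } }[] }

// Accumulate the streamed deltas into one complete message.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = ''
  for await (const part of stream) {
    text += part.choices[0]?.delta?.content ?? ''
  }
  return text
}

// Mock stream standing in for a real streaming completion.
async function* mockStream(): AsyncGenerator<Chunk> {
  for (const piece of ['Hello', ', ', 'world!']) {
    yield { choices: [{ delta: { content: piece } }] }
  }
}
```

Awaiting `collectStream(mockStream())` yields the concatenated string; a real streaming result can be passed in unchanged.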

## Tools

Token.js supports tools for all providers and models that support them.

```ts
import { TokenJS, ChatCompletionTool } from 'token.js'

const tokenjs = new TokenJS()

const tools: ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'getCurrentWeather',
      description: 'Get the current weather in a given location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city and state, e.g. San Francisco, CA',
          },
          unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
        },
        required: ['location', 'unit'],
      },
    },
  },
]

const result = await tokenjs.chat.completions.create({
  provider: 'gemini',
  model: 'gemini-1.5-pro',
  messages: [
    {
      role: 'user',
      content: `What's the weather like in San Francisco?`,
    },
  ],
  tools,
  tool_choice: 'auto',
})
```
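When the model opts to call a tool, the OpenAI-format response carries a `tool_calls` array whose `arguments` field is a JSON string, so your code parses it and dispatches to a local implementation. Here is a self-contained sketch of that dispatch step; the `toolImpls` registry and the sample response are illustrative assumptions, not library API:

```typescript
// Simplified shape of a tool call in an OpenAI-format response.
type ToolCall = { function: { name: string; arguments: string } }

// Local registry of tool implementations; this weather stub is a placeholder.
const toolImpls: Record<string, (args: { location: string; unit: string }) => string> = {
  getCurrentWeather: ({ location, unit }) => `22 degrees ${unit} in ${location}`,
}

// Parse each call's JSON arguments and run the matching implementation.
function runToolCalls(toolCalls: ToolCall[]): string[] {
  return toolCalls.map((call) => {
    const args = JSON.parse(call.function.arguments) // arguments arrive as a JSON string
    const impl = toolImpls[call.function.name]
    if (!impl) throw new Error(`unknown tool: ${call.function.name}`)
    return impl(args)
  })
}

// Fabricated example of what `result.choices[0].message.tool_calls` may contain.
const sampleCalls: ToolCall[] = [
  {
    function: {
      name: 'getCurrentWeather',
      arguments: '{"location":"San Francisco, CA","unit":"celsius"}',
    },
  },
]
```

Each returned string would normally be sent back to the model in a `tool` role message so it can compose its final answer.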

## Providers

Not every feature is supported by every provider and model. This table gives a general overview of the features each provider supports. For details on which features are supported by individual models from different providers, see the [provider documentation](todo\(md\)/).

| Provider   | Completion | Streaming | Tools | JSON Output | Image Input |
| ---------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| OpenAI     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Anthropic  | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Bedrock    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Mistral    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
| Cohere     | :white_check_mark: | :white_check_mark: | :white_check_mark: | | |
| AI21       | :white_check_mark: | :white_check_mark: | | | |
| Gemini     | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Groq       | :white_check_mark: | :white_check_mark: | | :white_check_mark: | |
| Perplexity | :white_check_mark: | :white_check_mark: | | | |
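For the providers with JSON output support, the reply's `content` is still a string: in OpenAI's format you typically request JSON mode via `response_format: { type: 'json_object' }` (assumed here to carry over) and parse the string yourself. A minimal, self-contained sketch of that parsing step, using a fabricated response:

```typescript
// Simplified shape of a chat completion response.
type Completion = { choices: { message: { content: string | null } }[] }

// Parse a JSON-mode completion into a typed value; throws on missing or invalid content.
function parseJsonCompletion<T>(result: Completion): T {
  const content = result.choices[0]?.message?.content
  if (content == null) throw new Error('completion has no content')
  return JSON.parse(content) as T
}

// Fabricated completion shaped like a JSON-mode response.
const sample: Completion = {
  choices: [{ message: { content: '{"city":"San Francisco","tempC":22}' } }],
}
```

`parseJsonCompletion<{ city: string; tempC: number }>(sample)` returns the parsed object rather than the raw string.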

If there are more providers or features you would like to see implemented in Token.js, please let us know by opening an issue!
Learn more about Token.js in our [documentation](https://docs.tokenjs.ai/).

## Contributing

```bash
pnpm test
pnpm lint
```

### Open a pull request!

## License

Token.js is free and open source software licensed under [GPLv3](https://github.com/token-js/token.js/blob/main/LICENSE).