diff --git a/fern/GHL.mdx b/fern/GHL.mdx
new file mode 100644
index 0000000..1cd6353
--- /dev/null
+++ b/fern/GHL.mdx
@@ -0,0 +1,151 @@
+---
+title: How to Connect Vapi with Make & GHL
+slug: GHL
+---
+
+
+Vapi's GHL/Make Tools integration allows you to directly import your GHL workflows and Make scenarios into Vapi as Tools. This enables you to create voicebots that can trigger your favorite app integrations and automate complex workflows using voice commands.
+
+## What are GHL/Make Tools?
+
+GHL (GoHighLevel) workflows and Make scenarios are powerful automation tools that allow you to connect and integrate various apps and services. With the GHL/Make Tools integration, you can now bring these automations into Vapi and trigger them using voice commands.
+
+## How does the integration work?
+
+1. **Import workflows and scenarios**: Navigate to the [Tools section](https://dashboard.vapi.ai/tools) in your Vapi dashboard and import your existing GHL workflows and Make scenarios.
+
+2. **Add Tools to your assistants**: Once imported, you can add these Tools to your AI assistants, enabling them to trigger the automations based on voice commands.
+
+3. **Trigger automations with voice**: Your AI assistants can now understand voice commands and execute the corresponding GHL workflows or Make scenarios, allowing for seamless voice-enabled automation.
+
+## Setting up the GHL/Make Tools integration
+
+1. **Create a GHL workflow or Make scenario**: Design your automation in GHL or Make, connecting the necessary apps and services.
+
+2. **Import the workflow/scenario into Vapi**: In the Vapi dashboard, navigate to the Tools section and click on "Import." Select the GHL workflow or Make scenario you want to import.
+
+3. **Configure the Tool**: Provide a name and description for the imported Tool, and map any required input variables to the corresponding Vapi entities (e.g., extracted from user speech).
+
+4. **Add the Tool to your assistant**: Edit your AI assistant and add the newly imported Tool to its capabilities. Specify the voice commands that should trigger the Tool.
+
+5. **Test the integration**: Engage with your AI assistant using the specified voice commands and verify that the corresponding GHL workflow or Make scenario is triggered successfully.
+
+## Use case examples
+
+### Booking appointments with AI callers
+
+- Import a GHL workflow that handles appointment booking
+- Configure the workflow to accept appointment details (date, time, user info) from Vapi
+- Add the Tool to your AI assistant, allowing it to book appointments based on voice commands
+
+### Updating CRMs with voice-gathered data
+
+- Import a Make scenario that updates your CRM with customer information
+- Map the scenario's input variables to entities extracted from user speech
+- Enable your AI assistant to gather customer information via voice and automatically update your CRM
+
+### Real Estate: Automated Property Information Retrieval
+
+- Import a Make scenario that retrieves property information from your MLS (Multiple Listing Service) or real estate database
+- Configure the scenario to accept a property address or MLS ID as input
+- Add the Tool to your AI assistant, allowing potential buyers to request property details using voice commands
+- Your AI assistant can then provide key information about the property, such as price, square footage, number of bedrooms/bathrooms, and amenities
+
+### Healthcare/Telehealth: Appointment Reminders and Prescription Refills
+
+- Import a GHL workflow that sends appointment reminders and handles prescription refill requests
+- Configure the workflow to accept patient information and appointment/prescription details from Vapi
+- Add the Tool to your AI assistant, enabling patients to request appointment reminders or prescription refills using voice commands
+- Your AI assistant can confirm the appointment details, send reminders via SMS or email, and forward prescription refill requests to the appropriate healthcare provider
+
+### Restaurant Ordering: Custom Order Placement and Delivery Tracking
+
+- Import a Make scenario that integrates with your restaurant's online ordering system and delivery tracking platform
+- Configure the scenario to accept customer information, order details, and delivery preferences from Vapi
+- Add the Tool to your AI assistant, allowing customers to place custom orders and track their delivery status using voice commands
+- Your AI assistant can guide customers through the ordering process, suggest menu items based on preferences, and provide real-time updates on the order status and estimated delivery time
+
+## Best practices
+
+- Break down complex automations into smaller, focused workflows or scenarios for better maintainability
+- Use clear and concise naming conventions for your imported Tools and their input variables
+- Thoroughly test the integration to ensure reliable performance and accurate data passing
+- Keep your GHL workflows and Make scenarios up to date to reflect any changes in the connected apps or services
+
+## Troubleshooting
+
+- If a Tool is not triggering as expected, verify that the voice commands are correctly configured and the input variables are properly mapped
+- Check the Vapi logs and the GHL/Make execution logs to identify any errors or issues in the automation flow
+- Ensure that the necessary API credentials and permissions are correctly set up in both Vapi and the integrated apps/services
+
+By leveraging Vapi's GHL/Make Tools integration, you can create powerful voice-enabled automations and streamline your workflows, all without extensive coding. Automate tasks, connect your favorite apps, and unlock the full potential of voice AI with Vapi.
+
+## Get Support
+
+Join our Discord to connect with other developers and with our team:
+
+
+
+ Connect with our team & other developers using Vapi.
+
+
+ Send our support team an email.
+
+
+
+Here are some video tutorials that will guide you on how to use Vapi with services like Make and GoHighLevel:
+
+
+
+
+
+
+
+
diff --git a/fern/advanced/calls/sip.mdx b/fern/advanced/calls/sip.mdx
new file mode 100644
index 0000000..fab237a
--- /dev/null
+++ b/fern/advanced/calls/sip.mdx
@@ -0,0 +1,90 @@
+---
+title: SIP
+subtitle: You can make SIP calls to Vapi Assistants.
+slug: advanced/calls/sip
+---
+
+
+This instruction is solely for testing purposes. To productionize a SIP implementation, contact Sales to inquire about an Enterprise plan.
+
+
+
+## 1. Create an Assistant
+
+We'll create an assistant with the `POST /assistant` endpoint. This is no different from creating an assistant for any other transport.
+
+```json
+{
+ "name": "My SIP Assistant",
+ "firstMessage": "Hello {{first_name}}, you've reached me over SIP. How can I help you today?"
+}
+```
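+
+For reference, here's a minimal curl sketch of the same request (assuming a `YOUR_API_KEY` placeholder for your Vapi API key; the `id` in the response is the assistant ID used in the next step):
+
+```bash
+# Create the assistant over HTTP (sketch; substitute your own API key)
+curl -X POST 'https://api.vapi.ai/assistant' \
+  -H 'authorization: Bearer YOUR_API_KEY' \
+  -H 'content-type: application/json' \
+  --data-raw '{
+    "name": "My SIP Assistant",
+    "firstMessage": "Hello {{first_name}}, you have reached me over SIP. How can I help you today?"
+  }'
+```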
+
+
+
+
+
+## 2. Create a SIP Phone Number
+
+We'll create a SIP phone number with the `POST /phone-number` endpoint.
+
+```json
+{
+ "provider": "vapi",
+ "sipUri": "sip:your_unique_user_name@sip.vapi.ai",
+ "assistantId": "your_assistant_id"
+}
+```
+
+`sipUri` is the SIP URI of the phone number. It must be in the format `sip:username@sip.vapi.ai`. You are free to choose any username you like.
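+
+As a sketch with curl (same `YOUR_API_KEY` placeholder; substitute your own SIP username and assistant ID):
+
+```bash
+# Register a SIP URI and attach the assistant from step 1
+curl -X POST 'https://api.vapi.ai/phone-number' \
+  -H 'authorization: Bearer YOUR_API_KEY' \
+  -H 'content-type: application/json' \
+  --data-raw '{
+    "provider": "vapi",
+    "sipUri": "sip:your_unique_user_name@sip.vapi.ai",
+    "assistantId": "your_assistant_id"
+  }'
+```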
+
+
+
+
+
+
+
+## 3. Start a SIP Call
+
+You can use any SIP softphone to test the Assistant. Examples include [Zoiper](https://www.zoiper.com/) or [Linphone](https://www.linphone.org/).
+
+You just need to dial `sip:your_unique_user_name@sip.vapi.ai` and the Assistant will answer your call.
+
+There is no authentication or SIP registration required.
+
+
+
+
+
+## 4. Send SIP Headers to Fill Template Variables
+
+To fill your template variables, you can send custom SIP headers.
+
+For example, to fill the `first_name` variable, you can send a SIP header `x-first_name: John`.
+
+The header name is case-insensitive, so `X-First_Name`, `x-first_name`, and `X-FIRST_NAME` are all equivalent.
+
+
+
+
+
+## 5. Use a Custom Assistant For Each Call
+
+You can use a custom assistant for SIP calls, just as you can for phone calls.
+
+Set the `assistantId` to `null` and the `serverUrl` to the URL of your server which will respond to the `assistant-request`.
+
+`PATCH /phone-number/:id`
+```json
+{
+ "assistantId": null,
+ "serverUrl": "https://your_server_url"
+}
+```
+
+Now, every time you make a call to this phone number, the server will receive an `assistant-request` event.
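+
+Your server responds to the `assistant-request` with the assistant configuration to use for that call. As a rough sketch (the actual response schema supports more fields; the field names below follow the create-assistant payload):
+
+```json
+{
+  "assistant": {
+    "firstMessage": "Hello, how can I help you today?",
+    "model": {
+      "provider": "openai",
+      "model": "gpt-3.5-turbo"
+    }
+  }
+}
+```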
+
+
+
diff --git a/fern/api-reference/messages/client-inbound-message.mdx b/fern/api-reference/messages/client-inbound-message.mdx
new file mode 100644
index 0000000..f3d9d3f
--- /dev/null
+++ b/fern/api-reference/messages/client-inbound-message.mdx
@@ -0,0 +1,5 @@
+---
+title: ClientInboundMessage
+slug: api-reference/messages/client-inbound-message
+---
+
diff --git a/fern/api-reference/messages/client-message.mdx b/fern/api-reference/messages/client-message.mdx
new file mode 100644
index 0000000..734997d
--- /dev/null
+++ b/fern/api-reference/messages/client-message.mdx
@@ -0,0 +1,5 @@
+---
+title: ClientMessage
+slug: api-reference/messages/client-message
+---
+
diff --git a/fern/api-reference/messages/server-message-response.mdx b/fern/api-reference/messages/server-message-response.mdx
new file mode 100644
index 0000000..4c18a3c
--- /dev/null
+++ b/fern/api-reference/messages/server-message-response.mdx
@@ -0,0 +1,5 @@
+---
+title: ServerMessageResponse
+slug: api-reference/messages/server-message-response
+---
+
diff --git a/fern/api-reference/messages/server-message.mdx b/fern/api-reference/messages/server-message.mdx
new file mode 100644
index 0000000..6dd7504
--- /dev/null
+++ b/fern/api-reference/messages/server-message.mdx
@@ -0,0 +1,5 @@
+---
+title: ServerMessage
+slug: api-reference/messages/server-message
+---
+
diff --git a/fern/api-reference/openapi.mdx b/fern/api-reference/openapi.mdx
new file mode 100644
index 0000000..3fa1c96
--- /dev/null
+++ b/fern/api-reference/openapi.mdx
@@ -0,0 +1,10 @@
+---
+title: OpenAPI
+slug: api-reference/openapi
+---
+
+
+
+ Our OpenAPI specification is hosted at
+ [https://api.vapi.ai/api-json](https://api.vapi.ai/api-json)
+
diff --git a/fern/api-reference/swagger.mdx b/fern/api-reference/swagger.mdx
new file mode 100644
index 0000000..cbcebb5
--- /dev/null
+++ b/fern/api-reference/swagger.mdx
@@ -0,0 +1,9 @@
+---
+title: Swagger
+slug: api-reference/swagger
+---
+
+
+
+ Our Swagger UI is hosted at [https://api.vapi.ai/api](https://api.vapi.ai/api)
+
diff --git a/fern/assets/styles.css b/fern/assets/styles.css
new file mode 100644
index 0000000..4e73186
--- /dev/null
+++ b/fern/assets/styles.css
@@ -0,0 +1,75 @@
+.fern-header .fern-button.filled .fa-icon {
+ height: 10px !important;
+ width: 10px !important;
+}
+
+.fern-header * {
+ font-weight: 500;
+}
+
+/* for a grid of videos */
+
+.video-grid {
+ display: flex;
+ flex-wrap: wrap;
+ gap: 20px; /* Spacing between videos */
+}
+
+.video-grid iframe,
+.video-grid a {
+ flex: 0 0 calc(50% - 20px); /* Flex grow is 0, basis is 50% minus the gap */
+ aspect-ratio: 560 / 315; /* Maintain the aspect ratio of 16:9 */
+ max-width: calc(50% - 20px); /* Max width is also set to 50% minus the gap */
+ height: auto; /* Allow height to auto adjust based on aspect ratio */
+}
+
+.video-grid a {
+ aspect-ratio: 1;
+}
+
+@media (max-width: 600px) {
+ .video-grid iframe {
+ flex: 0 0 100%; /* Flex grow is 0, basis is 100% */
+ max-width: 100%; /* Allow max-width to be full width on mobile */
+ }
+}
+
+.card-img {
+ height: 200px;
+ object-fit: contain;
+ margin: auto;
+ background: white; /*TODO: change color as per theme*/
+}
+
+.card-content {
+ display: flex;
+ flex-direction: column;
+ align-items: center;
+ margin-top: auto;
+ text-align: center;
+}
+
+.card-content > h3 {
+ margin: 16px 0 8px 0;
+ font-size: 1.5em;
+ text-align: center;
+}
+
+.card-content > p {
+ font-size: 1em;
+ text-align: center;
+}
+
+.video-embed-wrapper {
+ position: relative;
+ width: 100%;
+ padding-top: 56.25%; /* 16:9 Aspect Ratio (divide 9 by 16 = 0.5625) */
+}
+
+.video-embed-wrapper iframe {
+ position: absolute;
+ top: 0;
+ left: 0;
+ width: 100%;
+ height: 100%;
+}
\ No newline at end of file
diff --git a/fern/assistants.mdx b/fern/assistants.mdx
new file mode 100644
index 0000000..4bd1748
--- /dev/null
+++ b/fern/assistants.mdx
@@ -0,0 +1,31 @@
+---
+title: Introduction
+subtitle: The core building-block of voice agents on Vapi.
+slug: assistants
+---
+
+
+**Assistant** is a fancy word for an AI configuration that can be used across phone calls and Vapi clients. Your voice assistant can augment your customer support and experience for call centers, business websites, mobile apps, and much more.
+
+There are three core components: **Transcriber**, **Model**, and **Voice**. These can be configured, mixed, and matched for your use case. There are also various other configurable properties you can find [here](/api-reference/assistants/create-assistant). Below, check out some ways you can layer in powerful customizations and features to meet any use case.
+
+## Advanced Concepts
+
+
+
+ Add your API keys for other providers
+
+
+ Plug in your own LLM
+
+
+ Forward and hang up with function calls
+
+
+ Which setup is best for you?
+
+
diff --git a/fern/assistants/background-messages.mdx b/fern/assistants/background-messages.mdx
new file mode 100644
index 0000000..b18607f
--- /dev/null
+++ b/fern/assistants/background-messages.mdx
@@ -0,0 +1,48 @@
+---
+title: Background Messaging
+subtitle: >-
+ Vapi SDK lets you silently update the chat history through efficient text
+ message integration. This is particularly useful for background tasks or
+ discreetly logging user interactions.
+slug: assistants/background-messages
+---
+
+
+## Scenario Overview
+
+As a developer, you may run into scenarios where a user action, such as pressing a button, needs to be logged in the chat history without overt user involvement. This can be crucial for maintaining conversation context or for system logging purposes.
+
+
+
+ Add a button to your interface with an `onClick` event handler that will call a function to send the system message:
+ ```html
+ <button onClick="logUserAction()">Log Action</button>
+ ```
+
+
+
+ When the button is clicked, the `logUserAction` function will silently insert a system message into the chat history:
+ ```js
+ function logUserAction() {
+ // Function to log the user action
+ vapi.send({
+ type: "add-message",
+ message: {
+ role: "system",
+ content: "The user has pressed the button, say peanuts",
+ },
+ });
+ }
+ ```
+ - `vapi.send`: The primary function to interact with your assistant, handling various requests or commands.
+ - `type: "add-message"`: Specifies the command to add a new message.
+ - `message`: This is the actual message that you want to add to the message history.
+ - `role`: Setting this to `"system"` designates the message origin as 'system', ensuring the addition is unobtrusive. Other possible values are `'user'`, `'assistant'`, `'tool'`, and `'function'`.
+ - `content`: The actual message text to be added.
+
+
+
+
+ - Silent logging of user activities.
+ - Contextual updates in conversations triggered by background processes.
+ - Non-intrusive user experience enhancements through additional information provision.
+
diff --git a/fern/assistants/call-analysis.mdx b/fern/assistants/call-analysis.mdx
new file mode 100644
index 0000000..362a31f
--- /dev/null
+++ b/fern/assistants/call-analysis.mdx
@@ -0,0 +1,149 @@
+---
+title: Call Analysis
+subtitle: At the end of the call, you can summarize and evaluate how it went.
+slug: assistants/call-analysis
+---
+
+
+The Call Analysis feature allows you to summarize and evaluate calls, providing valuable insights into their effectiveness. This feature uses a combination of prompts and schemas to generate structured data and success evaluations based on the call's content.
+
+You can customize all of the below via the assistant's `analysisPlan` property.
+
+## Summary Prompt
+
+The summary prompt is used to create a concise summary of the call. This summary is stored in `call.analysis.summary`.
+
+### Default Summary Prompt
+
+The default summary prompt is:
+
+```text
+You are an expert note-taker. You will be given a transcript of a call. Summarize the call in 2-3 sentences, if applicable.
+```
+
+### Customizing the Summary Prompt
+
+You can customize the summary prompt by setting the `summaryPrompt` property in the API or SDK:
+
+```json
+{
+ "summaryPrompt": "Custom summary prompt text"
+}
+```
+
+To disable the summary prompt, set it to an empty string `""` or `"off"`:
+
+```json
+{
+ "summaryPrompt": ""
+}
+```
+
+## Structured Data Prompt
+
+The structured data prompt extracts specific pieces of data from the call. This data is stored in `call.analysis.structuredData`.
+
+### Default Structured Data Prompt
+
+The default structured data prompt is:
+
+```text
+You are an expert data extractor. You will be given a transcript of a call. Extract structured data per the JSON Schema.
+```
+
+### Customizing the Structured Data Prompt
+
+You can set a custom structured data prompt using the `structuredDataPrompt` property:
+
+```json
+{
+ "structuredDataPrompt": "Custom structured data prompt text"
+}
+```
+
+## Structured Data Schema
+
+The structured data schema enforces the format of the extracted data. It is defined using JSON Schema standards.
+
+### Customizing the Structured Data Schema
+
+You can set a custom structured data schema using the `structuredDataSchema` property:
+
+```json
+{
+ "structuredDataSchema": {
+ "type": "object",
+ "properties": {
+ "field1": { "type": "string" },
+ "field2": { "type": "number" }
+ },
+ "required": ["field1", "field2"]
+ }
+}
+```
+
+## Success Evaluation Prompt
+
+The success evaluation prompt is used to determine if the call was successful. This evaluation is stored in `call.analysis.successEvaluation`.
+
+### Default Success Evaluation Prompt
+
+The default success evaluation prompt is:
+
+```text
+You are an expert call evaluator. You will be given a transcript of a call and the system prompt of the AI participant. Determine if the call was successful based on the objectives inferred from the system prompt.
+```
+
+### Customizing the Success Evaluation Prompt
+
+You can set a custom success evaluation prompt using the `successEvaluationPrompt` property:
+
+```json
+{
+ "successEvaluationPrompt": "Custom success evaluation prompt text"
+}
+```
+
+To disable the success evaluation prompt, set it to an empty string `""` or `"off"`:
+
+```json
+{
+ "successEvaluationPrompt": ""
+}
+```
+
+## Success Evaluation Rubric
+
+The success evaluation rubric defines the criteria used to evaluate the call's success. The available rubrics are:
+
+- `NumericScale`: A scale of 1 to 10.
+- `DescriptiveScale`: A scale of Excellent, Good, Fair, Poor.
+- `Checklist`: A checklist of criteria and their status.
+- `Matrix`: A grid that evaluates multiple criteria across different performance levels.
+- `PercentageScale`: A scale of 0% to 100%.
+- `LikertScale`: A scale of Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree.
+- `AutomaticRubric`: Automatically break down evaluation into several criteria, each with its own score.
+- `PassFail`: A simple 'true' if the call passed, 'false' if not.
+
+### Customizing the Success Evaluation Rubric
+
+You can set a custom success evaluation rubric using the `successEvaluationRubric` property:
+
+```json
+{
+ "successEvaluationRubric": "NumericScale"
+}
+```
+
+## Combining Prompts and Rubrics
+
+You can use prompts and rubrics in combination to create detailed instructions for the call analysis:
+
+```json
+{
+ "successEvaluationPrompt": "Evaluate the call based on these criteria:...",
+ "successEvaluationRubric": "Checklist"
+}
+```
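+
+Putting it together, here is a hedged end-to-end sketch: you could set the whole analysis plan on an existing assistant with `PATCH /assistant/:id` (placeholder assistant ID and API key assumed):
+
+```bash
+# Update an assistant's analysis plan (sketch; substitute your own assistant ID and API key)
+curl -X PATCH 'https://api.vapi.ai/assistant/your_assistant_id' \
+  -H 'authorization: Bearer YOUR_API_KEY' \
+  -H 'content-type: application/json' \
+  --data-raw '{
+    "analysisPlan": {
+      "summaryPrompt": "Custom summary prompt text",
+      "successEvaluationPrompt": "Evaluate the call based on these criteria:...",
+      "successEvaluationRubric": "Checklist"
+    }
+  }'
+```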
+
+By customizing these properties, you can tailor the call analysis to meet your specific needs and gain valuable insights from your calls.
\ No newline at end of file
diff --git a/fern/assistants/dynamic-variables.mdx b/fern/assistants/dynamic-variables.mdx
new file mode 100644
index 0000000..9dfa9ff
--- /dev/null
+++ b/fern/assistants/dynamic-variables.mdx
@@ -0,0 +1,69 @@
+---
+title: Dynamic Variables
+subtitle: >-
+ Vapi makes it easy to personalize an assistant's messages and prompts using
+ variables, allowing each call to be customized.
+slug: assistants/dynamic-variables
+---
+
+
+Prompts, messages, and other assistant properties can be dynamically set when starting a call based on templates.
+These templates are defined using double curly braces `{{variableName}}`.
+This is useful when you want to customize the assistant for a specific call.
+
+For example, you could set the assistant's first message to "Hello, `{{name}}`!" and then set `name` to `John` when starting the call
+by passing `assistantOverrides` with `variableValues` to the API or SDK:
+
+```json
+{
+ "variableValues": {
+ "name": "John"
+ }
+}
+```
+
+## Utilizing Dynamic Variables in Phone Calls
+
+To leverage dynamic variables during phone calls, follow these steps:
+
+1. **Prepare Your Request:** Construct a JSON payload containing the following key-value pairs:
+
+ * `assistantId`: Replace `"your-assistant-id"` with the actual ID of your assistant.
+ * `assistantOverrides`: This object is used to customize your assistant's behavior.
+ * `variableValues`: An object containing the dynamic variables you want to use, in the format `{ "variableName": "variableValue" }`. For example, `{ "name": "John" }`.
+ * `customer`: An object representing the call recipient.
+ * `number`: Replace `"+1xxxxxxxxxx"` with the phone number you wish to call (in E.164 format).
+ * `phoneNumberId`: Replace `"your-phone-id"` with the ID of your registered phone number. You can find it on the [Phone Numbers](https://dashboard.vapi.ai/phone-numbers) page in the dashboard.
+
+2. **Send the Request:** Dispatch the JSON payload to the `/call/phone` endpoint using your preferred method (e.g., HTTP POST request).
+
+```json
+{
+ "assistantId": "your-assistant-id",
+ "assistantOverrides": {
+ "variableValues": {
+ "name": "John"
+ }
+ },
+ "customer": {
+ "number": "+1xxxxxxxxxx"
+ },
+ "phoneNumberId": "your-phone-id"
+}
+```
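+
+As a concrete sketch, here is the same request sent with curl (assuming a `YOUR_API_KEY` placeholder for your Vapi API key):
+
+```bash
+# Start an outbound phone call with a dynamic variable value
+curl -X POST 'https://api.vapi.ai/call/phone' \
+  -H 'authorization: Bearer YOUR_API_KEY' \
+  -H 'content-type: application/json' \
+  --data-raw '{
+    "assistantId": "your-assistant-id",
+    "assistantOverrides": {
+      "variableValues": { "name": "John" }
+    },
+    "customer": { "number": "+1xxxxxxxxxx" },
+    "phoneNumberId": "your-phone-id"
+  }'
+```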
+
+## Default Variables
+
+By default, the following variables are automatically filled based on the current (UTC) time,
+meaning that you don't need to set them manually in `variableValues`:
+
+| Variable | Description | Example |
+| ----------- | --------------------------- | -------------------- |
+| `{{now}}` | Current date and time (UTC) | Jan 1, 2024 12:00 PM |
+| `{{date}}` | Current date (UTC) | Jan 1, 2024 |
+| `{{time}}` | Current time (UTC) | 12:00 PM |
+| `{{month}}` | Current month (UTC) | January |
+| `{{day}}` | Current day of month (UTC) | 1 |
+| `{{year}}` | Current year (UTC) | 2024 |
+
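+For example, a first message template like the one below is filled in automatically at call time, with no `variableValues` required:
+
+```json
+{
+  "firstMessage": "Hello! Today is {{date}} and the time is {{time}}."
+}
+```
+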
+**Note:** You must use the `{{variableName}}` format in all your prompts, whether in the first message or anywhere else you want the variable to be filled in.
diff --git a/fern/assistants/function-calling.mdx b/fern/assistants/function-calling.mdx
new file mode 100644
index 0000000..af0fa0f
--- /dev/null
+++ b/fern/assistants/function-calling.mdx
@@ -0,0 +1,112 @@
+---
+title: Function Calling
+subtitle: Additional Capabilities for Your Assistants
+slug: assistants/function-calling
+---
+
+
+Vapi voice assistants are given three additional functions: `transferCall`, `endCall`, and `dialKeypad`. These functions can be used to transfer calls, hang up calls, and enter digits on the keypad.
+
+You **do not** need to add these functions to your model's `functions` array.
+
+#### Transfer Call
+
+When a `forwardingPhoneNumber` is present on an assistant, the assistant will be given a `transferCall` function. This function can be used to transfer the call to the `forwardingPhoneNumber`.
+
+```json
+{
+ "model": {
+ "provider": "openai",
+ "model": "gpt-3.5-turbo",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are an assistant at a law firm. When the user asks to be transferred, use the transferCall function."
+ }
+ ]
+ },
+ "forwardingPhoneNumber": "+16054440129"
+}
+```
+
+#### End Call
+
+This function is provided when `endCallFunctionEnabled` is enabled on the assistant. The assistant can use this function to end the call.
+
+```json
+{
+ "model": {
+ "provider": "openai",
+ "model": "gpt-3.5-turbo",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are an assistant at a law firm. If the user is being mean, use the endCall function."
+ }
+ ]
+ },
+ "endCallFunctionEnabled": true
+}
+```
+
+#### Dial Keypad
+
+This function is provided when `dialKeypadFunctionEnabled` is enabled on the assistant. The assistant will be able to enter digits on the keypad.
+
+```json
+{
+ "model": {
+ "provider": "openai",
+ "model": "gpt-3.5-turbo",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are an assistant at a law firm. When you hit a menu, use the dialKeypad function to enter the digits."
+ }
+ ]
+ },
+ "dialKeypadFunctionEnabled": true
+}
+```
+
+### Custom Functions
+
+In addition to the predefined functions, you can also define custom functions. These functions are similar to OpenAI functions and your chosen LLM will trigger them as needed based on your instructions.
+
+The functions array in the assistant definition allows you to define custom functions that the assistant can call during a conversation. Each function is an object with the following properties:
+
+- `name`: The name of the function. It must be a string containing a-z, A-Z, 0-9, underscores, or dashes, with a maximum length of 64.
+- `description`: A brief description of what the function does. This is used by the AI to decide when and how to call the function.
+- `parameters`: An object that describes the parameters the function accepts. The `type` property should be `"object"`, and the `properties` property should be an object where each key is a parameter name and each value is an object describing the type and purpose of the parameter.
+
+Here's an example of a function definition:
+
+```json
+{
+ "functions": [
+ {
+ "name": "bookAppointment",
+ "description": "Used to book the appointment.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "datetime": {
+ "type": "string",
+ "description": "The date and time of the appointment in ISO format."
+ }
+ }
+ }
+ }
+ ]
+}
+```
+
+In this example, the `bookAppointment` function accepts one parameter, `datetime`, which is a string representing the date and time of the appointment in ISO format.
+
+In addition to defining custom functions, you can specify a `serverUrl` where Vapi will send the function call information. This URL can be configured at the account level or at the assistant level.
+
+At the account level, the `serverUrl` is set in the Vapi Dashboard. All assistants under the account will use this URL by default for function calls.
+
+At the assistant level, the `serverUrl` can be specified in the assistant configuration when creating or updating an assistant. This allows different assistants to use different URLs for function calls. If a `serverUrl` is specified at the assistant level, it overrides the account-level Server URL.
+
+If the `serverUrl` is not defined either at the account level or the assistant level, the function call will simply be added to the chat history. This can be particularly useful when you want a function call to trigger an action on the frontend.
+
+For instance, the frontend can listen for specific function calls in the chat history and respond by updating the user interface or performing other actions. This allows for a dynamic and interactive user experience, where the frontend can react to changes in the conversation in real time.
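+
+As an illustrative sketch only (the authoritative payload schema may include additional fields), the request Vapi sends to your `serverUrl` for the example above would look roughly like:
+
+```json
+{
+  "message": {
+    "type": "function-call",
+    "functionCall": {
+      "name": "bookAppointment",
+      "parameters": { "datetime": "2024-06-01T15:00:00Z" }
+    }
+  }
+}
+```
+
+Your server can then reply with a body like `{ "result": "Appointment booked." }`, and that result is passed back to the model as the function's output.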
diff --git a/fern/assistants/persistent-assistants.mdx b/fern/assistants/persistent-assistants.mdx
new file mode 100644
index 0000000..c4adc39
--- /dev/null
+++ b/fern/assistants/persistent-assistants.mdx
@@ -0,0 +1,17 @@
+---
+title: Persistent Assistants
+subtitle: Should I use persistent assistants?
+slug: assistants/persistent-assistants
+---
+
+
+You might be wondering whether you should create an assistant via the `/assistant` endpoint and reference it by `assistantId`, or just specify the assistant configuration inline when starting a call.
+
+The `/assistant` endpoint exists for convenience, to save you from creating your own assistants table.
+
+
+Persistent assistants are a good fit when:
+
+- You won't be adding more assistant properties on top of ours.
+- You want to use the same assistant across multiple calls.
+
+
+Otherwise, you can just specify the assistant configuration when starting a call.
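+
+For example, here is a minimal sketch of starting a call with an inline (transient) assistant configuration instead of an `assistantId` (fields illustrative):
+
+```json
+{
+  "assistant": {
+    "firstMessage": "Hello! How can I help you today?",
+    "model": {
+      "provider": "openai",
+      "model": "gpt-3.5-turbo"
+    }
+  },
+  "phoneNumberId": "your-phone-id",
+  "customer": { "number": "+1xxxxxxxxxx" }
+}
+```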
diff --git a/fern/billing/billing-limits.mdx b/fern/billing/billing-limits.mdx
new file mode 100644
index 0000000..3b92f19
--- /dev/null
+++ b/fern/billing/billing-limits.mdx
@@ -0,0 +1,29 @@
+---
+title: Billing Limits
+subtitle: Set billing limits on your Vapi account.
+slug: billing/billing-limits
+---
+
+
+You can set billing limits in the billing section of your dashboard.
+
+
+ You can access your billing settings at
+ [dashboard.vapi.ai/billing](https://dashboard.vapi.ai/billing)
+
+
+### Setting a Monthly Billing Limit
+
+In your billing settings you can set a monthly billing limit:
+
+
+
+
+
+### Exceeding Billing Limits
+
+Once you have used all of your starter credits, or exceeded your set monthly usage limit, you will start seeing errors in your dashboard & via the API mentioning `Billing Limits Exceeded`.
+
+
+
+
diff --git a/fern/billing/cost-routing.mdx b/fern/billing/cost-routing.mdx
new file mode 100644
index 0000000..59c887b
--- /dev/null
+++ b/fern/billing/cost-routing.mdx
@@ -0,0 +1,55 @@
+---
+title: Cost Routing
+subtitle: Learn more about how your Vapi account is billed for provider expenses.
+slug: billing/cost-routing
+---
+
+
+
+
+
+
+During calls, requests will be made to different providers in the voice pipeline:
+
+- **transcription providers:** providers conducting speech-to-text
+- **model providers:** LLM providers
+- **voice providers:** providers conducting text-to-speech
+- **telephony providers:** providers like [Twilio](https://www.twilio.com)/[Vonage](https://www.vonage.com) that facilitate phone calls
+
+
+ Per-minute telephony costs only occur during inbound/outbound phone calling. Web calls do not
+ incur this cost.
+
+
+## Where Provider Costs End Up
+
+There are two places these charges can end up:
+
+1. **Provider-side:** in the account you have with the provider.
+2. **With Vapi:** in your Vapi account.
+
+
+
+ If we have [provider keys](/customization/provider-keys) on file for a provider, the cost will be seen directly
+ in your account with the provider. Vapi will have made the request on your behalf with your provider key.
+
+ No charge will be made to your Vapi account.
+
+ Charges for inbound/outbound phone calling (telephony) will always end up where the phone number
+ was provisioned. If you import a phone number from Twilio or Vonage, per-minute charges for calls
+ on those numbers will appear in your account with that provider.
+
+
+
+ If no key is found on file for the provider, Vapi will make the API request itself (with Vapi's own keys, at Vapi's expense). This expense is then passed on [**at-cost**](/glossary#at-cost) and billed directly to your Vapi account.
+
+ No charge will show up provider-side.
+
+
+
+
+## Billing That "Just Works"
+
+The central idea is that everything is designed to "just work".
+
+Whether you are billed provider-side or on Vapi's side, you will never be charged a margin on provider fees incurred during calls.
diff --git a/fern/billing/estimating-costs.mdx b/fern/billing/estimating-costs.mdx
new file mode 100644
index 0000000..98a7c0c
--- /dev/null
+++ b/fern/billing/estimating-costs.mdx
@@ -0,0 +1,229 @@
+---
+title: Estimating Costs
+subtitle: Get information on your voice pipeline's projected costs.
+slug: billing/estimating-costs
+---
+
+
+Since there are many moving parts in the voice pipeline that can incur cost, it's useful to get a good estimate of your final projected per-minute cost for calls.
+
+### Dashboard Cost Estimates
+
+The Vapi dashboard provides static cost projections on a per-assistant basis, so you can get a rough idea of the costs your assistant will incur during live execution.
+
+
+ You can view your dashboard at [dashboard.vapi.ai](https://dashboard.vapi.ai/)
+ & get started with our [dashboard quickstart](/quickstart/dashboard).
+
+
+
+
+
+
+### General Provider Estimates
+
+The provider costs listed below are subject to change as we get more data, but they will always reflect our best estimate of the provider costs per minute:
+
+
+
+ | Provider | \$/min (≈) | \$/hour |
+ | -------- | -------------- | --------- |
+ | Deepgram | **\$0.01/min** | \$0.60/hr |
+
+
+ | Provider | $/min (≈) | $/hour |
+ | ----------------------- | --------------- | ------------- |
+ | OpenAI (gpt-4-turbo) | **$0.20/min** | $12.00/hr |
+ | OpenAI (gpt-3.5-turbo) | **$0.02/min** | $1.20/hr |
+
+
+
+ | Provider | $/min (≈) | $/hour |
+ | ---------- | --------------- | ---------- |
+ | ElevenLabs | **$0.04/min** | $2.40/hr |
+ | PlayHT | **$0.07/min** | $4.20/hr |
+ | Deepgram | **$0.02/min** | $1.20/hr |
+ | OpenAI | **$0.02/min** | $1.20/hr |
+ | RimeAI | **$0.03/min** | $1.80/hr |
+ | Azure | **$0.02/min** | $1.20/hr |
+ | Neets | **$0.005/min** | $0.30/hr |
+ | LMNT | **$0.03/min** | $1.80/hr |
+
+
+
+ | Provider | $/min (≈) | $/hour |
+ | -------- | --------------- | ---------- |
+ | Twilio | **$0.01/min** | $0.60/hr |
+ | Vonage | **$0.01/min** | $0.60/hr |
+
+
+
+
+### Provider Pricings
+
+Here are direct links to different providers' pricing pages to assist in estimating cost:
+
+
+
+
+
+ Deepgram transcription pricing.
+
+
+
+
+
+
+ OpenAI model pricing.
+
+
+
+
+
+
+ ElevenLabs voice pricing.
+
+
+ PlayHT voice pricing.
+
+
+ Deepgram voice pricing.
+
+
+ OpenAI voice pricing.
+
+
+ RimeAI voice pricing.
+
+
+ Azure voice pricing.
+
+
+ Neets voice pricing.
+
+
+ LMNT voice pricing.
+
+
+
+
+
+
+ Twilio phone call pricing.
+
+
+ Vonage phone call pricing.
+
+
+
+
+
+### Calling Your Assistant
+
+One good way to get an empirical per-minute cost on your whole voice pipeline is to actually call in, use it for a few minutes, & observe the average cost/minute at the call level.
+
+
+ You can view a breakdown of your cost per call in your dashboard at
+ [dashboard.vapi.ai/calls](https://dashboard.vapi.ai/calls)
+
+
+Your call cost breakdowns will look something like this:
+
+
+
+
+
+Here is what each line item corresponds to:
+
+- `STT`: Speech-to-text (providers often bill per-minute, prorated)
+- `LLM`: LLM inference (providers often bill per-million or per-thousand tokens)
+- `TTS`: Text-to-speech (providers often bill per-character)
+- `Vapi`: the Vapi platform fee of 5¢/minute (prorated per-second)
+- `Transport`: telephony costs (incurred for inbound/outbound phone calls to/from a phone number) (providers often bill per-minute)
+
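+For example, using the general estimates above: Deepgram (\$0.01/min) + GPT-3.5-turbo (\$0.02/min) + ElevenLabs (\$0.04/min) + Twilio (\$0.01/min) + the Vapi fee (\$0.05/min) comes out to roughly \$0.13 per call minute.
+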
+This method can be effective because **per-minute costs will not scale** with the amount of call minutes you consume. The cost for the 1st minute will be the same as the 10,000th minute.
+
+
+ Volume pricing is available on enterprise plans. Check out
+ [enterprise](/enterprise) to learn more.
+
diff --git a/fern/billing/examples.mdx b/fern/billing/examples.mdx
new file mode 100644
index 0000000..0e1d618
--- /dev/null
+++ b/fern/billing/examples.mdx
@@ -0,0 +1,188 @@
+---
+title: Billing Examples
+subtitle: End-to-end examples estimating voice workflow cost on Vapi.
+slug: billing/examples
+---
+
+
+
+
+
+
+## Case Examples
+
+Here are a few case examples of what billing would look like on Vapi for different voice pipeline configurations.
+
+
+
+ A customer is looking to use Vapi to assist their call center staff taking inbound phone calls:
+
+
+
+
+ "I want to use Vapi voice assistants to support my human customer service reps in a call
+ center. However, I have a custom LLM I would prefer to use instead of the ones offered through
+ the platform.
+
+
+
+ Expected monthly usage will be 10,000 calls, with an average of 2 minutes per call. For Voice,
+ PlayHT will suit our needs.
+
+
+
+ What is my pricing breakdown?"
+
+
+
+ The providers used will determine per-minute cost. The following providers will be involved:
+
+
+
+
+
+
+
+
+
+
+
+ We will break down the costs of each piece of the voice pipeline, then later multiply by call volume:
+
+
+ **Deepgram:** ≈ \$0.01/min
+
+ **Custom Model:** ≈ \$0.04/min (vague assumption, can vary widely)
+
+ **PlayHT:** ≈ \$0.07/min
+
+ **Twilio:** ≈ \$0.02/min (inbound, toll-free) (see Twilio [phone call pricing](https://www.twilio.com/en-us/voice/pricing))
+
+ **Vapi:** \$0.05/min
+
+ Our [estimating costs](/billing/estimating-costs) guide can help you determine these values.
+
+
+
+ Call Minutes / Month: 10,000 calls x 2 min/call = **20,000 call minutes**
+
+
+ **Transcription:** \$0.01/min x 20,000 = **\$200**
+
+ **Custom Model:** \$0.04/min x 20,000 = **\$800**
+
+ **Voice:** ≈ \$0.07/min x 20,000 = **\$1,400**
+
+ **Telephony:** ≈ \$0.02/min x 20,000 = **\$400**
+
+ **Vapi:** \$0.05/min x 20,000 = **\$1,000**
+
+ **Total**: **\$3,800**/mo
+
+
+
+
+
+
+ A customer doing real estate lead generation is looking to use Vapi to automate parts of their sales calling operation:
+
+
+
+ "I have a company that does real estate lead generation, and would like to use Vapi voice
+ assistants to automate parts of my sales process.
+
+ Calls would average ~4 minutes; for Model I want to use GPT-3.5-turbo through your platform, and for Voice I will be using ElevenLabs.
+
+ I’d like a breakdown based on sending 1,000 outbound calls in one month."
+
+
+
+
+ **Deepgram:** ≈ \$0.01/min
+
+ **OpenAI (gpt-3.5-turbo):** ≈ \$0.02/min
+
+ **ElevenLabs:** ≈ \$0.04/min
+
+ **Vonage:** ≈ \$0.01/min (outbound call) (see Vonage's [phone call pricing](https://www.vonage.com/communications-apis/voice/pricing))
+
+ **Vapi:** \$0.05/min
+
+ Our [estimating costs](/billing/estimating-costs) guide can help you determine these values.
+
+
+
+ Call Minutes / Month: 1,000 calls x 4 min/call = **4,000 call minutes**
+
+
+ **Transcription:** \$0.01/min x 4,000 = **\$40**
+
+ **Model:** \$0.02/min x 4,000 = **\$80**
+
+ **Voice:** ≈ \$0.04/min x 4,000 = **\$160**
+
+ **Telephony:** ≈ \$0.01/min x 4,000 = **\$40**
+
+ **Vapi:** \$0.05/min x 4,000 = **\$200**
+
+ **Total**: **\$520**/mo
+
+
+
+
+
+
+ A web engineer is looking to develop a website that helps job candidates practice for job interviews. They are looking to use Vapi for their virtual interviewers:
+
+
+
+ "Hi, I'm looking to develop a web application for mock interviews. Users will be able to practice for a variety
+ of job interviews with AI interviewers.
+
+ Interviews will be 30 minutes each (at max); for model I'll be using a custom open-source model hosted with Baseten & for voice I'll be using PlayHT.
+
+ How much would this cost me each month if I service 1,000 interviews per month?"
+
+
+
+
+ **Transcriber:** Deepgram
+
+ **Model:** custom model
+
+ **Voice:** PlayHT
+
+
+
+
+
+
+
+
+
+
+ **Deepgram:** ≈ \$0.01/min
+
+ **Custom Model:** ≈ \$0.02/min (vague assumption, can vary widely)
+
+ **PlayHT:** ≈ \$0.07/min
+
+ **Vapi:** \$0.05/min
+
+ Our [estimating costs](/billing/estimating-costs) guide can help you determine these values.
+
+
+
+ Call Minutes / Month: 1,000 calls x 30 min/call = **30,000 call minutes**
+
+
+ **Transcription:** \$0.01/min x 30,000 = **\$300**
+
+ **Model:** \$0.02/min x 30,000 = **\$600**
+
+ **Voice:** ≈ \$0.07/min x 30,000 = **\$2,100**
+
+ **Vapi:** \$0.05/min x 30,000 = **\$1,500**
+
+ **Total**: **\$4,500**/mo
+
+
+
+
+
+
+
+### Further Reading
+
+
+
+ Learn more about where provider costs end up getting billed.
+
+
+ Learn more about determining per-minute costs for providers.
+
+
diff --git a/fern/blocks.mdx b/fern/blocks.mdx
new file mode 100644
index 0000000..1f00745
--- /dev/null
+++ b/fern/blocks.mdx
@@ -0,0 +1,59 @@
+---
+title: Introduction
+subtitle: Breaking down bot conversations into smaller, more manageable prompts
+slug: blocks
+---
+
+
+
+We're currently running a beta for **Blocks**, an upcoming feature from [Vapi.ai](http://vapi.ai/) aimed at improving bot conversations. The problem we've noticed is that single LLM prompts are prone to hallucinations, unreliable tool calls, and can’t handle many-step complex instructions.
+
+**By breaking the conversation into smaller, more manageable prompts**, we can guarantee the bot will do this, then that, or if this happens, then that happens. It’s like having a checklist for conversations — less room for error, more room for getting things right.
+
+
+Here’s an example: For food ordering, this is what a prompt would look like.
+
+
+
+Example Prompt
+
+```text
+[Identity]
+You are a friendly and efficient assistant for a food truck that serves burgers, fries, and drinks.
+
+[Task]
+1. Greet the customer warmly and inquire about their main order.
+2. Offer suggestions for the main order if needed.
+3. If they choose a burger, suggest upgrading to a combo with fries and a drink, offering clear options (e.g., regular or special fries, different drink choices).
+4. Confirm the entire order to ensure accuracy.
+5. Suggest any additional items like desserts or sauces.
+6. Thank the customer and let them know when their order will be ready.
+```
+
+
+
+
+
+
+
+
+
+
+
+There are three core types of Blocks: [Conversation](https://api.vapi.ai/api#:~:text=ConversationBlock), [Tool-call](https://api.vapi.ai/api#:~:text=ToolCallBlock), and [Workflow](https://api.vapi.ai/api#:~:text=WorkflowBlock). Each type serves a different role in shaping how your assistant engages with users.
+
+
+
+ Blocks is currently in beta. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
+
+
+## Advanced Concepts
+
+
+
+ Learn how to structure the flow of your conversation
+
+
+ Explore the different block types and how to use them
+
+
\ No newline at end of file
diff --git a/fern/blocks/block-types.mdx b/fern/blocks/block-types.mdx
new file mode 100644
index 0000000..ea10ce8
--- /dev/null
+++ b/fern/blocks/block-types.mdx
@@ -0,0 +1,18 @@
+---
+title: Block Types
+subtitle: Building the Logic and Actions for Each Step in Your Conversation
+slug: blocks/block-types
+---
+
+
+[**Blocks**](https://api.vapi.ai/api#/Blocks/BlockController_create) are the functional units within a Step, defining what action happens at each stage of a conversation. Each Step can contain only one Block, and there are three main types of Blocks, each designed to handle different aspects of conversation flow.
+
+
+ Blocks is currently in beta. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
+
+
+#### Types
+
+- [**Conversation:**](https://api.vapi.ai/api#:~:text=ConversationBlock) This block type manages interactions between the assistant and the user. A conversation block is used when the assistant needs to ask the user for specific information, such as contact details or preferences.
+- [**Tool-call:**](https://api.vapi.ai/api#:~:text=ToolCallBlock) This block allows the assistant to make external tool calls.
+- [**Workflow:**](https://api.vapi.ai/api#:~:text=WorkflowBlock) This block type enables the creation of subflows, which are smaller sets of steps executed within a Block. It can contain an array of steps (`steps[]`) and uses an `inputSchema` to define the data needed to initiate the workflow, along with an `outputSchema` to handle the data returned after completing the subflow. Workflow blocks are ideal for organizing complex processes or reusing workflows across different parts of the conversation.
\ No newline at end of file
diff --git a/fern/blocks/steps.mdx b/fern/blocks/steps.mdx
new file mode 100644
index 0000000..b5391bf
--- /dev/null
+++ b/fern/blocks/steps.mdx
@@ -0,0 +1,70 @@
+---
+title: Steps
+subtitle: Building and Controlling Conversation Flow for Your Assistants
+slug: blocks/steps
+---
+
+
+[**Steps**](https://api.vapi.ai/api#:~:text=HandoffStep) are the core building blocks that dictate how conversations progress in a bot interaction. Each Step represents a distinct point in the conversation where the bot performs an action, gathers information, or decides where to go next. Think of Steps as checkpoints in a conversation that guide the flow, manage user inputs, and determine outcomes.
+
+
+ Blocks is currently in beta. We're excited to have you try this new feature and welcome your [feedback](https://discord.com/invite/pUFNcf2WmH) as we continue to refine and improve the experience.
+
+
+#### Features
+
+- **Output:** The data or response expected from the step, as outlined in the block's `outputSchema`.
+- **Input:** The data necessary for the step to execute, defined in the block's `inputSchema`.
+- [**Destinations:**](https://api.vapi.ai/api#:~:text=StepDestination) This can be determined by a simple linear progression or based on specific criteria, like conditions or rules set within the Step. This enables dynamic decision-making, allowing the assistant to choose the next Step depending on what happens during the conversation (e.g., user input, a specific value, or a condition being met).
+
+#### Example
+
+```json
+ {
+ "type": "handoff",
+ "name": "get_user_order",
+ "input": {
+ "name": "John Doe",
+ "email": "johndoe@example.com"
+ },
+ "destinations": [
+ {
+ "type": "step",
+ "stepName": "confirm_order",
+ "conditions": [
+ {
+ "type": "model-based",
+ "instruction": "If the user has provided an order"
+ }
+ ]
+ }
+ ],
+ "block": {
+ "name": "ask_for_order",
+ "type": "conversation",
+ "inputSchema": {
+ "type": "object",
+ "required": ["name", "email"],
+ "properties": {
+ "name": { "type": "string", "description": "The customer's name" },
+ "email": { "type": "string", "description": "The customer's email" }
+ }
+ },
+ "instruction": "Greet the customer and ask for their name and email. Then ask them what they'd like to order.",
+ "outputSchema": {
+ "type": "object",
+ "required": ["orders", "name"],
+ "properties": {
+ "orders": {
+ "type": "string",
+ "description": "The customer's order, e.g., 'burger with fries'"
+ },
+ "name": {
+ "type": "string",
+ "description": "The customer's name"
+ }
+ }
+ }
+ }
+}
+```
\ No newline at end of file
diff --git a/fern/call-forwarding.mdx b/fern/call-forwarding.mdx
new file mode 100644
index 0000000..0cd5b5b
--- /dev/null
+++ b/fern/call-forwarding.mdx
@@ -0,0 +1,262 @@
+---
+title: Call Forwarding
+slug: call-forwarding
+---
+
+
+Vapi's call forwarding functionality allows you to redirect calls to different phone numbers based on specific conditions using tools. This guide explains how to set up and use the `transferCall` function for call forwarding.
+
+## Key Concepts
+
+### Call Forwarding Tools
+
+- **`transferCall` Tool**: This tool enables call forwarding to predefined phone numbers with specific messages based on the destination.
+
+### Parameters and Messages
+
+- **Destinations**: A list of phone numbers where the call can be forwarded.
+- **Messages**: Custom messages that inform the caller about the call being forwarded.
+
+## Setting Up Call Forwarding
+
+### 1. Defining Destinations and Messages
+
+The `transferCall` tool includes a list of destinations and corresponding messages to notify the caller:
+
+```json
+{
+ "tools": [
+ {
+ "type": "transferCall",
+ "destinations": [
+ {
+ "type": "number",
+ "number": "+1234567890",
+ "message": "I am forwarding your call to Department A. Please stay on the line."
+ },
+ {
+ "type": "number",
+ "number": "+0987654321",
+ "message": "I am forwarding your call to Department B. Please stay on the line."
+ },
+ {
+ "type": "number",
+ "number": "+1122334455",
+ "message": "I am forwarding your call to Department C. Please stay on the line."
+ }
+ ],
+ "function": {
+ "name": "transferCall",
+ "description": "Use this function to transfer the call. Only use it when following instructions that explicitly ask you to use the transferCall function. DO NOT call this function unless you are instructed to do so.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "destination": {
+ "type": "string",
+ "enum": [
+ "+1234567890",
+ "+0987654321",
+ "+1122334455"
+ ],
+ "description": "The destination to transfer the call to."
+ }
+ },
+ "required": [
+ "destination"
+ ]
+ }
+ },
+ "messages": [
+ {
+ "type": "request-start",
+ "content": "I am forwarding your call to Department A. Please stay on the line.",
+ "conditions": [
+ {
+ "param": "destination",
+ "operator": "eq",
+ "value": "+1234567890"
+ }
+ ]
+ },
+ {
+ "type": "request-start",
+ "content": "I am forwarding your call to Department B. Please stay on the line.",
+ "conditions": [
+ {
+ "param": "destination",
+ "operator": "eq",
+ "value": "+0987654321"
+ }
+ ]
+ },
+ {
+ "type": "request-start",
+ "content": "I am forwarding your call to Department C. Please stay on the line.",
+ "conditions": [
+ {
+ "param": "destination",
+ "operator": "eq",
+ "value": "+1122334455"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+### 2. Using the `transferCall` Function
+
+When the assistant needs to forward a call, it uses the `transferCall` function with the appropriate destination:
+
+```json
+{
+ "function": {
+ "name": "transferCall",
+ "parameters": {
+ "destination": "+1234567890"
+ }
+ }
+}
+```
+
+### 3. Customizing Messages
+
+Customize the messages for each destination to provide clear information to the caller:
+
+```json
+{
+ "messages": [
+ {
+ "type": "request-start",
+ "content": "I am forwarding your call to Department A. Please stay on the line.",
+ "conditions": [
+ {
+ "param": "destination",
+ "operator": "eq",
+ "value": "+1234567890"
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Instructing the Assistant
+
+Use the system prompt to guide the assistant on when to utilize each forwarding number. For example:
+
+- "If the user asks for sales, call the `transferCall` function with `+1234567890`."
+- "If the user requests technical support, use the `transferCall` function with `+0987654321`."
+
+## Troubleshooting
+
+- If calls are not being transferred, check the logs for errors.
+- Ensure that the correct destination numbers are used.
+- Ensure you have written the function description properly to indicate where you want to forward the call.
+- Test the call forwarding setup thoroughly to confirm its functionality.
+
+## Call Transfers Mode
+
+Vapi supports two types of call transfers:
+
+1. **Blind Transfer** (default): Directly transfers the call to another agent without providing any prior information to the recipient.
+2. **Warm Transfer**: Transfers the call to another agent after providing context about the call. The context can be either a full transcript or a summary, based on your configuration.
+
+### Warm Transfer
+
+To implement a warm transfer, add a `transferPlan` object to the `transferCall` tool syntax and specify the transfer mode.
+
+#### Modes of Warm Transfer
+
+#### 1. Warm Transfer with Summary
+
+In this mode, Vapi provides a summary of the call to the recipient before transferring.
+
+* **Configuration:**
+ * Set the `mode` to `"warm-transfer-with-summary"`.
+ * Define a `summaryPlan` specifying how the summary should be generated.
+ * Use the `{{transcript}}` variable to include the call transcript.
+
+* **Example:**
+
+```json
+"transferPlan": {
+ "mode": "warm-transfer-with-summary",
+ "summaryPlan": {
+ "enabled": true,
+ "messages": [
+ {
+ "role": "system",
+ "content": "Please provide a summary of the call."
+ },
+ {
+ "role": "user",
+ "content": "Here is the transcript:\n\n{{transcript}}\n\n"
+ }
+ ]
+ }
+}
+```
+
+#### 2. Warm Transfer with Message
+
+In this mode, Vapi delivers a custom static message to the recipient before transferring the call.
+
+* **Configuration:**
+ * Set the `mode` to `"warm-transfer-with-message"`.
+ * Provide the custom message in the `message` property.
+ * Note that the `{{transcript}}` variable is not available in this mode.
+
+* **Example:**
+
+```json
+"transferPlan": {
+ "mode": "warm-transfer-with-message",
+ "message": "Hey, this call has been forwarded through Vapi."
+}
+```
+
+#### Complete Example
+
+Here is a full example of a `transferCall` payload using the warm transfer with summary mode:
+
+```json
+{
+ "type": "transferCall",
+ "messages": [
+ {
+ "type": "request-start",
+ "content": "I'll transfer you to someone who can help."
+ }
+ ],
+ "destinations": [
+ {
+ "type": "number",
+ "number": "+918936850777",
+ "description": "Transfer the call",
+ "transferPlan": {
+ "mode": "warm-transfer-with-summary",
+ "summaryPlan": {
+ "enabled": true,
+ "messages": [
+ {
+ "role": "system",
+ "content": "Please provide a summary of the call."
+ },
+ {
+ "role": "user",
+ "content": "Here is the transcript:\n\n{{transcript}}\n\n"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+**Note:** In all warm transfer modes, the `{{transcript}}` variable contains the full transcript of the call and can be used within the `summaryPlan`.
diff --git a/fern/calls/call-ended-reason.mdx b/fern/calls/call-ended-reason.mdx
new file mode 100644
index 0000000..ae20ba4
--- /dev/null
+++ b/fern/calls/call-ended-reason.mdx
@@ -0,0 +1,57 @@
+---
+title: Call Ended Reason
+subtitle: A guide to understanding all call "Ended Reason" types & errors.
+slug: calls/call-ended-reason
+---
+
+
+This guide will discuss all possible `endedReason`s for a call.
+
+You can find these under the **"Ended Reason"** section of your [call
+logs](https://dashboard.vapi.ai/calls) (or under the `endedReason` field on the [Call
+Object](/api-reference/calls/get-call)).
+
+#### **Assistant-Related**
+
+- **assistant-ended-call**: The assistant intentionally ended the call based on the user's response.
+- **assistant-error**: This general error occurs within the assistant's logic or processing due to bugs, misconfigurations, or unexpected inputs.
+- **assistant-forwarded-call**: The assistant successfully transferred the call to another number or service.
+- **assistant-join-timed-out**: The assistant failed to join the call within the expected timeframe.
+- **assistant-not-found**: The specified assistant cannot be located or accessed, possibly due to an incorrect assistant ID or configuration issue.
+- **assistant-not-invalid**: The assistant ID provided is not valid or recognized by the system.
+- **assistant-not-provided**: No assistant ID was specified in the request, causing the system to fail.
+- **assistant-request-returned-error**: Communicating with the assistant resulted in an error, possibly due to network issues or problems with the assistant itself.
+- **assistant-request-returned-forwarding-phone-number**: The assistant triggered a call forwarding action, ending the current call.
+- **assistant-request-returned-invalid-assistant**: The assistant returned an invalid response or failed to fulfill the request properly.
+- **assistant-request-returned-no-assistant**: The assistant didn't provide any response or action to the request.
+- **assistant-said-end-call-phrase**: The assistant recognized a phrase or keyword triggering call termination.
+
+#### **Pipeline and LLM**
+
+These relate to issues within the AI processing pipeline or the Large Language Models (LLMs) used for understanding and generating text:
+
+- **pipeline-error-\***: Various error codes indicate specific failures within the processing pipeline, such as function execution, LLM responses, or external service integration. Examples include OpenAI, Azure OpenAI, Together AI, and several other LLMs or voice providers.
+- **pipeline-error-first-message-failed:** The system failed to deliver the first message. This issue usually occurs when you add your own provider key in the voice section. It may be due to exceeding your subscription or quota limit.
+- **pipeline-no-available-llm-model**: No suitable LLM was available to process the request.
+
+#### **Phone Calls and Connectivity**
+
+- **customer-busy**: The customer's line was busy.
+- **customer-ended-call**: The customer (the human end user) ended the call; this applies to both inbound and outbound calls.
+- **customer-did-not-answer**: The customer didn't answer the call. If you're building a use case where the bot needs to talk to automated IVRs, set `assistant.voicemailDetectionEnabled=false`.
+- **customer-did-not-give-microphone-permission**: The user didn't grant the necessary microphone access for the call.
+- **phone-call-provider-closed-websocket**: The connection with the call provider was unexpectedly closed.
+- **twilio-failed-to-connect-call**: The Twilio service, responsible for managing calls, failed to establish a connection.
+- **vonage-disconnected**: The call was disconnected by Vonage, another call management service.
+- **vonage-failed-to-connect-call**: Vonage failed to establish the call connection.
+- **vonage-rejected**: The call was rejected by Vonage due to an issue or configuration problem.
+
+#### **Other Reasons**
+
+- **exceeded-max-duration**: The call reached its maximum allowed duration and was automatically terminated.
+- **silence-timed-out**: The call was ended due to prolonged silence, indicating inactivity.
+- **voicemail**: The call was diverted to voicemail.
+
+#### **Unknown**
+
+- **unknown-error**: An unexpected error occurred, and the cause is unknown. Please [contact support](/support) with your `call_id` and account email address, and we will investigate.
diff --git a/fern/calls/call-features.mdx b/fern/calls/call-features.mdx
new file mode 100644
index 0000000..610e77b
--- /dev/null
+++ b/fern/calls/call-features.mdx
@@ -0,0 +1,114 @@
+---
+title: Live Call Control
+slug: calls/call-features
+---
+
+Vapi offers two main features that provide enhanced control over live calls:
+
+1. **Call Control**: This feature allows you to inject conversation elements dynamically during an ongoing call.
+2. **Call Listen**: This feature enables real-time audio data streaming using WebSocket connections.
+
+To use these features, you first need to obtain the URLs specific to the live call. These URLs can be retrieved by triggering a `/call` endpoint, which returns the `listenUrl` and `controlUrl` within the `monitor` object.
+
+## Obtaining URLs for Call Control and Listen
+
+To initiate a call and retrieve the `listenUrl` and `controlUrl`, send a POST request to the `/call` endpoint.
+
+### Sample Request
+
+```bash
+curl 'https://api.vapi.ai/call/phone' \
+  -H 'authorization: Bearer YOUR_API_KEY' \
+  -H 'content-type: application/json' \
+  --data-raw '{
+    "assistantId": "5b0a4a08-133c-4146-9315-0984f8c6be80",
+    "customer": {
+      "number": "+12345678913"
+    },
+    "phoneNumberId": "42b4b25d-031e-4786-857f-63b346c9580f"
+  }'
+```
+
+### Sample Response
+
+```json
+{
+ "id": "7420f27a-30fd-4f49-a995-5549ae7cc00d",
+ "assistantId": "5b0a4a08-133c-4146-9315-0984f8c6be80",
+ "phoneNumberId": "42b4b25d-031e-4786-857f-63b346c9580f",
+ "type": "outboundPhoneCall",
+ "createdAt": "2024-09-10T11:14:12.339Z",
+ "updatedAt": "2024-09-10T11:14:12.339Z",
+ "orgId": "eb166faa-7145-46ef-8044-589b47ae3b56",
+ "cost": 0,
+ "customer": {
+ "number": "+12345678913"
+ },
+ "status": "queued",
+ "phoneCallProvider": "twilio",
+ "phoneCallProviderId": "CA4c6793d069ef42f4ccad69a0957451ec",
+ "phoneCallTransport": "pstn",
+ "monitor": {
+ "listenUrl": "wss://aws-us-west-2-production1-phone-call-websocket.vapi.ai/7420f27a-30fd-4f49-a995-5549ae7cc00d/transport",
+ "controlUrl": ""
+ }
+}
+
+```
+
+## Call Control Feature
+
+Once you have the `controlUrl`, you can inject a message into the live call using a POST request. This can be done by sending a JSON payload to the `controlUrl`.
+
+### Example: Injecting a Message
+
+```bash
+curl -X POST 'https://aws-us-west-2-production1-phone-call-websocket.vapi.ai/7420f27a-30fd-4f49-a995-5549ae7cc00d/control' \
+  -H 'content-type: application/json' \
+  --data-raw '{
+    "type": "say",
+    "message": "Welcome to Vapi, this message was injected during the call."
+  }'
+```
+
+The message will be spoken in real-time during the ongoing call.
+
+## Call Listen Feature
+
+The `listenUrl` allows you to connect to a WebSocket and stream the audio data in real-time. You can either process the audio directly or save the binary data to analyze or replay later.
+
+### Example: Saving Audio Data from a Live Call
+
+Here is a simple implementation for saving the audio buffer from a live call using Node.js:
+
+```js
+const WebSocket = require('ws');
+const fs = require('fs');
+
+let pcmBuffer = Buffer.alloc(0);
+
+const ws = new WebSocket("wss://aws-us-west-2-production1-phone-call-websocket.vapi.ai/7420f27a-30fd-4f49-a995-5549ae7cc00d/transport");
+
+ws.on('open', () => console.log('WebSocket connection established'));
+
+ws.on('message', (data, isBinary) => {
+ if (isBinary) {
+ pcmBuffer = Buffer.concat([pcmBuffer, data]);
+ console.log(`Received PCM data, buffer size: ${pcmBuffer.length}`);
+ } else {
+ console.log('Received message:', JSON.parse(data.toString()));
+ }
+});
+
+ws.on('close', () => {
+ if (pcmBuffer.length > 0) {
+ fs.writeFileSync('audio.pcm', pcmBuffer);
+ console.log('Audio data saved to audio.pcm');
+ }
+});
+
+ws.on('error', (error) => console.error('WebSocket error:', error));
+
+```
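+
+To spot-check the captured audio, you can play the raw PCM back with ffplay. The sample rate and channel count below are assumptions; match them to your call's actual audio format:
+
+```bash
+# Raw 16-bit little-endian PCM; adjust -ar/-ac to your stream's format
+ffplay -f s16le -ar 16000 -ac 1 audio.pcm
+```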
diff --git a/fern/changelog.mdx b/fern/changelog.mdx
new file mode 100644
index 0000000..8f37b0e
--- /dev/null
+++ b/fern/changelog.mdx
@@ -0,0 +1,39 @@
+---
+title: Changelog
+subtitle: New features, improvements, and fixes every few days
+slug: changelog
+---
+
+
+# October 7 to October 8, 2024
+
+1. **New GPT-4o Model Support for Azure OpenAI**: You can now specify the `gpt-4o-2024-08-06` model in the `models` field when configuring Azure OpenAI credentials. Use this model to access the latest GPT-4o capabilities in your applications.
+
+2. **Specify Timestamps as Strings in `/logs`**: We now expect timestamps as strings when working with logs. Please make sure to handle this accordingly in your applications.
+
+
+# October 6 to October 7, 2024
+
+1. **Add Structured Outputs for OpenAI Functions in Assistant Tools**: You can use [OpenAI Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs) by specifying a new parameter called `strict` as true or false when creating or using `OpenAIFunction`s in `assistant.model.tools[type=function]`. Set the `name`, provide a `description` (up to 1000 characters), and specify `parameters` as a [JSON Schema object](https://json-schema.org/understanding-json-schema). See the [OpenAI guide](https://platform.openai.com/docs/guides/function-calling) for examples.
+
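+For illustration, a strict function tool might look like the following sketch (per OpenAI's convention, `strict` sits inside the function definition; verify the exact placement against the API reference):
+
+```json
+{
+  "model": {
+    "tools": [
+      {
+        "type": "function",
+        "function": {
+          "name": "get_weather",
+          "description": "Look up the current weather for a city.",
+          "strict": true,
+          "parameters": {
+            "type": "object",
+            "properties": {
+              "city": { "type": "string" }
+            },
+            "required": ["city"],
+            "additionalProperties": false
+          }
+        }
+      }
+    ]
+  }
+}
+```
+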
+2. **Secure Incoming SIP Phone Calls to Vapi Provided SIP Numbers**: You can now secure inbound SIP calls with digest authentication by specifying a `username`, `password`, and optional `realm` in the SIP INVITE's Authorization header. Create a secured SIP number by including an `authentication` object with these fields in the `POST /phone-number` request body. Example:
+```bash
+curl --location 'https://api.vapi.ai/phone-number' \
+--header 'Content-Type: application/json' \
+--header 'Authorization: Bearer {{API_KEY}}' \
+--data-raw '{
+ "provider": "vapi",
+ "sipUri": "sip:{{USERNAME}}@sip.vapi.ai",
+ "assistantId": "{{ASSISTANT_ID}}",
+ "name": "example phone number label for your reference",
+ "authentication": {
+ "realm": "sip.vapi.ai",
+ "username": "test@example.com",
+ "password": "example_password"
+ }
+}'
+```
+
+3. **Use Updated `handoff`, `callback` Steps in Blocks**: You can now use `assistant.model.steps[type=handoff]` and `assistant.model.steps[type=callback]` to control conversation flow in your assistant. Use `HandoffStep` to move to the next step linearly without returning to the previous step, ideal for sequential tasks like forms. Use `CallbackStep` to spawn a new conversation thread and return to the previous step once done, good for handling interruptions or sub-tasks within a conversation.
+
+4. **Use Step Destinations and Assignment Mutation in Blocks**: Specify destination nodes for each step with `assistant.model.steps[type=handoff].destinations[type=step]` to direct the workflow to specific steps based on certain conditions. Update context variables in each callback step with `mutations[type=assignment]`, for example: `assistant.model.steps[type=callback].mutations[type=assignment]`
\ No newline at end of file
diff --git a/fern/community/appointment-scheduling.mdx b/fern/community/appointment-scheduling.mdx
new file mode 100644
index 0000000..64bba56
--- /dev/null
+++ b/fern/community/appointment-scheduling.mdx
@@ -0,0 +1,143 @@
+---
+title: Appointment Scheduling
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/appointment-scheduling
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/comparisons.mdx b/fern/community/comparisons.mdx
new file mode 100644
index 0000000..7e80115
--- /dev/null
+++ b/fern/community/comparisons.mdx
@@ -0,0 +1,109 @@
+---
+title: Comparisons
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/comparisons
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/conferences.mdx b/fern/community/conferences.mdx
new file mode 100644
index 0000000..24db623
--- /dev/null
+++ b/fern/community/conferences.mdx
@@ -0,0 +1,34 @@
+---
+title: Conferences
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/conferences
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/demos.mdx b/fern/community/demos.mdx
new file mode 100644
index 0000000..7d2e8af
--- /dev/null
+++ b/fern/community/demos.mdx
@@ -0,0 +1,44 @@
+---
+title: Demos
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/demos
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/expert-directory.mdx b/fern/community/expert-directory.mdx
new file mode 100644
index 0000000..c37ffb3
--- /dev/null
+++ b/fern/community/expert-directory.mdx
@@ -0,0 +1,573 @@
+---
+title: Expert Directory
+subtitle: Certified Voice AI Expert - Vapi
+slug: community/expert-directory
+---
+
+
+Want to maximize your Voice AI? Work with a certified Vapi consultant who specializes in building Voice AI bots.
+
+Whether you need help deciding what to automate or assistance in building it, Vapi Experts have proven their expertise by supporting users and creating valuable video content for the community. Find the right fit here.
+
+
+- **6omb**: Voice agents and custom product development.
+- **Aitoflo**: At Aitoflo, we specialize in Voice AI and RPA services that streamline business operations and enhance customer interactions with realistic Voice AI.
+- **Amplify Voice**: Our hyper-focus on user experience will WOW your customers. Click to book a strategy session.
+- **Arose AI**: Arose AI creates custom inbound Voice AI solutions for small businesses. Our founder Tommy Chryst also provides 1-on-1 coaching.
+- **AIP**: We have built debt collection, appointment booking, customer service, and website assistants, among others. I, Valentino M., offer full integration and/or consultation services.
+- **Boldwave**: We implement voice agents to streamline appointment booking, enhance lead conversion, and provide superior 24/7 customer service.
+- **Brisk Logic**: We are an AI automation agency that specializes in designing advanced AI voice assistants capable of automating various tasks through phone calls.
+- **Cold-Calls.AI**: We specialize in helping companies within the German market integrate voice agents for both inbound and outbound calls.
+- **Don't Run Off AI**: Telephone voice AI systems, prompt engineering, and integrations.
+- **Flowzen**: Our agency offers Voice AI solutions using VAPI, in English and Spanish, integrated with platforms like GoHighLevel, Airtable, and Make.com.
+- **Globe AI**: I'm Aryan, founder of Globe AI. We build inbound voice assistants for any industry at any scale.
+- **INFLATE AI Automation Development Services**: Building voice systems for any industry, starting at $3k USD.
+- **Integraticus**: We build AI appointment setters for real estate agencies to qualify more leads, handle tailored outreach, and secure higher margins than ever before.
+- **Klen AI**: Custom AI voice assistants to handle calls, pre-qualify leads, schedule appointments, and more, using Vapi for seamless integration and productivity.
+- **Lunaris AI**: We create voice agents for all types of businesses.
+- **NukyLabs.AI**: All services for VAPI.ai automation.
+- **Otaku Solutions**: We handle the creation of voice assistants, automations, tracking, and training.
+- **Shadow AI**: Specializing in AI-powered inbound and outbound calling operations for all types of businesses, backed by industry expertise.
+- **Synthiq**: Multilingual AI voice agents (20+ countries). Any industry. Use your existing number. Expert AI consulting available.
+- **Temporal Labs LLC**: Temporal Labs LLC offers a unique solution through our parametrized development community, with industry-specific solutions and engagement.
+- **Value Added Tech**: Top-notch automation company. We specialise in Make.com (Silver partner), multiple CRMs, and VAPI.
+- **Strinq**: Strinq develops custom voice AI solutions for enterprises, offering bespoke software and high-quality human voice models.
+- **iffort.ai**: We revolutionize your business communication with our conversational agents, turning traditional chats and calls into effortless conversations.
+- **Msquare Automation**: Experts in AI voice assistants and business automation. Affordable, quality service from India. Gold partners of Make.com.
+
diff --git a/fern/community/ghl.mdx b/fern/community/ghl.mdx
new file mode 100644
index 0000000..d0d60d2
--- /dev/null
+++ b/fern/community/ghl.mdx
@@ -0,0 +1,45 @@
+---
+title: GoHighLevel
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/ghl
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/guide.mdx b/fern/community/guide.mdx
new file mode 100644
index 0000000..5382b0a
--- /dev/null
+++ b/fern/community/guide.mdx
@@ -0,0 +1,244 @@
+---
+title: Guide
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/guide
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/inbound.mdx b/fern/community/inbound.mdx
new file mode 100644
index 0000000..dcc303d
--- /dev/null
+++ b/fern/community/inbound.mdx
@@ -0,0 +1,80 @@
+---
+title: Inbound
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/inbound
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/knowledgebase.mdx b/fern/community/knowledgebase.mdx
new file mode 100644
index 0000000..2431403
--- /dev/null
+++ b/fern/community/knowledgebase.mdx
@@ -0,0 +1,63 @@
+---
+title: Knowledgebase
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/knowledgebase
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/myvapi.mdx b/fern/community/myvapi.mdx
new file mode 100644
index 0000000..9080728
--- /dev/null
+++ b/fern/community/myvapi.mdx
@@ -0,0 +1,212 @@
+---
+title: My Vapi
+slug: community/myvapi
+---
+
+
+# MyVapi User Guide
+
+Welcome to MyVapi! This guide will help you get started with using MyVapi, your custom GPT, to enhance your productivity and streamline your tasks. Follow the steps below to make the most out of this powerful tool.
+
+## Table of Contents
+- [Introduction to MyVapi](#introduction-to-myvapi)
+- [Getting Started](#getting-started)
+- [Accessing MyVapi](#accessing-myvapi)
+- [Using MyVapi](#using-myvapi)
+ - [Basic Commands](#basic-commands)
+- [Tips and Best Practices](#tips-and-best-practices)
+- [Troubleshooting](#troubleshooting)
+- [FAQ](#faq)
+
+## Introduction to MyVapi
+
+### What is MyVapi?
+MyVapi is a custom GPT designed to allow users to manage their Vapi accounts with ease. While the Vapi Dashboard provides limited functionality and using Postman can be cumbersome, MyVapi offers a streamlined way to interact with the Vapi API directly. This eliminates the back-and-forth usually associated with manual API interactions and JSON validation, making the process more efficient and user-friendly. MyVapi was created to help users understand the power of VAPI's API, and it uses 27 of the 33 available VAPI API endpoints.
+
+### Key Features
+- **Full API Access:** Leverage the full power of the Vapi API without the limitations of the Dashboard.
+- **Efficient Workflow:** Avoid the tedious back-and-forth of using PostMan and JSON validators.
+- **Voice Assistant Creation:** Simplify the process of creating voice assistants with the Vapi API.
+- **Troubleshooting:** Get real-time help and troubleshooting advice from ChatGPT.
+
+### Benefits of Using MyVapi
+- **Streamlined Management:** Manage your Vapi account more effectively and efficiently.
+- **Increased Productivity:** Save time and reduce effort in creating and managing voice assistants.
+- **Enhanced Support:** Receive guidance and support directly from ChatGPT to resolve any issues you encounter.
+
+### Additional Information
+MyVapi is not connected to your account, but it can assist with almost anything you need, including creating transient assistants, creating tools, retrieving information about a call, and more.
+
+## Getting Started
+
+### Accessing MyVapi
+MyVapi can be accessed in the following ways:
+- Visit [https://chatgpt.com/g/g-3luI9WIdj-myvapi](https://chatgpt.com/g/g-3luI9WIdj-myvapi)
+- Search for "MyVapi" in the GPT Store
+
+MyVapi is available to both free and paid ChatGPT accounts.
+
+## Using MyVapi
+
+### Basic Commands
+MyVapi provides a range of commands to interact with your Vapi account efficiently. Below are the basic commands and their functions:
+
+#### Assistant Management
+- **Get Assistants**
+ - **Method:** GET
+ - **Endpoint:** /assistant
+ - **Description:** Retrieve a list of all assistants.
+
+- **Create Assistant**
+ - **Method:** POST
+ - **Endpoint:** /assistant
+ - **Description:** Create a new assistant.
+
+- **Get Assistant by ID**
+ - **Method:** GET
+ - **Endpoint:** /assistant/{id}
+ - **Description:** Retrieve details of a specific assistant using its ID.
+
+- **Update Assistant by ID**
+ - **Method:** PATCH
+ - **Endpoint:** /assistant/{id}
+ - **Description:** Update details of a specific assistant using its ID.
+
+- **Delete Assistant by ID**
+ - **Method:** DELETE
+ - **Endpoint:** /assistant/{id}
+ - **Description:** Delete a specific assistant using its ID.
+
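+For example, the "Get Assistants" command above corresponds to a plain REST call you could also make yourself (a sketch; substitute your own Vapi API key):
+
+```bash
+curl 'https://api.vapi.ai/assistant' \
+  -H 'Authorization: Bearer YOUR_VAPI_API_KEY'
+```
+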
+#### Phone Call Management
+- **Get Phone Calls**
+ - **Method:** GET
+ - **Endpoint:** /call
+ - **Description:** Retrieve a list of all phone calls.
+
+- **Get Phone Call by ID**
+ - **Method:** GET
+ - **Endpoint:** /call/{id}
+ - **Description:** Retrieve details of a specific phone call using its ID.
+
+- **Create Phone Call**
+ - **Method:** POST
+ - **Endpoint:** /call/phone
+ - **Description:** Create a new phone call.
+
+- **Update Phone Call by ID**
+ - **Method:** PATCH
+ - **Endpoint:** /call/{id}
+ - **Description:** Update the details of a specific phone call by its ID.
+
+- **Delete Phone Call by ID**
+ - **Method:** DELETE
+ - **Endpoint:** /call/{id}
+ - **Description:** Delete a specific phone call by its ID.
+
+- **Get Call Logs**
+ - **Method:** GET
+ - **Endpoint:** /log
+ - **Description:** Retrieve call logs.
+
+#### Squad Management
+- **Get Squads**
+ - **Method:** GET
+ - **Endpoint:** /squad
+ - **Description:** Retrieve a list of all squads.
+
+- **Create Squad**
+ - **Method:** POST
+ - **Endpoint:** /squad
+ - **Description:** Create a new squad.
+
+- **Get Squad by ID**
+ - **Method:** GET
+ - **Endpoint:** /squad/{id}
+ - **Description:** Retrieve details of a specific squad using its ID.
+
+- **Update Squad by ID**
+ - **Method:** PATCH
+ - **Endpoint:** /squad/{id}
+ - **Description:** Update details of a specific squad using its ID.
+
+- **Delete Squad by ID**
+ - **Method:** DELETE
+ - **Endpoint:** /squad/{id}
+ - **Description:** Delete a specific squad using its ID.
+
+#### Metrics Management
+- **Get Metrics**
+ - **Method:** GET
+ - **Endpoint:** /metrics
+ - **Description:** Retrieve metrics data.
+
+#### Tool Management
+- **List Tools**
+ - **Method:** GET
+ - **Endpoint:** /tool
+ - **Description:** Retrieve a list of all tools.
+
+- **Create Tool**
+ - **Method:** POST
+ - **Endpoint:** /tool
+ - **Description:** Create a new tool.
+
+- **Get Tool by ID**
+ - **Method:** GET
+ - **Endpoint:** /tool/{id}
+ - **Description:** Retrieve details of a specific tool using its ID.
+
+- **Update Tool by ID**
+ - **Method:** PATCH
+ - **Endpoint:** /tool/{id}
+ - **Description:** Update details of a specific tool using its ID.
+
+- **Delete Tool by ID**
+ - **Method:** DELETE
+ - **Endpoint:** /tool/{id}
+ - **Description:** Delete a specific tool using its ID.
+
+#### Customer Management
+- **Get Customers**
+ - **Method:** GET
+ - **Endpoint:** /customer
+ - **Description:** Retrieve a list of all customers.
+
+- **Create Customer**
+ - **Method:** POST
+ - **Endpoint:** /customer
+ - **Description:** Create a new customer.
+
+- **Get Customer by ID**
+ - **Method:** GET
+ - **Endpoint:** /customer/{id}
+ - **Description:** Retrieve details of a specific customer using its ID.
+
+- **Update Customer by ID**
+ - **Method:** PATCH
+ - **Endpoint:** /customer/{id}
+ - **Description:** Update details of a specific customer using its ID.
+
+- **Delete Customer by ID**
+ - **Method:** DELETE
+ - **Endpoint:** /customer/{id}
+ - **Description:** Delete a specific customer using its ID.
+
+## Tips and Best Practices
+- **Be Specific:** The more specific your request, the better MyVapi can assist you.
+- **Explore Features:** Take time to explore all the features and find what works best for you.
+- **Regular Updates:** Keep your account information and settings up-to-date for the best experience.
+
+## Troubleshooting
+If you encounter any issues while using MyVapi, try the following steps:
+1. **Check Internet Connection:** Ensure you have a stable internet connection.
+2. **Clear Cache:** Sometimes clearing your browser cache can resolve issues.
+3. **Restart Browser:** Close and reopen your browser to refresh the session.
+
+## FAQ
+**Q:** Is MyVapi free to use?
+**A:** Yes. MyVapi is available to both free and paid ChatGPT accounts.
+
+**Q:** How secure is my data?
+**A:** We prioritize your data security and use advanced encryption methods to protect your information.
diff --git a/fern/community/outbound.mdx b/fern/community/outbound.mdx
new file mode 100644
index 0000000..75a4237
--- /dev/null
+++ b/fern/community/outbound.mdx
@@ -0,0 +1,98 @@
+---
+title: Outbound
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/outbound
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/podcast.mdx b/fern/community/podcast.mdx
new file mode 100644
index 0000000..110f27a
--- /dev/null
+++ b/fern/community/podcast.mdx
@@ -0,0 +1,34 @@
+---
+title: Podcast
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/podcast
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/snippets-sdks-tutorials.mdx b/fern/community/snippets-sdks-tutorials.mdx
new file mode 100644
index 0000000..56d7e2a
--- /dev/null
+++ b/fern/community/snippets-sdks-tutorials.mdx
@@ -0,0 +1,69 @@
+---
+title: Snippets & SDKs Tutorials
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/snippets-sdks-tutorials
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/special-mentions.mdx b/fern/community/special-mentions.mdx
new file mode 100644
index 0000000..bb36f1c
--- /dev/null
+++ b/fern/community/special-mentions.mdx
@@ -0,0 +1,66 @@
+---
+title: Special Mentions
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/special-mentions
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/squads.mdx b/fern/community/squads.mdx
new file mode 100644
index 0000000..8a5edc7
--- /dev/null
+++ b/fern/community/squads.mdx
@@ -0,0 +1,74 @@
+---
+title: Squads
+slug: community/squads
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/television.mdx b/fern/community/television.mdx
new file mode 100644
index 0000000..aa1f2ff
--- /dev/null
+++ b/fern/community/television.mdx
@@ -0,0 +1,34 @@
+---
+title: Television
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/television
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/community/usecase.mdx b/fern/community/usecase.mdx
new file mode 100644
index 0000000..c30fddc
--- /dev/null
+++ b/fern/community/usecase.mdx
@@ -0,0 +1,74 @@
+---
+title: Usecase
+subtitle: Videos showcasing Vapi out in the wild.
+slug: community/usecase
+---
+
+
+Here are some videos made by people in our community showcasing what Vapi can do:
+
+## Send Us Your Video
+
+Have a video showcasing Vapi that you want us to feature? Let us know:
+
+
+
+Send us your video showcasing what Vapi can do; we'd like to feature it.
+
+
diff --git a/fern/customization/custom-keywords.mdx b/fern/customization/custom-keywords.mdx
new file mode 100644
index 0000000..95e1450
--- /dev/null
+++ b/fern/customization/custom-keywords.mdx
@@ -0,0 +1,95 @@
+---
+title: Custom Keywords
+subtitle: Enhanced transcription accuracy guide
+slug: customization/custom-keywords
+---
+
+
+VAPI allows you to improve the accuracy of your transcriptions by leveraging Deepgram's keyword boosting feature. This is particularly useful when dealing with specialized terminology or uncommon proper nouns. By providing specific keywords to the Deepgram model, you can enhance transcription quality directly through VAPI.
+
+### Why Use Keyword Boosting?
+
+Keyword boosting is beneficial for:
+
+- Enhancing the recognition of specialized terms and proper nouns.
+- Improving transcription accuracy without the need for a custom-trained model.
+- Quickly updating the model's vocabulary with new or uncommon words.
+
+### Important Notes
+
+- Keywords should be uncommon words or proper nouns not frequently recognized by the model.
+- Custom model training is the most effective way to ensure accurate keyword recognition.
+- For more than 50 keywords, consider custom model training by contacting Deepgram.
+
+## Enabling Keyword Boosting in VAPI
+
+### API Call Integration
+
+To enable keyword boosting, you need to add a `keywords` parameter to your VAPI assistant's transcriber section. This parameter should include the keywords and their respective intensifiers.
+
+### Example of POST Request
+
+To create an assistant with keyword boosting enabled, you can make the following POST request to VAPI:
+
+```bash
+curl \
+ --request POST \
+ --header 'Authorization: Bearer ' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "name": "Emma",
+ "model": {
+ "model": "gpt-4o",
+ "provider": "openai"
+ },
+ "voice": {
+ "voiceId": "emma",
+ "provider": "azure"
+ },
+ "transcriber": {
+ "provider": "deepgram",
+ "model": "nova-2",
+ "language": "bg",
+ "smartFormat": true,
+ "keywords": [
+ "snuffleupagus:1"
+ ]
+ },
+ "firstMessage": "Hi, I am Emma, what is your name?",
+ "firstMessageMode": "assistant-speaks-first"
+ }' \
+ https://api.vapi.ai/assistant
+
+```
+
+In this configuration:
+
+- **name**: The name of the assistant.
+- **model**: Specifies the model and provider for the assistant's conversational capabilities.
+- **voice**: Specifies the voice and provider for the assistant's speech.
+- **transcriber**: Specifies Deepgram as the transcription provider, along with the model, language, smart formatting, and keywords for boosting.
+- **firstMessage**: The initial message the assistant will speak.
+- **firstMessageMode**: Specifies that the assistant speaks first.
+
+### Intensifiers
+
+Intensifiers are exponential factors that boost or suppress the likelihood of the specified keyword being recognized. The default intensifier is `1`. Higher values increase the likelihood, while `0` is equivalent to not specifying a keyword.
+
+- **Boosting example:** `snuffleupagus:5`
+- **Suppressing example:** `kansas:-10`
+
+In your assistant's transcriber config, these are passed as entries in the `keywords` array, e.g. `"keywords": ["snuffleupagus:5"]`.
+
+### Best Practices for Keyword Boosting
+
+1. **Send Uncommon Keywords:** Focus on keywords not successfully transcribed by the model.
+2. **Send Keywords Once:** Avoid repeating keywords.
+3. **Use Individual Keywords:** Prefer individual terms over phrases.
+4. **Use Proper Spelling:** Spell proper nouns as you want them to appear in transcripts.
+5. **Moderate Intensifiers:** Start with small increments to avoid false positives.
+6. **Custom Model Training:** For extensive vocabulary needs, consider custom model training.
+
+### Additional Resources
+
+For more detailed information on Deepgram's keyword boosting feature, refer to the Deepgram Keyword Boosting Documentation.
+
+By following these guidelines, you can effectively utilize Deepgram's keyword boosting feature within your VAPI assistant, ensuring enhanced transcription accuracy for specialized terminology and uncommon proper nouns.
\ No newline at end of file
diff --git a/fern/customization/custom-llm/fine-tuned-openai-models.mdx b/fern/customization/custom-llm/fine-tuned-openai-models.mdx
new file mode 100644
index 0000000..a939913
--- /dev/null
+++ b/fern/customization/custom-llm/fine-tuned-openai-models.mdx
@@ -0,0 +1,89 @@
+---
+title: Fine-tuned OpenAI models
+subtitle: Use Another LLM or Your Own Server
+slug: customization/custom-llm/fine-tuned-openai-models
+---
+
+
+Vapi supports using any OpenAI-compatible endpoint as the LLM. This includes services like [OpenRouter](https://openrouter.ai/), [AnyScale](https://www.anyscale.com/), [Together AI](https://www.together.ai/), or your own server.
+
+
+Common reasons to use a custom LLM include:
+
+- Using an open-source LLM, like Mixtral
+- Updating the context during the conversation
+- Customizing the messages before they're sent to the LLM
+
+
+## Using an LLM provider
+
+You'll first want to POST your API key via the `/credential` endpoint:
+
+```json
+{
+ "provider": "openrouter",
+ "apiKey": ""
+}
+```
+
+Then, you can create an assistant with the model provider:
+
+```json
+{
+ "name": "My Assistant",
+ "model": {
+ "provider": "openrouter",
+ "model": "cognitivecomputations/dolphin-mixtral-8x7b",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are an assistant."
+ }
+ ],
+ "temperature": 0.7
+ }
+}
+```
+## Using Fine-Tuned OpenAI Models
+
+To set up your fine-tuned OpenAI model, follow these steps (a request sketch follows the list):
+
+1. Set the custom LLM URL to `https://api.openai.com/v1`.
+2. Set the custom LLM key to your OpenAI API key.
+3. Set the model to your fine-tuned model's ID.
+4. Execute a PATCH request to the `/assistant` endpoint and ensure that `model.metadataSendMode` is set to off.
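+
+A minimal sketch of that request (the assistant ID and fine-tuned model ID are placeholders):
+
+```bash
+curl -X PATCH 'https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID' \
+  -H 'Authorization: Bearer YOUR_VAPI_API_KEY' \
+  -H 'Content-Type: application/json' \
+  --data-raw '{
+    "model": {
+      "provider": "custom-llm",
+      "url": "https://api.openai.com/v1",
+      "model": "ft:gpt-3.5-turbo:your-org:custom-suffix:id",
+      "metadataSendMode": "off"
+    }
+  }'
+```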
+
+## Using your server
+
+To set up your server to act as the LLM, you'll need to create an endpoint that is compatible with the [OpenAI Client](https://platform.openai.com/docs/api-reference/making-requests). For best results, your endpoint should also support streaming completions.
+
+If your server is making calls to an OpenAI-compatible API, you can pipe those responses directly back to Vapi.
+
+If you'd like your OpenAI-compatible endpoint to be authenticated, you can POST your server's API key and URL via the `/credential` endpoint:
+
+```json
+{
+ "provider": "custom-llm",
+ "apiKey": ""
+}
+```
+
+If your server isn't authenticated, you can skip this step.
+
+Then, you can create an assistant with the `custom-llm` model provider:
+
+```json
+{
+ "name": "My Assistant",
+ "model": {
+ "provider": "custom-llm",
+ "url": "",
+ "model": "my-cool-model",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are an assistant."
+ }
+ ],
+ "temperature": 0.7
+ }
+}
+```
diff --git a/fern/customization/custom-llm/using-your-server.mdx b/fern/customization/custom-llm/using-your-server.mdx
new file mode 100644
index 0000000..fecbcac
--- /dev/null
+++ b/fern/customization/custom-llm/using-your-server.mdx
@@ -0,0 +1,100 @@
+---
+title: 'Connecting Your Custom LLM to Vapi: A Comprehensive Guide'
+slug: customization/custom-llm/using-your-server
+---
+
+
+This guide provides a comprehensive walkthrough on integrating Vapi with OpenAI's gpt-3.5-turbo-instruct model using a custom LLM configuration. We'll leverage Ngrok to expose a local development environment for testing and demonstrate the communication flow between Vapi and your LLM.
+## Prerequisites
+
+- **Vapi Account**: Access to the Vapi Dashboard for configuration.
+- **OpenAI API Key**: With access to the gpt-3.5-turbo-instruct model.
+- **Python Environment**: Set up with the OpenAI library (`pip install openai`).
+- **Ngrok**: For exposing your local server to the internet.
+- **Code Reference**: Familiarize yourself with the `/openai-sse/chat/completions` endpoint function in the provided Github repository: [Server-Side Example Python Flask](https://github.com/VapiAI/server-side-example-python-flask/blob/main/app/api/custom_llm.py).
+
+## Step 1: Setting Up Your Local Development Environment
+
+**1. Create a Python Script (app.py):**
+
+```python
+from flask import Flask, request, jsonify
+import openai
+
+app = Flask(__name__)
+openai.api_key = "YOUR_OPENAI_API_KEY" # Replace with your actual API key
+
+@app.route("/chat/completions", methods=["POST"])
+def chat_completions():
+    data = request.get_json()
+    # Vapi sends an OpenAI-style chat payload, so the incoming message
+    # history (system prompt included) can be passed through directly
+    messages = data.get("messages", [])
+
+    response = openai.ChatCompletion.create(
+        # Note: gpt-3.5-turbo-instruct is served by OpenAI's completions API;
+        # this /chat/completions route uses a chat model instead
+        model="gpt-3.5-turbo",
+        messages=messages,
+    )
+    # The OpenAI chat response already has the structure Vapi expects back
+    return jsonify(response.to_dict_recursive())
+
+if __name__ == "__main__":
+ app.run(debug=True, port=5000) # You can adjust the port if needed
+```
+**2. Run the Script:**
+Execute the Python script with `python app.py` in your terminal. This starts the Flask server on the specified port (5000 in this example).
+
+**3. Expose with Ngrok:**
+Open a new terminal window and run `ngrok http 5000` (replace 5000 with your chosen port) to create a public URL that tunnels to your local server.
+
+## Step 2: Configuring Vapi with Custom LLM
+**1. Access Vapi Dashboard:**
+Log in to your Vapi account and navigate to the "Model" section.
+
+**2. Select Custom LLM:**
+Choose the "Custom LLM" option to set up the integration.
+
+**3. Enter Ngrok URL:**
+Paste the public URL generated by ngrok (e.g., https://your-unique-id.ngrok.io) into the endpoint field. This will be the URL Vapi uses to communicate with your local server.
+
+**4. Test the Connection:**
+Send a test message through the Vapi interface to ensure it reaches your local server and receives a response from the OpenAI API. Verify that the response is displayed correctly in Vapi.
+
+## Step 3: Understanding the Communication Flow
+**1. Vapi Sends POST Request:**
+When a user interacts with your Vapi application, Vapi sends a POST request containing conversation context and metadata to the configured endpoint (your ngrok URL).
+
+**2. Local Server Processes Request:**
+Your Python script receives the POST request and the chat_completions function is invoked.
+
+**3. Extract and Prepare Data:**
+The script parses the JSON data, extracts relevant information (prompt, conversation history), and builds the prompt for the OpenAI API call.
+
+**4. Call to OpenAI API:**
+The constructed prompt is sent to the gpt-3.5-turbo-instruct model using the openai.ChatCompletion.create method.
+
+**5. Receive and Format Response:**
+The response from OpenAI, containing the generated text, is received and formatted according to Vapi's expected structure.
+
+**6. Send Response to Vapi:**
+The formatted response is sent back to Vapi as a JSON object.
+
+**7. Vapi Displays Response:**
+Vapi receives the response and displays the generated text within the conversation interface to the user.
+
+By following these detailed steps and understanding the communication flow, you can successfully connect Vapi to OpenAI's gpt-3.5-turbo-instruct model and create powerful conversational experiences within your Vapi applications. The provided code example and reference serve as a starting point for you to build and customize your integration based on your specific needs.
+
+**Video Tutorial:**
+
\ No newline at end of file
diff --git a/fern/customization/custom-voices/custom-voice.mdx b/fern/customization/custom-voices/custom-voice.mdx
new file mode 100644
index 0000000..a4c1699
--- /dev/null
+++ b/fern/customization/custom-voices/custom-voice.mdx
@@ -0,0 +1,19 @@
+---
+title: Introduction
+subtitle: Use Custom Voice with your favourite provider instead of the preset ones.
+slug: customization/custom-voices/custom-voice
+---
+
+
+Vapi provides preset voices from a variety of providers. You can also create your own custom voices with any supported provider and use them with Vapi.
+
+You can update the `voice` property in the assistant configuration when you are creating the assistant to use your custom voice.
+
+```json
+{
+ "voice": {
+ "provider": "deepgram",
+ "voiceId": "your-voice-id"
+ }
+}
+```
diff --git a/fern/customization/custom-voices/elevenlabs.mdx b/fern/customization/custom-voices/elevenlabs.mdx
new file mode 100644
index 0000000..5270797
--- /dev/null
+++ b/fern/customization/custom-voices/elevenlabs.mdx
@@ -0,0 +1,32 @@
+---
+title: Elevenlabs
+subtitle: 'Quickstart: Setup Elevenlabs Custom Voice'
+slug: customization/custom-voices/elevenlabs
+---
+
+
+This guide outlines the procedure for integrating your cloned voice with 11labs through the VAPI platform.
+
+An API subscription is required for this process to work.
+
+To integrate your cloned voice with 11labs using the VAPI platform, follow these steps.
+
+1. **Obtain an 11labs API Subscription:** Visit the [11labs pricing page](https://elevenlabs.io/pricing) and subscribe to an API plan that suits your needs.
+2. **Retrieve Your API Key:** Go to the 'Profile + Keys' section on the 11labs website to get your API key.
+3. **Enter Your API Key in VAPI:** Navigate to the [VAPI Provider Key section](https://dashboard.vapi.ai/keys) and input your 11labs API key under the 11labs section.
+4. **Sync Your Cloned Voice:** From the [Voice Library](https://dashboard.vapi.ai/voice-library) in VAPI, select 11labs as your voice provider and click on "Sync with 11labs."
+5. **Search and Use Your Cloned Voice:** After syncing, you can search for your cloned voice within the voice library and directly use it with your assistant.
+
+By following these steps, you will successfully integrate your cloned voice from 11labs with VAPI.
+
+**Video Tutorial:**
+
diff --git a/fern/customization/custom-voices/playht.mdx b/fern/customization/custom-voices/playht.mdx
new file mode 100644
index 0000000..17aceb1
--- /dev/null
+++ b/fern/customization/custom-voices/playht.mdx
@@ -0,0 +1,30 @@
+---
+title: PlayHT
+subtitle: 'Quickstart: Setup PlayHT Custom Voice'
+slug: customization/custom-voices/playht
+---
+
+
+This guide outlines the procedure for integrating your cloned voice with Play.ht through the VAPI platform.
+
+An API subscription is required for this process to work.
+
+To integrate your cloned voice with [Play.ht](http://play.ht/) using the VAPI platform, follow these steps.
+
+1. **Obtain a Play.ht API Subscription:** Visit the [Play.ht pricing page](https://play.ht/studio/pricing) and subscribe to an API plan.
+2. **Retrieve Your User ID and Secret Key:** Go to the [API Access section](https://play.ht/studio/api-access) on Play.ht to get your User ID and Secret Key.
+3. **Enter Your API Keys in VAPI:** Navigate to the [VAPI Provider Key section](https://dashboard.vapi.ai/keys) and input your Play.ht API keys under the Play.ht section.
+4. **Sync Your Cloned Voice:** From the [Voice Library](https://dashboard.vapi.ai/voice-library) in VAPI, select Play.ht as your voice provider and click on "Sync with Play.ht."
+5. **Search and Use Your Cloned Voice:** After syncing, you can search for your cloned voice within the voice library and directly use it with your assistant.
+
+**Video Tutorial:**
+
\ No newline at end of file
diff --git a/fern/customization/jwt-authentication.mdx b/fern/customization/jwt-authentication.mdx
new file mode 100644
index 0000000..3abd1de
--- /dev/null
+++ b/fern/customization/jwt-authentication.mdx
@@ -0,0 +1,93 @@
+---
+title: JWT Authentication
+subtitle: Secure API authentication guide
+slug: customization/jwt-authentication
+---
+
+This documentation provides an overview of JWT (JSON Web Token) Authentication and demonstrates how to generate a JWT token and use it to authenticate API requests securely.
+
+## Prerequisites
+
+Before you proceed, ensure you have the following:
+
+- An environment that supports JWT generation and API calls (e.g., a programming language or framework)
+- An account with a service that requires JWT authentication
+- Environment variables set up for the necessary credentials (e.g., organization ID and private key, both can be found in your Vapi portal)
+
+## Generating a JWT Token
+
+The following steps outline how to generate a JWT token:
+
+1. **Define the Payload**: The payload contains the data you want to include in the token. In this case, it includes an `orgId`.
+2. **Get the Private Key**: The private key (provided by Vapi) is used to sign the token. Ensure it is securely stored, often in environment variables.
+3. **Set Token Options**: Define options for the token, such as the expiration time (`expiresIn`).
+4. **Generate the Token**: Use a JWT library or built-in functionality to generate the token with the payload, key, and options.
+
+### Example
+
+```js
+// Define the payload
+const payload = {
+ orgId: process.env.ORG_ID,
+};
+
+// Get the private key from environment variables
+const key = process.env.PRIVATE_KEY;
+
+// Define token options
+const options = {
+ expiresIn: '1h',
+};
+
+// Generate the token using a JWT library or built-in functionality
+const token = generateJWT(payload, key, options);
+```
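+
+If you're working in Node.js, one concrete option (an assumption, not a Vapi requirement) is the widely used `jsonwebtoken` package:
+
+```js
+const jwt = require('jsonwebtoken');
+
+// Sign the payload with the private key; `expiresIn` comes from the options above
+const token = jwt.sign(payload, key, options);
+```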
+
+### Explanation
+
+- **Payload**: The payload includes the `orgId`, representing the organization ID.
+- **Key**: The private key is used to sign the token, ensuring its authenticity.
+- **Options**: The `expiresIn` option specifies that the token will expire in 1 hour.
+- **Token Generation**: The `generateJWT` function (a placeholder for the actual JWT generation method) creates the token using the provided payload, key, and options.
+
+## Making an Authenticated API Request
+
+Once the token is generated, you can use it to make authenticated API requests. The following steps outline how to make an authenticated request:
+
+1. **Define the API Endpoint**: Specify the URL of the API you want to call.
+2. **Set the Headers**: Include the `Content-Type` and `Authorization` headers in your request. The `Authorization` header should include the generated JWT token prefixed with `Bearer`.
+3. **Make the API Call**: Use an appropriate method to send the request and handle the response.
+
+### Example
+
+```js
+async function getAssistants() {
+ const response = await fetch('https://api.vapi.ai/assistant', {
+ method: 'GET',
+ headers: {
+ 'Content-Type': 'application/json',
+ Authorization: `Bearer ${token}`,
+ },
+ });
+
+ const data = await response.json();
+ console.log(data);
+}
+
+getAssistants().catch(console.error);
+
+```
+
+### Explanation
+
+- **API Endpoint**: The URL of the API you want to call.
+- **Headers**: The `Content-Type` is set to `application/json`, and the `Authorization` header includes the generated JWT token.
+- **API Call**: The `getAssistants` function makes an asynchronous GET request to the specified API endpoint and logs the response.
+
+### Usage
+
+With the generated token, you can authenticate API requests to any endpoint requiring authentication. The token will be valid for the duration specified in the options (1 hour in this case).
+
+## Conclusion
+
+This documentation covered the basics of generating a JWT token and demonstrated how to use the token to make authenticated API requests. Ensure that your environment variables (e.g., `ORG_ID` and `PRIVATE_KEY`) are correctly set up before running the code.
diff --git a/fern/customization/knowledgebase.mdx b/fern/customization/knowledgebase.mdx
new file mode 100644
index 0000000..90ad733
--- /dev/null
+++ b/fern/customization/knowledgebase.mdx
@@ -0,0 +1,59 @@
+---
+title: Creating Custom Knowledge Bases for Your Voice AI Assistants
+subtitle: >-
+ Learn how to create and integrate custom knowledge bases into your voice AI
+ assistants.
+slug: customization/knowledgebase
+---
+
+
+
+## **What is Vapi's Knowledge Base?**
+Our Knowledge Base is a collection of custom documents that contain information on specific topics or domains. By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries.
+
+### **Why Use a Knowledge Base?**
+Using a Knowledge Base with your voice AI assistant offers several benefits:
+
+* **Improved accuracy**: By integrating custom documents into your assistant, you can ensure that it provides accurate and up-to-date information to users.
+* **Enhanced capabilities**: A Knowledge Base enables your assistant to answer complex queries and provide detailed responses to user inquiries.
+* **Customization**: With a Knowledge Base, you can tailor your assistant's responses to specific domains or topics, making it more effective and informative.
+
+## **How to Create a Knowledge Base**
+
+To create a Knowledge Base, follow these steps:
+
+### **Step 1: Upload Your Documents**
+
+Navigate to Overview > Documents and upload your custom documents in Markdown, PDF, plain text, or Microsoft Word (.doc and .docx) format to Vapi's Knowledge Base.
+
+
+
+### **Step 2: Create an Assistant**
+
+Create a new assistant in Vapi and, on the right sidebar menu, select the document you've just added to the Knowledge Base feature.
+
+
+
+### **Step 3: Configure Your Assistant**
+
+Customize your assistant's system prompt to utilize the Knowledge Base for responding to user queries.
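+
+For example, you might add a line like this to the system prompt (illustrative wording, not a required format):
+
+```text
+Use the attached knowledge base documents to answer product questions.
+If the answer is not in the documents, say that you don't know.
+```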
+
+## **Best Practices for Creating Effective Knowledge Bases**
+
+* **Organize Your documents**: Organize your documents by topic or category to ensure that your assistant can quickly retrieve relevant information.
+* **Use Clear and concise language**: Use clear and concise language in your documents to ensure that your assistant can accurately understand and respond to user queries.
+* **Keep your documents up-to-date**: Regularly update your documents to ensure that your assistant provides the most accurate and up-to-date information.
+
+
+ For more information on creating effective Knowledge Bases, check out our tutorial on [Best Practices for Knowledge Base Creation](https://youtu.be/i5mvqC5sZxU).
+
+
+By following these guidelines, you can create a comprehensive Knowledge Base that enhances the capabilities of your voice AI assistant and provides valuable information to users.
diff --git a/fern/customization/multilingual.mdx b/fern/customization/multilingual.mdx
new file mode 100644
index 0000000..8e6aad9
--- /dev/null
+++ b/fern/customization/multilingual.mdx
@@ -0,0 +1,57 @@
+---
+title: Multilingual
+subtitle: Learn how to set up and test multilingual support in Vapi.
+slug: customization/multilingual
+---
+
+
+Vapi's multilingual support is primarily facilitated through transcribers, which are part of the speech-to-text process. The pipeline consists of three key elements: text-to-speech, speech-to-text, and the LLM, which acts as the brain of the operation. Each of these elements can be customized using different providers.
+
+## Transcribers (Speech-to-Text)
+
+Currently, Vapi supports two providers for speech-to-text transcriptions:
+
+- `Deepgram` (Nova-family models)
+- `Talkscriber` (Whisper model)
+
+Each provider supports different languages. For more detailed information, you can visit your dashboard and navigate to the transcribers tab on the assistant page. Here, you can see the languages supported by each provider and the available models. **Note that not all models support all languages**. For specific details, you can refer to the documentation for the corresponding providers.
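+
+For example, a Spanish transcriber configuration might look like this (mirroring the transcriber block used elsewhere in these docs; swap in the model and language you need):
+
+```json
+{
+  "transcriber": {
+    "provider": "deepgram",
+    "model": "nova-2",
+    "language": "es"
+  }
+}
+```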
+
+## Voice (Text-to-Speech)
+
+Once you have set your transcriber and corresponding language, you can choose a voice for text-to-speech in that language. For example, you can choose a voice with a Spanish accent if needed.
+
+Vapi currently supports the following providers for text-to-speech:
+
+- `PlayHT`
+- `11labs`
+- `Rime-ai`
+- `Deepgram`
+- `OpenAI`
+- `Azure`
+- `Lmnt`
+- `Neets`
+
+Each provider offers varying degrees of language support. Azure, for instance, supports the most languages, with approximately 400 prebuilt voices across 140 languages and variants. You can also create your own custom voices with other providers.
+
+## Multilingual Support
+
+For multilingual support, you can choose providers like Eleven Labs or Azure, which have models and voices designed for this purpose. This allows your voice assistant to understand and respond in multiple languages, enhancing the user experience for non-English speakers.
+
+To set up multilingual support, you no longer need to specify the desired language when configuring the voice assistant. This configuration in the voice section is deprecated.
+
+Instead, you directly choose a voice that supports the desired language from your voice provider. This can be done when you are setting up or modifying your voice assistant.
+
+Here is an example of how to set up a voice assistant that speaks Spanish:
+
+```json
+{
+ "voice": {
+ "provider": "azure",
+ "voiceId": "es-ES-ElviraNeural"
+ }
+}
+```
+
+In this example, the voice `es-ES-ElviraNeural` from the provider `azure` supports Spanish. You can replace `es-ES-ElviraNeural` with the ID of any other voice that supports your desired language.
+
+By leveraging Vapi's multilingual support, you can make your voice assistant more accessible and user-friendly, reaching a wider audience and providing a better user experience.
diff --git a/fern/customization/provider-keys.mdx b/fern/customization/provider-keys.mdx
new file mode 100644
index 0000000..1c6090b
--- /dev/null
+++ b/fern/customization/provider-keys.mdx
@@ -0,0 +1,26 @@
+---
+title: Provider Keys
+subtitle: Bring your own API keys to Vapi.
+slug: customization/provider-keys
+---
+
+
+Have a custom model or voice with one of the providers? Or an enterprise account with volume pricing?
+
+No problem! You can bring your own API keys to Vapi. You can add them in the [Dashboard](https://dashboard.vapi.ai) under the **Provider Keys** tab. Once your API key is validated, you won't be charged when using that provider through Vapi. Instead, you'll be charged directly by the provider.
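+
+If you prefer the API, you can also store a provider key via the `/credential` endpoint. A minimal sketch, using Deepgram as an example provider:
+
+```json
+{
+  "provider": "deepgram",
+  "apiKey": "YOUR_DEEPGRAM_API_KEY"
+}
+```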
+
+## Transcription Providers
+
+Currently, the only available transcription provider is `deepgram`. To use a custom model, you can specify the deepgram model ID in the `transcriber.model` parameter of the [Assistant](/api-reference/assistants/create-assistant).
+
+## Model Providers
+
+We currently support any OpenAI-compatible endpoint. This includes services like [OpenRouter](https://openrouter.ai/), [AnyScale](https://www.anyscale.com/), [Together AI](https://www.together.ai/), or your own server.
+
+To use one of these providers, you can specify the `provider` and `model` in the `model` parameter of the [Assistant](/api-reference/assistants/create-assistant).
+
+You can find more details in the [Custom LLMs](/customization/custom-llm/fine-tuned-openai-models) section of the documentation.
+
+## Voice Providers
+
+All voice providers are supported. Once you've validated your API through the [Dashboard](https://dashboard.vapi.ai), any voice ID from your provider can be used in the `voice.voiceId` field of the [Assistant](/api-reference/assistants/create-assistant).
diff --git a/fern/customization/speech-configuration.mdx b/fern/customization/speech-configuration.mdx
new file mode 100644
index 0000000..28fe4eb
--- /dev/null
+++ b/fern/customization/speech-configuration.mdx
@@ -0,0 +1,35 @@
+---
+title: Speech Configuration
+subtitle: Timing control for assistant speech
+slug: customization/speech-configuration
+---
+
+
+The Speaking Plan and Stop Speaking Plan are essential configurations designed to optimize the timing of when the assistant begins and stops speaking during interactions with a customer. These plans ensure that the assistant does not interrupt the customer and also prevents awkward pauses that can occur if the assistant starts speaking too late. Adjusting these parameters helps tailor the assistant’s responsiveness to different conversational dynamics.
+
+**Note**: These configurations can currently only be set via the API. A configuration sketch follows the parameter descriptions below.
+
+## Start Speaking Plan
+
+- **Wait Time Before Speaking**: You can set how long the assistant waits before speaking after the customer finishes. The default is 0.4 seconds, but you can increase it if the assistant is speaking too soon, or decrease it if there’s too much delay.
+
+- **Smart Endpointing**: This feature uses advanced processing to detect when the customer has truly finished speaking, especially if they pause mid-thought. It’s off by default but can be turned on if needed.
+
+- **Transcription-Based Detection**: Customize how the assistant determines that the customer has stopped speaking based on what they’re saying. This offers more control over the timing.
+
+
+## Stop Speaking Plan
+
+- **Words to Stop Speaking**: Define how many words the customer needs to say before the assistant stops talking. If you want immediate reaction, set this to 0. Increase it to avoid interruptions by brief acknowledgments like "okay" or "right".
+
+- **Voice Activity Detection**: Adjust how long the customer needs to be speaking before the assistant stops. The default is 0.2 seconds, but you can tweak this to balance responsiveness and avoid false triggers.
+
+- **Pause Before Resuming**: Control how long the assistant waits before starting to talk again after being interrupted. The default is 1 second, but you can adjust it depending on how quickly the assistant should resume.
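+
+A matching sketch for the Stop Speaking Plan, using the defaults described above (again, field names are indicative — check the API reference):
+
+```json
+{
+  "stopSpeakingPlan": {
+    "numWords": 0,
+    "voiceSeconds": 0.2,
+    "backoffSeconds": 1
+  }
+}
+```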
+
+## Considerations for Configuration
+
+- **Customer Style**: Think about whether the customer pauses mid-thought or provides continuous speech. Adjust wait times and enable smart endpointing as needed.
+
+- **Background Noise**: If there’s a lot of background noise, you may need to tweak the settings to avoid false triggers.
+
+- **Conversation Flow**: Aim for a balance where the assistant is responsive but not intrusive. Test different settings to find the best fit for your needs.
diff --git a/fern/docs.yml b/fern/docs.yml
new file mode 100644
index 0000000..af3818a
--- /dev/null
+++ b/fern/docs.yml
@@ -0,0 +1,352 @@
+instances:
+ - url: vapi.docs.buildwithfern.com
+
+title: Vapi
+favicon: static/images/favicon.png
+logo:
+ light: static/images/logo/logo-light.png
+ dark: static/images/logo/logo-dark.png
+ href: /
+ height: 28
+colors:
+ accentPrimary:
+ dark: '#94ffd2'
+ light: '#37aa9d'
+ background:
+ dark: '#000000'
+ light: '#FFFFFF'
+experimental:
+ mdx-components:
+ - snippets
+css: assets/styles.css
+navbar-links:
+ - type: minimal
+ text: Home
+ href: https://vapi.ai/
+ - type: minimal
+ text: Pricing
+ href: /pricing
+ - type: minimal
+ text: Status
+ href: https://status.vapi.ai/
+ - type: minimal
+ text: Changelog
+ href: /changelog
+ - type: minimal
+ text: Support
+ href: /support
+ - type: filled
+    text: Dashboard
+    rightIcon: fa-solid fa-chevron-right
+    href: https://dashboard.vapi.ai
+    rounded: true
+tabs:
+ api-reference:
+ slug: api-reference
+ display-name: API Reference
+ documentation:
+ display-name: Documentation
+ slug: documentation
+layout:
+ tabs-placement: header
+ searchbar-placement: header
+navigation:
+ - tab: documentation
+ layout:
+ - section: ''
+ contents:
+ - page: Introduction
+ path: introduction.mdx
+ - section: General
+ contents:
+ - section: How Vapi Works
+ contents:
+ - page: Core Models
+ path: quickstart.mdx
+ - page: Orchestration Models
+ path: how-vapi-works.mdx
+ - page: Knowledge Base
+ path: knowledgebase.mdx
+ - section: Pricing
+ contents:
+ - page: Overview
+ path: pricing.mdx
+ - page: Cost Routing
+ path: billing/cost-routing.mdx
+ - page: Billing Limits
+ path: billing/billing-limits.mdx
+ - page: Estimating Costs
+ path: billing/estimating-costs.mdx
+ - page: Billing Examples
+ path: billing/examples.mdx
+ - section: Enterprise
+ contents:
+ - page: Vapi Enterprise
+ path: enterprise/plans.mdx
+ - page: On-Prem Deployments
+ path: enterprise/onprem.mdx
+ - page: Changelog
+ path: changelog.mdx
+ - page: Support
+ path: support.mdx
+ - page: Status
+ path: status.mdx
+ - section: Quickstart
+ contents:
+ - page: Dashboard
+ path: quickstart/dashboard.mdx
+ - page: Inbound Calling
+ path: quickstart/inbound.mdx
+ - page: Outbound Calling
+ path: quickstart/outbound.mdx
+ - page: Web Calling
+ path: quickstart/web.mdx
+ - section: Client SDKs
+ contents:
+ - page: Overview
+ path: sdks.mdx
+ - page: Web SDK
+ path: sdk/web.mdx
+ - page: Web Snippet
+ path: examples/voice-widget.mdx
+ - section: Examples
+ contents:
+ - page: Outbound Sales
+ path: examples/outbound-sales.mdx
+ - page: Inbound Support
+ path: examples/inbound-support.mdx
+ - page: Pizza Website
+ path: examples/pizza-website.mdx
+ - page: Python Outbound Snippet
+ path: examples/outbound-call-python.mdx
+ - page: Code Resources
+ path: resources.mdx
+ - section: Customization
+ contents:
+ - page: Provider Keys
+ path: customization/provider-keys.mdx
+ - section: Custom LLM
+ contents:
+ - page: Fine-tuned OpenAI models
+ path: customization/custom-llm/fine-tuned-openai-models.mdx
+ - page: Custom LLM
+ path: customization/custom-llm/using-your-server.mdx
+ - section: Custom Voices
+ contents:
+ - page: Introduction
+ path: customization/custom-voices/custom-voice.mdx
+ - page: Elevenlabs
+ path: customization/custom-voices/elevenlabs.mdx
+ - page: PlayHT
+ path: customization/custom-voices/playht.mdx
+ - page: Custom Keywords
+ path: customization/custom-keywords.mdx
+ - page: Knowledge Base
+ path: customization/knowledgebase.mdx
+ - page: Multilingual
+ path: customization/multilingual.mdx
+ - page: JWT Authentication
+ path: customization/jwt-authentication.mdx
+ - page: Speech Configuration
+ path: customization/speech-configuration.mdx
+ - section: Core Concepts
+ contents:
+ - section: Assistants
+ contents:
+ - page: Introduction
+ path: assistants.mdx
+ - page: Function Calling
+ path: assistants/function-calling.mdx
+ - page: Persistent Assistants
+ path: assistants/persistent-assistants.mdx
+ - page: Dynamic Variables
+ path: assistants/dynamic-variables.mdx
+ - page: Call Analysis
+ path: assistants/call-analysis.mdx
+ - page: Background Messages
+ path: assistants/background-messages.mdx
+ - section: Blocks
+ contents:
+ - page: Introduction
+ path: blocks.mdx
+ - page: Steps
+ path: blocks/steps.mdx
+ - page: Block Types
+ path: blocks/block-types.mdx
+ - section: Server URL
+ contents:
+ - page: Introduction
+ path: server-url.mdx
+ - page: Setting Server URLs
+ path: server-url/setting-server-urls.mdx
+ - page: Server Events
+ path: server-url/events.mdx
+ - page: Developing Locally
+ path: server-url/developing-locally.mdx
+ - section: Phone Calling
+ contents:
+ - page: Introduction
+ path: phone-calling.mdx
+ - section: Squads
+ contents:
+ - page: Introduction
+ path: squads.mdx
+ - page: Example
+ path: squads-example.mdx
+ - section: Advanced Concepts
+ contents:
+ - section: Calls
+ contents:
+ - page: Call Forwarding
+ path: call-forwarding.mdx
+ - page: Ended Reason
+ path: calls/call-ended-reason.mdx
+ - page: SIP
+ path: advanced/calls/sip.mdx
+ - page: Live Call Control
+ path: calls/call-features.mdx
+ - page: Make & GHL Integration
+ path: GHL.mdx
+ - page: Tools Calling
+ path: tools-calling.mdx
+ - page: Prompting Guide
+ path: prompting-guide.mdx
+ - section: Glossary
+ contents:
+ - page: Definitions
+ path: glossary.mdx
+ - page: FAQ
+ path: faq.mdx
+ - section: Community
+ contents:
+ - section: Videos
+ contents:
+ - page: Appointment Scheduling
+ path: community/appointment-scheduling.mdx
+ - page: Comparisons
+ path: community/comparisons.mdx
+ - page: Conferences
+ path: community/conferences.mdx
+ - page: Demos
+ path: community/demos.mdx
+ - page: GoHighLevel
+ path: community/ghl.mdx
+ - page: Guide
+ path: community/guide.mdx
+ - page: Inbound
+ path: community/inbound.mdx
+ - page: Knowledgebase
+ path: community/knowledgebase.mdx
+ - page: Outbound
+ path: community/outbound.mdx
+ - page: Podcast
+ path: community/podcast.mdx
+ - page: Snippets & SDKs Tutorials
+ path: community/snippets-sdks-tutorials.mdx
+ - page: Special Mentions
+ path: community/special-mentions.mdx
+ - page: Squads
+ path: community/squads.mdx
+ - page: Television
+ path: community/television.mdx
+ - page: Usecase
+ path: community/usecase.mdx
+ - page: My Vapi
+ path: community/myvapi.mdx
+ - page: Expert Directory
+ path: community/expert-directory.mdx
+ - section: Providers
+ contents:
+ - section: Voice
+ contents:
+ - page: ElevenLabs
+ path: providers/voice/elevenlabs.mdx
+ - page: PlayHT
+ path: providers/voice/playht.mdx
+ - page: Azure
+ path: providers/voice/azure.mdx
+ - page: OpenAI
+ path: providers/voice/openai.mdx
+ - page: Neets
+ path: providers/voice/neets.mdx
+ - page: Cartesia
+ path: providers/voice/cartesia.mdx
+ - page: LMNT
+ path: providers/voice/imnt.mdx
+ - page: RimeAI
+ path: providers/voice/rimeai.mdx
+ - page: Deepgram
+ path: providers/voice/deepgram.mdx
+ - section: Models
+ contents:
+ - page: OpenAI
+ path: providers/model/openai.mdx
+ - page: Groq
+ path: providers/model/groq.mdx
+ - page: DeepInfra
+ path: providers/model/deepinfra.mdx
+ - page: Perplexity
+ path: providers/model/perplexity.mdx
+ - page: TogetherAI
+ path: providers/model/togetherai.mdx
+ - page: OpenRouter
+ path: providers/model/openrouter.mdx
+ - section: Transcription
+ contents:
+ - page: Deepgram
+ path: providers/transcriber/deepgram.mdx
+ - page: Gladia
+ path: providers/transcriber/gladia.mdx
+ - page: Talkscriber
+ path: providers/transcriber/talkscriber.mdx
+ - page: Voiceflow
+ path: providers/voiceflow.mdx
+ - section: Security & Privacy
+ contents:
+ - page: HIPAA Compliance
+ path: security-and-privacy/hipaa.mdx
+ - page: SOC-2 Compliance
+ path: security-and-privacy/soc.mdx
+ - page: Privacy Policy
+ path: security-and-privacy/privacy-policy.mdx
+ - page: Terms of Service
+ path: security-and-privacy/tos.mdx
+ - tab: api-reference
+ layout:
+ - api: API Reference
+ flattened: true
+ paginated: true
+ snippets:
+ typescript: "@vapi/server-sdk"
+ python: "vapi_server_sdk"
+ # - section: Assistants
+ # contents: []
+ # - section: Calls
+ # contents: []
+ # - section: Phone Numbers
+ # contents: []
+ # - section: Files
+ # contents: []
+ # - section: Squads
+ # contents: []
+ # - section: Tools
+ # contents: []
+ # - section: Analytics
+ # contents: []
+ - section: Server and Client
+ contents:
+ - page: ServerMessage
+ path: api-reference/messages/server-message.mdx
+ - page: ServerMessageResponse
+ path: api-reference/messages/server-message-response.mdx
+ - page: ClientMessage
+ path: api-reference/messages/client-message.mdx
+ - page: ClientInboundMessage
+ path: api-reference/messages/client-inbound-message.mdx
+ - section: ''
+ contents:
+ - page: Swagger
+ path: api-reference/swagger.mdx
+ - page: OpenAPI
+ path: api-reference/openapi.mdx
+
\ No newline at end of file
diff --git a/fern/enterprise/onprem.mdx b/fern/enterprise/onprem.mdx
new file mode 100644
index 0000000..657edde
--- /dev/null
+++ b/fern/enterprise/onprem.mdx
@@ -0,0 +1,34 @@
+---
+title: On-Prem Deployments
+subtitle: Deploy Vapi in your private cloud.
+slug: enterprise/onprem
+---
+
+
+Vapi On-Prem lets you deploy Vapi's best-in-class enterprise voice AI infrastructure directly in your own cloud. It can be deployed in a dockerized format on any cloud provider, in any geographic location, running on your GPUs.
+
+With On-Prem, your audio and text data stays in your cloud and never passes through Vapi's servers. If you're handling sensitive data (e.g. health, financial, or legal) and are under strict data requirements, you should consider deploying on-prem.
+
+Your device regularly sends performance and usage information to Vapi's cloud. This data helps adjust your device's GPU resources and is also used for billing. All network traffic from your device is tracked in an audit log, letting your engineering or security team see what the device is doing at all times.
+
+## Frequently Asked Questions
+
+#### Can the appliance adjust to my needs?
+
+Yes, the Vapi On-Prem appliance automatically adjusts its GPU resources to handle your workload as required by our service agreement. It can take a few minutes to adjust to changes in your workload. If you need quicker adjustments, you might want to ask for more GPUs by contacting support@vapi.ai.
+
+#### What if I can’t get enough GPUs from my cloud provider?
+
+If you're struggling to get more GPUs from your provider, contact support@vapi.ai for help.
+
+#### Can I access Vapi's AI models?
+
+No, our AI models are on secure machines in your Isolated VPC and you can’t log into these machines or check their files.
+
+#### How can I make sure my data stays within my cloud?
+
+Your device operates in VPCs that you control. You can check the network settings and firewall rules, and look at traffic logs to make sure everything is as it should be. The Control VPC uses open source components, allowing you to make sure the policies are being followed. Performance data and model updates are sent to Vapi, but all other traffic leaving your device is logged, except for the data sent back to your API clients.
+
+## Contact us
+
+For more information about Vapi On-Prem, please contact us at support@vapi.ai
diff --git a/fern/enterprise/plans.mdx b/fern/enterprise/plans.mdx
new file mode 100644
index 0000000..4f20e11
--- /dev/null
+++ b/fern/enterprise/plans.mdx
@@ -0,0 +1,23 @@
+---
+title: Vapi Enterprise
+subtitle: Build and scale with Vapi.
+slug: enterprise/plans
+---
+
+
+If you're building a production application on Vapi, we can help you every step of the way from idea to full-scale deployment.
+
+On the Pay-As-You-Go plan, there is a limit of **10 concurrent calls**. On Enterprise, we reserve GPUs for you on our Enterprise cluster so you can scale up to **millions of calls**.
+
+#### Enterprise Plans include:
+
+- Reserved concurrency and higher rate limits
+- Hands-on 24/7 support
+- Shared Slack channel with our team
+- Included minutes with volume pricing
+- Calls with our engineering team 2-3 times per week
+- Access to the Vapi SIP trunk for telephony
+
+## Contact us
+
+To get started on Vapi Enterprise, [fill out this form](https://book.vapi.ai).
diff --git a/fern/examples/inbound-support.mdx b/fern/examples/inbound-support.mdx
new file mode 100644
index 0000000..8b2d869
--- /dev/null
+++ b/fern/examples/inbound-support.mdx
@@ -0,0 +1,148 @@
+---
+title: Inbound Support Example ⚙️
+subtitle: Let's build a technical support assistant that remembers where we left off.
+slug: examples/inbound-support
+---
+
+
+We want a phone number we can call to get technical support. We want the assistant to use a provided set of troubleshooting guides to help walk the caller through solving their issue.
+
+As a bonus, we also want the assistant to remember, based on the caller's phone number, where we left off if we get disconnected.
+
+
+
+ We'll start by taking a look at the [Assistant API
+ reference](/api-reference/assistants/create-assistant) and define our
+ assistant:
+
+ ```json
+  {
+    "transcriber": {
+      "provider": "deepgram",
+      "keywords": ["iPhone:1", "MacBook:1.5", "iPad:1", "iMac:0.8", "Watch:1", "TV:1", "Apple:2"]
+    },
+    "model": {
+      "provider": "openai",
+      "model": "gpt-3.5-turbo",
+      "messages": [
+        {
+          "role": "system",
+          "content": "You're a technical support assistant. You're helping a customer troubleshoot their Apple device. You can ask the customer questions, and you can use the following troubleshooting guides to help the customer solve their issue: ..."
+        }
+      ]
+    },
+    "forwardingPhoneNumber": "+16054440129",
+    "firstMessage": "Hey, I'm an A.I. assistant for Apple. I can help you troubleshoot your Apple device. What's the issue?",
+    "recordingEnabled": true
+  }
+ ```
+
+ - `transcriber` - We're defining this to make sure the transcriber picks up the custom words related to our devices.
+ - `model` - We're using the OpenAI GPT-3.5-turbo model. It's much faster and preferred if we don't need GPT-4.
+ - `messages` - We're defining the assistant's instructions for how to run the call.
+ - `forwardingPhoneNumber` - Since we've added this, the assistant will be provided the [transferCall](/assistants#transfer-call) function to use if the caller asks to be transferred to a person.
+ - `firstMessage` - This is the first message the assistant will say when the user picks up.
+ - `recordingEnabled` - We're recording the call so we can hear the conversation later.
+
+
+
+  Since we want the assistant to remember where we left off, its configuration is going to change based on the caller. So, we're not going to use [persistent assistants](/assistants/persistent-assistants).
+
+ For this example, we're going to store the conversation on our server between calls and use the [Server URL's `assistant-request`](/server-url#retrieving-assistants) to fetch a new configuration based on the caller every time someone calls.
+
+
+
+ We'll buy a phone number for inbound calls using the [Phone Numbers API](/api-reference/phone-numbers/buy-phone-number).
+
+ ```json
+ {
+ "id": "c86b5177-5cd8-447f-9013-99e307a8a7bb",
+ "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb",
+ "number": "+11234567890",
+ "createdAt": "2023-09-29T21:44:37.946Z",
+    "updatedAt": "2023-12-08T00:57:24.706Z"
+ }
+ ```
+
+
+
+ When someone calls our number, we want to fetch the assistant configuration from our server. We'll use the [Server URL's `assistant-request`](/server-url#retrieving-assistants) to do this.
+
+ First, we'll create an endpoint on our server for Vapi to hit. It'll receive messages as shown in the [Assistant Request](/server-url#retrieving-assistants-calling) docs. Once created, we'll add that endpoint URL to the **Server URL** field in the Account page on the [Vapi Dashboard](https://dashboard.vapi.ai).
+
+
+
+ We'll want to save the conversation at the end of the call for the next time they call. We'll use the [Server URL's `end-of-call-report`](/server-url#end-of-call-report) message to do this.
+
+ At the end of each call, we'll get a message like this:
+
+ ```json
+ {
+ "message": {
+ "type": "end-of-call-report",
+ "endedReason": "hangup",
+ "call": { Call Object },
+ "recordingUrl": "https://vapi-public.s3.amazonaws.com/recordings/1234.wav",
+ "summary": "The user mentioned they were having an issue with their iPhone restarting randomly. They restarted their phone, but the issue persisted. They mentioned they were using an iPhone 12 Pro Max. They mentioned they were using iOS 15.",
+ "transcript": "Hey, I'm an A.I. assistant for Apple...",
+      "messages": [
+        {
+          "role": "assistant",
+          "message": "Hey, I'm an A.I. assistant for Apple. I can help you troubleshoot your Apple device. What's the issue?"
+        },
+        {
+          "role": "user",
+          "message": "Yeah I'm having an issue with my iPhone restarting randomly."
+        },
+        ...
+      ]
+ }
+ }
+ ```
+
+ We'll save the `call.customer.number` and `summary` fields to our database for the next time they call.
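+
+  A minimal sketch of the record we might persist, keyed by the caller's number (the exact shape is up to your application):
+
+  ```json
+  {
+    "customerNumber": "+11234567890",
+    "summary": "The user mentioned they were having an issue with their iPhone restarting randomly..."
+  }
+  ```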
+
+
+ When our number receives a call, Vapi will also hit our server's endpoint with a message like this:
+
+ ```json
+ {
+ "message": {
+ "type": "assistant-request",
+      "call": { Call Object }
+ }
+ }
+ ```
+
+ We'll check our database to see if we have a conversation for this caller. If we do, we'll create an assistant configuration like in Step 1 and respond with it:
+
+ ```json
+ {
+ "assistant": {
+ ...
+ "model": {
+ "provider": "openai",
+ "model": "gpt-4",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You're a technical support assistant. Here's where we left off: ..."
+ }
+ ]
+ },
+ ...
+ }
+ }
+ ```
+
+ If we don't, we'll just respond with the assistant configuration from Step 1.
+
+
+
+
+ We'll call our number and see if it works. Give it a call, and tell it you're having an issue with your iPhone restarting randomly.
+
+ Hang up, and call back. Then ask what the issue was. The assistant should remember where we left off.
+
+
+
diff --git a/fern/examples/outbound-call-python.mdx b/fern/examples/outbound-call-python.mdx
new file mode 100644
index 0000000..42e5738
--- /dev/null
+++ b/fern/examples/outbound-call-python.mdx
@@ -0,0 +1,56 @@
+---
+title: Outbound Calls from Python 📞
+subtitle: Some sample code for placing an outbound call using Python
+slug: examples/outbound-call-python
+---
+
+
+```python
+import requests
+
+# Your Vapi API Authorization token
+auth_token = ''
+# The Phone Number ID and the Customer details for the call
+phone_number_id = ''
+customer_number = "+14151231234"
+
+# Create the header with Authorization token
+headers = {
+ 'Authorization': f'Bearer {auth_token}',
+ 'Content-Type': 'application/json',
+}
+
+# Create the data payload for the API request
+data = {
+ 'assistant': {
+ "firstMessage": "Hey, what's up?",
+ "model": {
+ "provider": "openai",
+ "model": "gpt-3.5-turbo",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are an assistant."
+ }
+ ]
+ },
+ "voice": "jennifer-playht"
+ },
+ 'phoneNumberId': phone_number_id,
+ 'customer': {
+ 'number': customer_number,
+ },
+}
+
+# Make the POST request to Vapi to create the phone call
+response = requests.post(
+ 'https://api.vapi.ai/call/phone', headers=headers, json=data)
+
+# Check if the request was successful and print the response
+if response.status_code == 201:
+ print('Call created successfully')
+ print(response.json())
+else:
+ print('Failed to create call')
+ print(response.text)
+```
diff --git a/fern/examples/outbound-sales.mdx b/fern/examples/outbound-sales.mdx
new file mode 100644
index 0000000..a2e7692
--- /dev/null
+++ b/fern/examples/outbound-sales.mdx
@@ -0,0 +1,148 @@
+---
+title: Outbound Sales Example 📞
+subtitle: Let's build an outbound sales agent that can schedule appointments.
+slug: examples/outbound-sales
+---
+
+
+We want this agent to be able to call a list of leads and schedule appointments. We'll create our assistant, create a phone number for it, then we'll configure our server for function calling to book the appointments.
+
+
+
+ We'll start by taking a look at the [Assistant API
+ reference](/api-reference/assistants/create-assistant) and define our
+ assistant:
+
+ ```json
+ {
+    "transcriber": {
+ "provider": "deepgram",
+ "keywords": ["Bicky:1"]
+ },
+ "model": {
+ "provider": "openai",
+ "model": "gpt-4",
+ "messages": [
+ {
+ "role": "system",
+          "content": "You're a sales agent for Bicky Realty. You're calling a list of leads to schedule appointments to show them houses..."
+ }
+ ],
+ "functions": [
+ {
+ "name": "bookAppointment",
+ "description": "Used to book the appointment.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "datetime": {
+ "type": "string",
+ "description": "The date and time of the appointment in ISO format."
+ }
+ }
+ }
+ }
+ ]
+ },
+ "voice": {
+ "provider": "openai",
+ "voiceId": "onyx"
+ },
+ "forwardingPhoneNumber": "+16054440129",
+ "voicemailMessage": "Hi, this is Jennifer from Bicky Realty. We were just calling to let you know...",
+    "firstMessage": "Hi, this is Jennifer from Bicky Realty. We're calling to schedule an appointment to show you a house. When would be a good time for you?",
+ "endCallMessage": "Thanks for your time.",
+ "endCallFunctionEnabled": true,
+ "recordingEnabled": false,
+    "recordingEnabled": false
+ ```
+ Let's break this down:
+  - `transcriber` - We're defining this to make sure the transcriber picks up the custom word "Bicky".
+  - `model` - We're using the OpenAI GPT-4 model, which is better at function calling.
+  - `messages` - We're defining the assistant's instructions for how to run the call.
+  - `functions` - We're providing a `bookAppointment` function with a `datetime` parameter. The assistant can call this during the conversation to book the appointment.
+ - `voice` - We're using the Onyx voice from OpenAI.
+ - `forwardingPhoneNumber` - Since we've added this, the assistant will be provided the [transferCall](/assistants#transfer-call) function to use.
+ - `voicemailMessage` - If the call goes to voicemail, this message will be played.
+ - `firstMessage` - This is the first message the assistant will say when the user picks up.
+  - `endCallMessage` - This is the message the assistant will say when it decides to hang up.
+ - `endCallFunctionEnabled` - This will give the assistant the [endCall](/assistants#end-call) function.
+ - `recordingEnabled` - We've disabled recording, since we don't have the user's consent to record the call.
+
+ We'll then make a POST request to the [Create Assistant](/api-reference/assistants/create-assistant) endpoint to create the assistant.
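+
+  The response will echo back our configuration along with an `id` for the assistant (sketch below); we'll pass this as the `assistantId` when placing calls in the last step.
+
+  ```json
+  {
+    "id": "d87b5177-5cd8-447f-9013-99e307a8a7bb",
+    "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb",
+    ...
+  }
+  ```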
+
+
+
+ We'll buy a phone number for outbound calls using the [Phone Numbers API](/phone-calling#set-up-a-phone-number).
+
+ ```json
+ {
+ "id": "c86b5177-5cd8-447f-9013-99e307a8a7bb",
+ "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb",
+ "number": "+11234567890",
+ "createdAt": "2023-09-29T21:44:37.946Z",
+    "updatedAt": "2023-12-08T00:57:24.706Z"
+ }
+ ```
+
+  Great, let's take note of that `id` field; we'll need it later.
+
+
+
+ When the assistant calls that `bookAppointment` function, we'll want to handle that function call and actually book the appointment. We also want to let the user know if booking the appointment was unsuccessful.
+
+ First, we'll create an endpoint on our server for Vapi to hit. It'll receive messages as shown in the [Function Calling](/server-url#function-calling) docs. Once created, we'll add that endpoint URL to the **Server URL** field in the Account page on the [Vapi Dashboard](https://dashboard.vapi.ai).
+
+
+
+ So now, when the assistant decides to call `bookAppointment`, our server will get something like this:
+
+ ```json
+ {
+ "message": {
+ "type": "function-call",
+ "call": { Call Object },
+ "functionCall": {
+ "name": "bookAppointment",
+ "parameters": "{ \"datetime\": \"2023-09-29T21:44:37.946Z\"}"
+ }
+ }
+ }
+ ```
+
+ We'll do our own logic to book the appointment, then we'll respond to the request with the result to let the assistant know it was booked:
+
+ ```json
+ { "result": "The appointment was booked successfully." }
+ ```
+
+ or, if it failed:
+
+ ```json
+ { "result": "The appointment time is unavailable, please try another time." }
+ ```
+
+ So, when the assistant calls this function, these results will be appended to the conversation, and the assistant will respond to the user knowing the result.
+
+ Great, now we're ready to start calling leads!
+
+
+
+ We'll use the [Create Phone Call](/api-reference/calls/create-phone-call) endpoint to place a call to a lead:
+
+ ```json
+ {
+ "phoneNumberId": "c86b5177-5cd8-447f-9013-99e307a8a7bb",
+ "assistantId": "d87b5177-5cd8-447f-9013-99e307a8a7bb",
+ "customer": {
+ "number": "+11234567890"
+ }
+ }
+ ```
+
+ Since we also defined a `forwardingPhoneNumber`, when the user asks to speak to a human, the assistant will transfer the call to that number automatically.
+
+ We can then check the [Dashboard](https://dashboard.vapi.ai) to see the call logs and read the transcripts.
+
+
+
diff --git a/fern/examples/pizza-website.mdx b/fern/examples/pizza-website.mdx
new file mode 100644
index 0000000..0f52509
--- /dev/null
+++ b/fern/examples/pizza-website.mdx
@@ -0,0 +1,165 @@
+---
+title: Pizza Website Example 🍕
+subtitle: Let's build a pizza ordering assistant for our website.
+slug: examples/pizza-website
+---
+
+
+In this example, we'll be using the [Web SDK](https://github.com/VapiAI/web) to create an assistant that can take a pizza order. Since all the [Client SDKs](/sdks) have equivalent functionality, you can use this example as a guide for any Vapi client.
+
+We want to add a button to the page to start a call, update our UI with the call status, and display what the user's saying while they say it. When the user mentions a topping, we should add it to the pizza. When they're done, we should redirect them to checkout.
+
+
+
+ We'll start by taking a look at the [Assistant API
+ reference](/api-reference/assistants/create-assistant) and define our
+ assistant:
+
+ ```json
+ {
+ "model": {
+ "provider": "openai",
+ "model": "gpt-4",
+ "messages": [
+ {
+ "role": "system",
+ "content": "You're a pizza ordering assistant. The user will ask for toppings, you'll add them. When they're done, you'll redirect them to checkout."
+ }
+ ],
+ "functions": [
+ {
+ "name": "addTopping",
+ "description": "Used to add a topping to the pizza.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "topping": {
+ "type": "string",
+ "description": "The name of the topping. For example, 'pepperoni'."
+ }
+ }
+ }
+ },
+ {
+ "name": "goToCheckout",
+ "description": "Redirects the user to checkout and order their pizza.",
+ "parameters": {"type": "object", "properties": {}}
+ }
+ ]
+ },
+    "firstMessage": "Hi, I'm the pizza ordering assistant. What toppings would you like?"
+ }
+ ```
+ Let's break this down:
+ - `model` - We're using the OpenAI GPT-4 model, which is better at function calling.
+ - `messages` - We're defining the assistant's instructions for how to run the call.
+  - `functions` - We're providing an `addTopping` function with a `topping` parameter. The assistant can call this during the conversation to add a topping. We're also adding `goToCheckout`, with an empty parameters object. The assistant can call this to redirect the user to checkout.
+ - `firstMessage` - This is the first message the assistant will say when the user starts the call.
+
+ We'll then make a POST request to the [Create Assistant](/api-reference/assistants/create-assistant) endpoint to create the assistant.
+
+
+
+ We'll follow the `README` for the [Web SDK](https://github.com/VapiAI/web) to get it installed.
+
+ We'll then get our **Public Key** from the [Vapi Dashboard](https://dashboard.vapi.ai) and initialize the SDK:
+
+ ```js
+ import Vapi from '@vapi-ai/web';
+
+ const vapi = new Vapi('your-web-token');
+ ```
+
+
+
+ We'll add a button to the page that starts the call when clicked:
+
+ ```html
+  <button id="start-call">Start Call</button>
+  <button id="stop-call">Stop Call</button>
+ ```
+
+ ```js
+ const startCallButton = document.getElementById('start-call');
+
+ startCallButton.addEventListener('click', async () => {
+ await vapi.start('your-assistant-id');
+ });
+
+ const stopCallButton = document.getElementById('stop-call');
+
+ stopCallButton.addEventListener('click', async () => {
+ await vapi.stop();
+ });
+ ```
+
+
+
+ ```js
+ vapi.on('call-start', () => {
+ // Update UI to show that the call has started
+ });
+
+ vapi.on('call-end', () => {
+ // Update UI to show that the call has ended
+ });
+ ```
+
+
+
+
+ ```js
+ vapi.on('speech-start', () => {
+ // Update UI to show that the assistant is speaking
+ });
+
+  vapi.on('speech-end', () => {
+    // Update UI to show that the assistant is done speaking
+  });
+  ```
+
+
+
+
+  All messages sent to the [Server URL](/server-url), including `transcript` and `function-call` messages, are also sent to the client as `message` events. We'll need to check the message's `type` to see which kind it is.
+
+```js
+vapi.on("message", (msg) => {
+ if (msg.type !== "transcript") return;
+
+ if (msg.transcriptType === "partial") {
+ // Update UI to show the live partial transcript
+ }
+
+ if (msg.transcriptType === "final") {
+ // Update UI to show the final transcript
+ }
+});
+```
+
+
+
+
+```js
+vapi.on("message", (msg) => {
+  if (msg.type !== "function-call") return;
+
+  if (msg.functionCall.name === "addTopping") {
+    const topping = msg.functionCall.parameters.topping;
+    // Add the topping to the pizza
+  }
+
+  if (msg.functionCall.name === "goToCheckout") {
+    // Redirect the user to checkout
+  }
+});
+```
+
+
+You should now have a working pizza ordering assistant! 🍕
+
+
+
diff --git a/fern/examples/voice-widget.mdx b/fern/examples/voice-widget.mdx
new file mode 100644
index 0000000..12f5116
--- /dev/null
+++ b/fern/examples/voice-widget.mdx
@@ -0,0 +1,150 @@
+---
+title: Web Snippet
+subtitle: >-
+ Easily integrate the Vapi Voice Widget into your website for enhanced user
+ interaction.
+slug: examples/voice-widget
+---
+
+
+Improve your website's user interaction with the Vapi Voice Widget. This robust tool enables your visitors to engage with a voice assistant for support and interaction, offering a smooth and contemporary way to connect with your services.
+
+## Steps for Installation
+
+
+
+ Copy the snippet below and insert it into your website's HTML, ideally before the closing `