diff --git a/fern/GHL.mdx b/fern/GHL.mdx new file mode 100644 index 0000000..1cd6353 --- /dev/null +++ b/fern/GHL.mdx @@ -0,0 +1,151 @@ +--- +title: How to Connect Vapi with Make & GHL +slug: GHL +--- + + +Vapi's GHL/Make Tools integration allows you to directly import your GHL workflows and Make scenarios into Vapi as Tools. This enables you to create voicebots that can trigger your favorite app integrations and automate complex workflows using voice commands. + +## What are GHL/Make Tools? + +GHL (GoHighLevel) workflows and Make scenarios are powerful automation tools that allow you to connect and integrate various apps and services. With the GHL/Make Tools integration, you can now bring these automations into Vapi and trigger them using voice commands. + +## How does the integration work? + +1. **Import workflows and scenarios**: Navigate to the [Tools section](https://dashboard.vapi.ai/tools) in your Vapi dashboard and import your existing GHL workflows and Make scenarios. + +2. **Add Tools to your assistants**: Once imported, you can add these Tools to your AI assistants, enabling them to trigger the automations based on voice commands. + +3. **Trigger automations with voice**: Your AI assistants can now understand voice commands and execute the corresponding GHL workflows or Make scenarios, allowing for seamless voice-enabled automation. + +## Setting up the GHL/Make Tools integration + +1. **Create a GHL workflow or Make scenario**: Design your automation in GHL or Make, connecting the necessary apps and services. + +2. **Import the workflow/scenario into Vapi**: In the Vapi dashboard, navigate to the Tools section and click on "Import." Select the GHL workflow or Make scenario you want to import. + +3. **Configure the Tool**: Provide a name and description for the imported Tool, and map any required input variables to the corresponding Vapi entities (e.g., extracted from user speech). + +4. **Add the Tool to your assistant**: Edit your AI assistant and add the newly imported Tool to its capabilities. Specify the voice commands that should trigger the Tool. + +5. **Test the integration**: Engage with your AI assistant using the specified voice commands and verify that the corresponding GHL workflow or Make scenario is triggered successfully. 
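+ +To give a sense of what the variable mapping in step 3 refers to, here is a rough sketch of the kind of schema an imported Tool ends up exposing to your assistant. This is illustrative only; the actual name, description, and parameters are generated from your specific workflow or scenario when you import it, so treat every field below as a placeholder. + +```json +{ + "name": "bookAppointment", + "description": "Triggers the GHL appointment-booking workflow.", + "parameters": { + "type": "object", + "properties": { + "datetime": { + "type": "string", + "description": "Appointment date and time in ISO format, extracted from the caller's speech." + }, + "customerName": { + "type": "string", + "description": "Name of the caller." + } + } + } +} +```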
+ +## Use case examples + +### Booking appointments with AI callers + +- Import a GHL workflow that handles appointment booking +- Configure the workflow to accept appointment details (date, time, user info) from Vapi +- Add the Tool to your AI assistant, allowing it to book appointments based on voice commands + +### Updating CRMs with voice-gathered data + +- Import a Make scenario that updates your CRM with customer information +- Map the scenario's input variables to entities extracted from user speech +- Enable your AI assistant to gather customer information via voice and automatically update your CRM + +### Real Estate: Automated Property Information Retrieval + +- Import a Make scenario that retrieves property information from your MLS (Multiple Listing Service) or real estate database +- Configure the scenario to accept a property address or MLS ID as input +- Add the Tool to your AI assistant, allowing potential buyers to request property details using voice commands +- Your AI assistant can then provide key information about the property, such as price, square footage, number of bedrooms/bathrooms, and amenities + +### Healthcare/Telehealth: Appointment Reminders and Prescription Refills + +- Import a GHL workflow that sends appointment reminders and handles prescription refill requests +- Configure the workflow to accept patient information and appointment/prescription details from Vapi +- Add the Tool to your AI assistant, enabling patients to request appointment reminders or prescription refills using voice commands +- Your AI assistant can confirm the appointment details, send reminders via SMS or email, and forward prescription refill requests to the appropriate healthcare provider + +### Restaurant Ordering: Custom Order Placement and Delivery Tracking + +- Import a Make scenario that integrates with your restaurant's online ordering system and delivery tracking platform +- Configure the scenario to accept customer information, order details, and delivery preferences from Vapi +- Add the Tool to your AI assistant, allowing customers to place custom orders and track their delivery status using voice commands +- Your AI assistant can guide customers through the ordering process, suggest menu items based on preferences, and provide real-time updates on the order status and estimated delivery time + +## Best practices + +- Break down complex automations into smaller, focused workflows or scenarios for better maintainability +- Use clear and concise naming conventions for your imported Tools and their input variables +- Thoroughly test the integration to ensure reliable performance and accurate data passing +- Keep your GHL workflows and Make scenarios up to date to reflect any changes in the connected apps or services + +## Troubleshooting + +- If a Tool is not triggering as expected, verify that the voice commands are correctly configured and the input variables are properly mapped +- Check the Vapi logs and the GHL/Make execution logs to identify any errors or issues in the automation flow +- Ensure that the necessary API credentials and permissions are correctly set up in both Vapi and the integrated apps/services + +By leveraging Vapi's GHL/Make Tools integration, you can create powerful voice-enabled automations and streamline your workflows, all without extensive coding. Automate tasks, connect your favorite apps, and unlock the full potential of voice AI with Vapi. 
+ +## Get Support + +Join our Discord to connect with other developers & connect with our team: + + + + Connect with our team & other developers using Vapi. + + + Send our support team an email. + + + +Here are some video tutorials that will guide you on how to use Vapi with services like Make and GoHighLevel: + +
+ + +## **What is Vapi's Knowledge Base?** +Our Knowledge Base is a collection of custom documents that contain information on specific topics or domains. By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries. + +### **Why Use a Knowledge Base?** +Using a Knowledge Base with your voice AI assistant offers several benefits: + +* **Improved accuracy**: By integrating custom documents into your assistant, you can ensure that it provides accurate and up-to-date information to users. +* **Enhanced capabilities**: A Knowledge Base enables your assistant to answer complex queries and provide detailed responses to user inquiries. +* **Customization**: With a Knowledge Base, you can tailor your assistant's responses to specific domains or topics, making it more effective and informative. + +## **How to Create a Knowledge Base** + +To create a Knowledge Base, follow these steps: + +### **Step 1: Upload Your Documents** + +Navigate to Overview > Documents and upload your custom documents in Markdown, PDF, plain text, or Microsoft Word (.doc and .docx) format to Vapi's Knowledge Base. + +Adding documents to your Knowledge Base + +### **Step 2: Create an Assistant** + +Create a new assistant in Vapi and, on the right sidebar menu, select the document you've just added to the Knowledge Base feature. +Adding documents to your assistant + + +### **Step 3: Configure Your Assistant** + +Customize your assistant's system prompt to utilize the Knowledge Base for responding to user queries. + +## **Best Practices for Creating Effective Knowledge Bases** + +* **Organize Your documents**: Organize your documents by topic or category to ensure that your assistant can quickly retrieve relevant information. +* **Use Clear and concise language**: Use clear and concise language in your documents to ensure that your assistant can accurately understand and respond to user queries. +* **Keep your documents up-to-date**: Regularly update your documents to ensure that your assistant provides the most accurate and up-to-date information. + + + For more information on creating effective Knowledge Bases, check out our tutorial on [Best Practices for Knowledge Base Creation](https://youtu.be/i5mvqC5sZxU). + + +By following these guidelines, you can create a comprehensive Knowledge Base that enhances the capabilities of your voice AI assistant and provides valuable information to users. diff --git a/fern/customization/multilingual.mdx b/fern/customization/multilingual.mdx new file mode 100644 index 0000000..8e6aad9 --- /dev/null +++ b/fern/customization/multilingual.mdx @@ -0,0 +1,57 @@ +--- +title: Multilingual +subtitle: Learn how to set up and test multilingual support in Vapi. +slug: customization/multilingual +--- + + +Vapi's multilingual support is primarily facilitated through transcribers, which are part of the speech-to-text process. The pipeline consists of three key elements: text-to-speech, speech-to-text, and the llm model, which acts as the brain of the operation. Each of these elements can be customized using different providers. + +## Transcribers (Speech-to-Text) + +Currently, Vapi supports two providers for speech-to-text transcriptions: + +- `Deepgram` (nova - family models) +- `Talkscriber` (whisper model) + +Each provider supports different languages. For more detailed information, you can visit your dashboard and navigate to the transcribers tab on the assistant page. 
Here, you can see the languages supported by each provider and the available models. **Note that not all models support all languages**. For specific details, you can refer to the documentation for the corresponding providers. + +## Voice (Text-to-Speech) + +Once you have set your transcriber and corresponding language, you can choose a voice for text-to-speech in that language. For example, you can choose a voice with a Spanish accent if needed. + +Vapi currently supports the following providers for text-to-speech: + +- `PlayHT` +- `11labs` +- `Rime-ai` +- `Deepgram` +- `OpenAI` +- `Azure` +- `Lmnt` +- `Neets` + +Each provider offers varying degrees of language support. Azure, for instance, supports the most languages, with approximately 400 prebuilt voices across 140 languages and variants. You can also create your own custom voices with other providers. + +## Multilingual Support + +For multilingual support, you can choose providers like Eleven Labs or Azure, which have models and voices designed for this purpose. This allows your voice assistant to understand and respond in multiple languages, enhancing the user experience for non-English speakers. + +To set up multilingual support, you no longer need to specify the desired language when configuring the voice assistant. This configuration in the voice section is deprecated. + +Instead, you directly choose a voice that supports the desired language from your voice provider. This can be done when you are setting up or modifying your voice assistant. + +Here is an example of how to set up a voice assistant that speaks Spanish: + +```json +{ + "voice": { + "provider": "azure", + "voiceId": "es-ES-ElviraNeural" + } +} +``` + +In this example, the voice `es-ES-ElviraNeural` from the provider `azure` supports Spanish. You can replace `es-ES-ElviraNeural` with the ID of any other voice that supports your desired language. + +By leveraging Vapi's multilingual support, you can make your voice assistant more accessible and user-friendly, reaching a wider audience and providing a better user experience. diff --git a/fern/customization/provider-keys.mdx b/fern/customization/provider-keys.mdx new file mode 100644 index 0000000..1c6090b --- /dev/null +++ b/fern/customization/provider-keys.mdx @@ -0,0 +1,26 @@ +--- +title: Provider Keys +subtitle: Bring your own API keys to Vapi. +slug: customization/provider-keys +--- + + +Have a custom model or voice with one of the providers? Or an enterprise account with volume pricing? + +No problem! You can bring your own API keys to Vapi. You can add them in the [Dashboard](https://dashboard.vapi.ai) under the **Provider Keys** tab. Once your API key is validated, you won't be charged when using that provider through Vapi. Instead, you'll be charged directly by the provider. + +## Transcription Providers + +Currently, the only available transcription provider is `deepgram`. To use a custom model, you can specify the Deepgram model ID in the `transcriber.model` parameter of the [Assistant](/api-reference/assistants/create-assistant). + +## Model Providers + +We currently support any OpenAI-compatible endpoint. This includes services like [OpenRouter](https://openrouter.ai/), [AnyScale](https://www.anyscale.com/), [Together AI](https://www.together.ai/), or your own server. + +To use one of these providers, you can specify the `provider` and `model` in the `model` parameter of the [Assistant](/api-reference/assistants/create-assistant).
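+ +For example, a minimal sketch of a `model` block that points an assistant at OpenRouter might look like the following. This is illustrative only: the model ID shown is an assumption, so substitute whichever model your provider exposes, and check the Assistant API reference for the exact fields. + +```json +{ + "model": { + "provider": "openrouter", + "model": "mistralai/mixtral-8x7b-instruct", + "messages": [ + { + "role": "system", + "content": "You are an assistant." + } + ] + } +} +```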
+ +You can find more details in the [Custom LLMs](customization/custom-llm/fine-tuned-openai-models) section of the documentation. + +## Voice Providers + +All voice providers are supported. Once you've validated your API key through the [Dashboard](https://dashboard.vapi.ai), any voice ID from your provider can be used in the `voice.voiceId` field of the [Assistant](/api-reference/assistants/create-assistant). diff --git a/fern/customization/speech-configuration.mdx b/fern/customization/speech-configuration.mdx new file mode 100644 index 0000000..28fe4eb --- /dev/null +++ b/fern/customization/speech-configuration.mdx @@ -0,0 +1,35 @@ +--- +title: Speech Configuration +subtitle: Timing control for assistant speech +slug: customization/speech-configuration +--- + + +The Start Speaking Plan and Stop Speaking Plan are essential configurations designed to optimize the timing of when the assistant begins and stops speaking during interactions with a customer. These plans ensure that the assistant does not interrupt the customer and also prevent awkward pauses that can occur if the assistant starts speaking too late. Adjusting these parameters helps tailor the assistant’s responsiveness to different conversational dynamics. + +**Note**: At the moment, these configurations can only be made via the API. + +## Start Speaking Plan + +- **Wait Time Before Speaking**: You can set how long the assistant waits before speaking after the customer finishes. The default is 0.4 seconds, but you can increase it if the assistant is speaking too soon, or decrease it if there’s too much delay. + +- **Smart Endpointing**: This feature uses advanced processing to detect when the customer has truly finished speaking, especially if they pause mid-thought. It’s off by default but can be turned on if needed. + +- **Transcription-Based Detection**: Customize how the assistant determines that the customer has stopped speaking based on what they’re saying. This offers more control over the timing. + + +## Stop Speaking Plan + +- **Words to Stop Speaking**: Define how many words the customer needs to say before the assistant stops talking. If you want immediate reaction, set this to 0. Increase it to avoid interruptions by brief acknowledgments like "okay" or "right". + +- **Voice Activity Detection**: Adjust how long the customer needs to be speaking before the assistant stops. The default is 0.2 seconds, but you can tweak this to balance responsiveness and avoid false triggers. + +- **Pause Before Resuming**: Control how long the assistant waits before starting to talk again after being interrupted. The default is 1 second, but you can adjust it depending on how quickly the assistant should resume. + +## Considerations for Configuration + +- **Customer Style**: Think about whether the customer pauses mid-thought or provides continuous speech. Adjust wait times and enable smart endpointing as needed. + +- **Background Noise**: If there’s a lot of background noise, you may need to tweak the settings to avoid false triggers. + +- **Conversation Flow**: Aim for a balance where the assistant is responsive but not intrusive. Test different settings to find the best fit for your needs.
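+ +As a reference, here is a rough sketch of how these plans might be set on the Assistant via the API. The field names below are assumptions based on the API reference at the time of writing, and the values mirror the defaults described above, so double-check them against the [Assistant API](/api-reference/assistants/create-assistant) before relying on them. + +```json +{ + "startSpeakingPlan": { + "waitSeconds": 0.4, + "smartEndpointingEnabled": false + }, + "stopSpeakingPlan": { + "numWords": 0, + "voiceSeconds": 0.2, + "backoffSeconds": 1 + } +} +```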
diff --git a/fern/docs.yml b/fern/docs.yml new file mode 100644 index 0000000..af3818a --- /dev/null +++ b/fern/docs.yml @@ -0,0 +1,352 @@ +instances: + - url: vapi.docs.buildwithfern.com + +title: Vapi +favicon: static/images/favicon.png +logo: + light: static/images/logo/logo-light.png + dark: static/images/logo/logo-dark.png + href: / + height: 28 +colors: + accentPrimary: + dark: '#94ffd2' + light: '#37aa9d' + background: + dark: '#000000' + light: '#FFFFFF' +experimental: + mdx-components: + - snippets +css: assets/styles.css +navbar-links: + - type: minimal + text: Home + href: https://vapi.ai/ + - type: minimal + text: Pricing + href: /pricing + - type: minimal + text: Status + href: https://status.vapi.ai/ + - type: minimal + text: Changelog + href: /changelog + - type: minimal + text: Support + href: /support + - type: filled + text: Dahshboard + rightIcon: fa-solid fa-chevron-right + href: https://example.com/login + rounded: true +tabs: + api-reference: + slug: api-reference + display-name: API Reference + documentation: + display-name: Documentation + slug: documentation +layout: + tabs-placement: header + searchbar-placement: header +navigation: + - tab: documentation + layout: + - section: '' + contents: + - page: Introduction + path: introduction.mdx + - section: General + contents: + - section: How Vapi Works + contents: + - page: Core Models + path: quickstart.mdx + - page: Orchestration Models + path: how-vapi-works.mdx + - page: Knowledge Base + path: knowledgebase.mdx + - section: Pricing + contents: + - page: Overview + path: pricing.mdx + - page: Cost Routing + path: billing/cost-routing.mdx + - page: Billing Limits + path: billing/billing-limits.mdx + - page: Estimating Costs + path: billing/estimating-costs.mdx + - page: Billing Examples + path: billing/examples.mdx + - section: Enterprise + contents: + - page: Vapi Enterprise + path: enterprise/plans.mdx + - page: On-Prem Deployments + path: enterprise/onprem.mdx + - page: Changelog + path: changelog.mdx + - page: Support + path: support.mdx + - page: Status + path: status.mdx + - section: Quickstart + contents: + - page: Dashboard + path: quickstart/dashboard.mdx + - page: Inbound Calling + path: quickstart/inbound.mdx + - page: Outbound Calling + path: quickstart/outbound.mdx + - page: Web Calling + path: quickstart/web.mdx + - section: Client SDKs + contents: + - page: Overview + path: sdks.mdx + - page: Web SDK + path: sdk/web.mdx + - page: Web Snippet + path: examples/voice-widget.mdx + - section: Examples + contents: + - page: Outbound Sales + path: examples/outbound-sales.mdx + - page: Inbound Support + path: examples/inbound-support.mdx + - page: Pizza Website + path: examples/pizza-website.mdx + - page: Python Outbound Snippet + path: examples/outbound-call-python.mdx + - page: Code Resources + path: resources.mdx + - section: Customization + contents: + - page: Provider Keys + path: customization/provider-keys.mdx + - section: Custom LLM + contents: + - page: Fine-tuned OpenAI models + path: customization/custom-llm/fine-tuned-openai-models.mdx + - page: Custom LLM + path: customization/custom-llm/using-your-server.mdx + - section: Custom Voices + contents: + - page: Introduction + path: customization/custom-voices/custom-voice.mdx + - page: Elevenlabs + path: customization/custom-voices/elevenlabs.mdx + - page: PlayHT + path: customization/custom-voices/playht.mdx + - page: Custom Keywords + path: customization/custom-keywords.mdx + - page: Knowledge Base + path: customization/knowledgebase.mdx + - page: 
Multilingual + path: customization/multilingual.mdx + - page: JWT Authentication + path: customization/jwt-authentication.mdx + - page: Speech Configuration + path: customization/speech-configuration.mdx + - section: Core Concepts + contents: + - section: Assistants + contents: + - page: Introduction + path: assistants.mdx + - page: Function Calling + path: assistants/function-calling.mdx + - page: Persistent Assistants + path: assistants/persistent-assistants.mdx + - page: Dynamic Variables + path: assistants/dynamic-variables.mdx + - page: Call Analysis + path: assistants/call-analysis.mdx + - page: Background Messages + path: assistants/background-messages.mdx + - section: Blocks + contents: + - page: Introduction + path: blocks.mdx + - page: Steps + path: blocks/steps.mdx + - page: Block Types + path: blocks/block-types.mdx + - section: Server URL + contents: + - page: Introduction + path: server-url.mdx + - page: Setting Server URLs + path: server-url/setting-server-urls.mdx + - page: Server Events + path: server-url/events.mdx + - page: Developing Locally + path: server-url/developing-locally.mdx + - section: Phone Calling + contents: + - page: Introduction + path: phone-calling.mdx + - section: Squads + contents: + - page: Introduction + path: squads.mdx + - page: Example + path: squads-example.mdx + - section: Advanced Concepts + contents: + - section: Calls + contents: + - page: Call Forwarding + path: call-forwarding.mdx + - page: Ended Reason + path: calls/call-ended-reason.mdx + - page: SIP + path: advanced/calls/sip.mdx + - page: Live Call Control + path: calls/call-features.mdx + - page: Make & GHL Integration + path: GHL.mdx + - page: Tools Calling + path: tools-calling.mdx + - page: Prompting Guide + path: prompting-guide.mdx + - section: Glossary + contents: + - page: Definitions + path: glossary.mdx + - page: FAQ + path: faq.mdx + - section: Community + contents: + - section: Videos + contents: + - page: Appointment Scheduling + path: community/appointment-scheduling.mdx + - page: Comparisons + path: community/comparisons.mdx + - page: Conferences + path: community/conferences.mdx + - page: Demos + path: community/demos.mdx + - page: GoHighLevel + path: community/ghl.mdx + - page: Guide + path: community/guide.mdx + - page: Inbound + path: community/inbound.mdx + - page: Knowledgebase + path: community/knowledgebase.mdx + - page: Outbound + path: community/outbound.mdx + - page: Podcast + path: community/podcast.mdx + - page: Snippets & SDKs Tutorials + path: community/snippets-sdks-tutorials.mdx + - page: Special Mentions + path: community/special-mentions.mdx + - page: Squads + path: community/squads.mdx + - page: Television + path: community/television.mdx + - page: Usecase + path: community/usecase.mdx + - page: My Vapi + path: community/myvapi.mdx + - page: Expert Directory + path: community/expert-directory.mdx + - section: Providers + contents: + - section: Voice + contents: + - page: ElevenLabs + path: providers/voice/elevenlabs.mdx + - page: PlayHT + path: providers/voice/playht.mdx + - page: Azure + path: providers/voice/azure.mdx + - page: OpenAI + path: providers/voice/openai.mdx + - page: Neets + path: providers/voice/neets.mdx + - page: Cartesia + path: providers/voice/cartesia.mdx + - page: LMNT + path: providers/voice/imnt.mdx + - page: RimeAI + path: providers/voice/rimeai.mdx + - page: Deepgram + path: providers/voice/deepgram.mdx + - section: Models + contents: + - page: OpenAI + path: providers/model/openai.mdx + - page: Groq + path: 
providers/model/groq.mdx + - page: DeepInfra + path: providers/model/deepinfra.mdx + - page: Perplexity + path: providers/model/perplexity.mdx + - page: TogetherAI + path: providers/model/togetherai.mdx + - page: OpenRouter + path: providers/model/openrouter.mdx + - section: Transcription + contents: + - page: Deepgram + path: providers/transcriber/deepgram.mdx + - page: Gladia + path: providers/transcriber/gladia.mdx + - page: Talkscriber + path: providers/transcriber/talkscriber.mdx + - page: Voiceflow + path: providers/voiceflow.mdx + - section: Security & Privacy + contents: + - page: HIPAA Compliance + path: security-and-privacy/hipaa.mdx + - page: SOC-2 Compliance + path: security-and-privacy/soc.mdx + - page: Privacy Policy + path: security-and-privacy/privacy-policy.mdx + - page: Terms of Service + path: security-and-privacy/tos.mdx + - tab: api-reference + layout: + - api: API Reference + flattened: true + paginated: true + snippets: + typescript: "@vapi/server-sdk" + python: "vapi_server_sdk" + # - section: Assistants + # contents: [] + # - section: Calls + # contents: [] + # - section: Phone Numbers + # contents: [] + # - section: Files + # contents: [] + # - section: Squads + # contents: [] + # - section: Tools + # contents: [] + # - section: Analytics + # contents: [] + - section: Server and Client + contents: + - page: ServerMessage + path: api-reference/messages/server-message.mdx + - page: ServerMessageResponse + path: api-reference/messages/server-message-response.mdx + - page: ClientMessage + path: api-reference/messages/client-message.mdx + - page: ClientInboundMessage + path: api-reference/messages/client-inbound-message.mdx + - section: '' + contents: + - page: Swagger + path: api-reference/swagger.mdx + - page: OpenAPI + path: api-reference/openapi.mdx + \ No newline at end of file diff --git a/fern/enterprise/onprem.mdx b/fern/enterprise/onprem.mdx new file mode 100644 index 0000000..657edde --- /dev/null +++ b/fern/enterprise/onprem.mdx @@ -0,0 +1,34 @@ +--- +title: On-Prem Deployments +subtitle: Deploy Vapi in your private cloud. +slug: enterprise/onprem +--- + + +Vapi On-Prem allows you to deploy Vapi's best in class enterprise voice infrastructure AI directly in your own cloud. It can be deployed in a dockerized format on any cloud provider, in any geographic location, running on your GPUs. + +With On-Prem, your audio and text data stays in your cloud. Data never passes through Vapi's servers. If you're are handling sensitive data (e.g. health, financial, legal) and are under strict data requirements, you should consider deploying on-prem. + +Your device regularly sends performance and usage information to Vapi's cloud. This data helps adjust your device's GPU resources and is also used for billing. All network traffic from your device is tracked in an audit log, letting your engineering or security team see what the device is doing at all times. + +## Frequently Asked Questions + +#### Can the appliance adjust to my needs? + +Yes, the Vapi On-Prem appliance automatically adjusts its GPU resources to handle your workload as required by our service agreement. It can take a few minutes to adjust to changes in your workload. If you need quicker adjustments, you might want to ask for more GPUs by contacting support@vapi.ai. + +#### What if I can’t get enough GPUs from my cloud provider? + +If you're struggling to get more GPUs from your provider, contact support@vapi.ai for help. + +#### Can I access Vapi's AI models? 
+ +No, our AI models are on secure machines in your Isolated VPC and you can’t log into these machines or check their files. + +#### How can I make sure my data stays within my cloud? + +Your device operates in VPCs that you control. You can check the network settings and firewall rules, and look at traffic logs to make sure everything is as it should be. The Control VPC uses open source components, allowing you to make sure the policies are being followed. Performance data and model updates are sent to Vapi, but all other traffic leaving your device is logged, except for the data sent back to your API clients. + +## Contact us + +For more information about Vapi On-Prem, please contact us at support@vapi.ai diff --git a/fern/enterprise/plans.mdx b/fern/enterprise/plans.mdx new file mode 100644 index 0000000..4f20e11 --- /dev/null +++ b/fern/enterprise/plans.mdx @@ -0,0 +1,23 @@ +--- +title: Vapi Enterprise +subtitle: Build and scale with Vapi. +slug: enterprise/plans +--- + + +If you're building a production application on Vapi, we can help you every step of the way from idea to full-scale deployment. + +On the Pay-As-You-Go plan, there is a limit of **10 concurrent calls**. On Enterprise, we reserve GPUs for you on our Enterprise cluster so you can scale up to **millions of calls**. + +#### Enterprise Plans include: + +- Reserved concurrency and higher rate limits +- Hands-on 24/7 support +- Shared Slack channel with our team +- Included minutes with volume pricing +- Calls with our engineering team 2-3 times per week +- Access to the Vapi SIP trunk for telephony + +## Contact us + +To get started on Vapi Enterprise, [fill out this form](https://book.vapi.ai). diff --git a/fern/examples/inbound-support.mdx b/fern/examples/inbound-support.mdx new file mode 100644 index 0000000..8b2d869 --- /dev/null +++ b/fern/examples/inbound-support.mdx @@ -0,0 +1,148 @@ +--- +title: Inbound Support Example ⚙️ +subtitle: Let's build a technical support assistant that remembers where we left off. +slug: examples/inbound-support +--- + + +We want a phone number we can call to get technical support. We want the assistant to use a provided set of troubleshooting guides to help walk the caller through solving their issue. + +As a bonus, we also want the assistant to use the caller's phone number to remember where we left off if we get disconnected. + + + + We'll start by taking a look at the [Assistant API + reference](/api-reference/assistants/create-assistant) and define our + assistant: + + ```json + { + "transcriber": { + "provider": "deepgram", + "keywords": ["iPhone:1", "MacBook:1.5", "iPad:1", "iMac:0.8", "Watch:1", "TV:1", "Apple:2"] + }, + "model": { + "provider": "openai", + "model": "gpt-4", + "messages": [ + { + "role": "system", + "content": "You're a technical support assistant. You're helping a customer troubleshoot their Apple device. You can ask the customer questions, and you can use the following troubleshooting guides to help the customer solve their issue: ..." + } + ] + }, + "forwardingPhoneNumber": "+16054440129", + "firstMessage": "Hey, I'm an A.I. assistant for Apple. I can help you troubleshoot your Apple device. What's the issue?", + "recordingEnabled": true + } + ``` + + - `transcriber` - We're defining this to make sure the transcriber picks up the custom words related to our devices. + - `model` - We're using the OpenAI GPT-4 model. If you don't need GPT-4, GPT-3.5-turbo is much faster and preferred.
+ - `messages` - We're defining the assistant's instructions for how to run the call. + - `forwardingPhoneNumber` - Since we've added this, the assistant will be provided the [transferCall](/assistants#transfer-call) function to use if the caller asks to be transferred to a person. + - `firstMessage` - This is the first message the assistant will say when the user picks up. + - `recordingEnabled` - We're recording the call so we can hear the conversation later. + + + + Since we want the assistant to remember where we left off, its configuration is going to change based on the caller. So, we're not going to use [temporary assistants](/assistants/persistent-assistants). + + For this example, we're going to store the conversation on our server between calls and use the [Server URL's `assistant-request`](/server-url#retrieving-assistants) to fetch a new configuration based on the caller every time someone calls. + + + + We'll buy a phone number for inbound calls using the [Phone Numbers API](/api-reference/phone-numbers/buy-phone-number). + + ```json + { + "id": "c86b5177-5cd8-447f-9013-99e307a8a7bb", + "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb", + "number": "+11234567890", + "createdAt": "2023-09-29T21:44:37.946Z", + "updatedAt": "2023-12-08T00:57:24.706Z", + } + ``` + + + + When someone calls our number, we want to fetch the assistant configuration from our server. We'll use the [Server URL's `assistant-request`](/server-url#retrieving-assistants) to do this. + + First, we'll create an endpoint on our server for Vapi to hit. It'll receive messages as shown in the [Assistant Request](/server-url#retrieving-assistants-calling) docs. Once created, we'll add that endpoint URL to the **Server URL** field in the Account page on the [Vapi Dashboard](https://dashboard.vapi.ai). + + + + We'll want to save the conversation at the end of the call for the next time they call. We'll use the [Server URL's `end-of-call-report`](/server-url#end-of-call-report) message to do this. + + At the end of each call, we'll get a message like this: + + ```json + { + "message": { + "type": "end-of-call-report", + "endedReason": "hangup", + "call": { Call Object }, + "recordingUrl": "https://vapi-public.s3.amazonaws.com/recordings/1234.wav", + "summary": "The user mentioned they were having an issue with their iPhone restarting randomly. They restarted their phone, but the issue persisted. They mentioned they were using an iPhone 12 Pro Max. They mentioned they were using iOS 15.", + "transcript": "Hey, I'm an A.I. assistant for Apple...", + "messages":[ + { + "role": "assistant", + "message": "Hey, I'm an A.I. assistant for Apple. I can help you troubleshoot your Apple device. What's the issue?", + }, + { + "role": "user", + "message": "Yeah I'm having an issue with my iPhone restarting randomly.", + }, + ... + ] + } + } + ``` + + We'll save the `call.customer.number` and `summary` fields to our database for the next time they call. + + + When our number receives a call, Vapi will also hit our server's endpoint with a message like this: + + ```json + { + "message": { + "type": "assistant-request", + "call": { Call Object }, + } + } + ``` + + We'll check our database to see if we have a conversation for this caller. If we do, we'll create an assistant configuration like in Step 1 and respond with it: + + ```json + { + "assistant": { + ... + "model": { + "provider": "openai", + "model": "gpt-4", + "messages": [ + { + "role": "system", + "content": "You're a technical support assistant. 
Here's where we left off: ..." + } + ] + }, + ... + } + } + ``` + + If we don't, we'll just respond with the assistant configuration from Step 1. + + + + + We'll call our number and see if it works. Give it a call, and tell it you're having an issue with your iPhone restarting randomly. + + Hang up, and call back. Then ask what the issue was. The assistant should remember where we left off. + + + diff --git a/fern/examples/outbound-call-python.mdx b/fern/examples/outbound-call-python.mdx new file mode 100644 index 0000000..42e5738 --- /dev/null +++ b/fern/examples/outbound-call-python.mdx @@ -0,0 +1,56 @@ +--- +title: Outbound Calls from Python 📞 +subtitle: Some sample code for placing an outbound call using Python +slug: examples/outbound-call-python +--- + + +```python +import requests + +# Your Vapi API Authorization token +auth_token = '' +# The Phone Number ID, and the Customer details for the call +phone_number_id = '' +customer_number = "+14151231234" + +# Create the header with Authorization token +headers = { + 'Authorization': f'Bearer {auth_token}', + 'Content-Type': 'application/json', +} + +# Create the data payload for the API request +data = { + 'assistant': { + "firstMessage": "Hey, what's up?", + "model": { + "provider": "openai", + "model": "gpt-3.5-turbo", + "messages": [ + { + "role": "system", + "content": "You are an assistant." + } + ] + }, + "voice": "jennifer-playht" + }, + 'phoneNumberId': phone_number_id, + 'customer': { + 'number': customer_number, + }, +} + +# Make the POST request to Vapi to create the phone call +response = requests.post( + 'https://api.vapi.ai/call/phone', headers=headers, json=data) + +# Check if the request was successful and print the response +if response.status_code == 201: + print('Call created successfully') + print(response.json()) +else: + print('Failed to create call') + print(response.text) +``` diff --git a/fern/examples/outbound-sales.mdx b/fern/examples/outbound-sales.mdx new file mode 100644 index 0000000..a2e7692 --- /dev/null +++ b/fern/examples/outbound-sales.mdx @@ -0,0 +1,148 @@ +--- +title: Outbound Sales Example 📞 +subtitle: Let's build an outbound sales agent that can schedule appointments. +slug: examples/outbound-sales +--- + + +We want this agent to be able to call a list of leads and schedule appointments. We'll create our assistant, create a phone number for it, then we'll configure our server for function calling to book the appointments. + + + + We'll start by taking a look at the [Assistant API + reference](/api-reference/assistants/create-assistant) and define our + assistant: + + ```json + { + "transcriber":{ + "provider": "deepgram", + "keywords": ["Bicky:1"] + }, + "model": { + "provider": "openai", + "model": "gpt-4", + "messages": [ + { + "role": "system", + "content": "You're a sales agent for a Bicky Realty. You're calling a list of leads to schedule appointments to show them houses..." + } + ], + "functions": [ + { + "name": "bookAppointment", + "description": "Used to book the appointment.", + "parameters": { + "type": "object", + "properties": { + "datetime": { + "type": "string", + "description": "The date and time of the appointment in ISO format." + } + } + } + } + ] + }, + "voice": { + "provider": "openai", + "voiceId": "onyx" + }, + "forwardingPhoneNumber": "+16054440129", + "voicemailMessage": "Hi, this is Jennifer from Bicky Realty. We were just calling to let you know...", + "firstMessage": "Hi, this Jennifer from Bicky Realty. We're calling to schedule an appointment to show you a house. 
When would be a good time for you?", + "endCallMessage": "Thanks for your time.", + "endCallFunctionEnabled": true, + "recordingEnabled": false + } + ``` + Let's break this down: + - `transcriber` - We're defining this to make sure the transcriber picks up the custom word "Bicky". + - `model` - We're using the OpenAI GPT-4 model, which is better at function calling. + - `messages` - We're defining the assistant's instructions for how to run the call. + - `functions` - We're providing a `bookAppointment` function with a `datetime` parameter. The assistant can call this during the conversation to book the appointment. + - `voice` - We're using the Onyx voice from OpenAI. + - `forwardingPhoneNumber` - Since we've added this, the assistant will be provided the [transferCall](/assistants#transfer-call) function to use. + - `voicemailMessage` - If the call goes to voicemail, this message will be played. + - `firstMessage` - This is the first message the assistant will say when the user picks up. + - `endCallMessage` - This is the message the assistant will say when it decides to hang up. + - `endCallFunctionEnabled` - This will give the assistant the [endCall](/assistants#end-call) function. + - `recordingEnabled` - We've disabled recording, since we don't have the user's consent to record the call. + + We'll then make a POST request to the [Create Assistant](/api-reference/assistants/create-assistant) endpoint to create the assistant. + + + + We'll buy a phone number for outbound calls using the [Phone Numbers API](/phone-calling#set-up-a-phone-number). + + ```json + { + "id": "c86b5177-5cd8-447f-9013-99e307a8a7bb", + "orgId": "aa4c36ba-db21-4ce0-9c6e-99e307a8a7bb", + "number": "+11234567890", + "createdAt": "2023-09-29T21:44:37.946Z", + "updatedAt": "2023-12-08T00:57:24.706Z" + } + ``` + + Great, let's take note of that `id` field; we'll need it later. + + + + When the assistant calls that `bookAppointment` function, we'll want to handle that function call and actually book the appointment. We also want to let the user know if booking the appointment was unsuccessful. + + First, we'll create an endpoint on our server for Vapi to hit. It'll receive messages as shown in the [Function Calling](/server-url#function-calling) docs. Once created, we'll add that endpoint URL to the **Server URL** field in the Account page on the [Vapi Dashboard](https://dashboard.vapi.ai). + + + + So now, when the assistant decides to call `bookAppointment`, our server will get something like this: + + ```json + { + "message": { + "type": "function-call", + "call": { Call Object }, + "functionCall": { + "name": "bookAppointment", + "parameters": "{ \"datetime\": \"2023-09-29T21:44:37.946Z\"}" + } + } + } + ``` + + We'll do our own logic to book the appointment, then we'll respond to the request with the result to let the assistant know it was booked: + + ```json + { "result": "The appointment was booked successfully." } + ``` + + or, if it failed: + + ```json + { "result": "The appointment time is unavailable, please try another time." } + ``` + + So, when the assistant calls this function, these results will be appended to the conversation, and the assistant will respond to the user knowing the result. + + Great, now we're ready to start calling leads!
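+ +Before moving on, here is a minimal sketch of what that endpoint could look like, using Node and Express purely as an example stack. The route path and the `bookAppointmentInCalendar` helper are placeholders for your own setup, and note that, as in the payload above, `parameters` may arrive as a JSON string: + +```javascript +const express = require("express"); +const app = express(); +app.use(express.json()); + +// Stub: replace with your real calendar or booking integration. +async function bookAppointmentInCalendar(datetime) { + console.log("Booking appointment for", datetime); + return true; +} + +app.post("/vapi/server-url", async (req, res) => { + const { message } = req.body; + // We only care about function-call messages here. + if (!message || message.type !== "function-call") return res.sendStatus(200); + if (message.functionCall.name === "bookAppointment") { + // Parameters may be a stringified JSON object, as shown in the example payload. + const params = typeof message.functionCall.parameters === "string" ? JSON.parse(message.functionCall.parameters) : message.functionCall.parameters; + const booked = await bookAppointmentInCalendar(params.datetime); + return res.json({ result: booked ? "The appointment was booked successfully." : "The appointment time is unavailable, please try another time." }); + } + res.sendStatus(200); +}); + +app.listen(3000); +```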
+ + + + We'll use the [Create Phone Call](/api-reference/calls/create-phone-call) endpoint to place a call to a lead: + + ```json + { + "phoneNumberId": "c86b5177-5cd8-447f-9013-99e307a8a7bb", + "assistantId": "d87b5177-5cd8-447f-9013-99e307a8a7bb", + "customer": { + "number": "+11234567890" + } + } + ``` + + Since we also defined a `forwardingPhoneNumber`, when the user asks to speak to a human, the assistant will transfer the call to that number automatically. + + We can then check the [Dashboard](https://dashboard.vapi.ai) to see the call logs and read the transcripts. + + + diff --git a/fern/examples/pizza-website.mdx b/fern/examples/pizza-website.mdx new file mode 100644 index 0000000..0f52509 --- /dev/null +++ b/fern/examples/pizza-website.mdx @@ -0,0 +1,165 @@ +--- +title: Pizza Website Example 🍕 +subtitle: Let's build a pizza ordering assistant for our website. +slug: examples/pizza-website +--- + + +In this example, we'll be using the [Web SDK](https://github.com/VapiAI/web) to create an assistant that can take a pizza order. Since all the [Client SDKs](/sdks) have equivalent functionality, you can use this example as a guide for any Vapi client. + +We want to add a button to the page to start a call, update our UI with the call status, and display what the user's saying while they say it. When the user mentions a topping, we should add it to the pizza. When they're done, we should redirect them to checkout. + + + + We'll start by taking a look at the [Assistant API + reference](/api-reference/assistants/create-assistant) and define our + assistant: + + ```json + { + "model": { + "provider": "openai", + "model": "gpt-4", + "messages": [ + { + "role": "system", + "content": "You're a pizza ordering assistant. The user will ask for toppings, you'll add them. When they're done, you'll redirect them to checkout." + } + ], + "functions": [ + { + "name": "addTopping", + "description": "Used to add a topping to the pizza.", + "parameters": { + "type": "object", + "properties": { + "topping": { + "type": "string", + "description": "The name of the topping. For example, 'pepperoni'." + } + } + } + }, + { + "name": "goToCheckout", + "description": "Redirects the user to checkout and order their pizza.", + "parameters": {"type": "object", "properties": {}} + } + ] + }, + "firstMessage": "Hi, I'm the pizza ordering assistant. What toppings would you like?", + } + ``` + Let's break this down: + - `model` - We're using the OpenAI GPT-4 model, which is better at function calling. + - `messages` - We're defining the assistant's instructions for how to run the call. + - `functions` - We're providing a addTopping function with a topping parameter. The assistant can call this during the conversation to add a topping. We're also adding goToCheckout, with an empty parameters object. The assistant can call this to redirect the user to checkout. + - `firstMessage` - This is the first message the assistant will say when the user starts the call. + + We'll then make a POST request to the [Create Assistant](/api-reference/assistants/create-assistant) endpoint to create the assistant. + + + + We'll follow the `README` for the [Web SDK](https://github.com/VapiAI/web) to get it installed. 
+ + We'll then get our **Public Key** from the [Vapi Dashboard](https://dashboard.vapi.ai) and initialize the SDK: + + ```js + import Vapi from '@vapi-ai/web'; + + const vapi = new Vapi('your-web-token'); + ``` + + + + We'll add a button to the page that starts the call when clicked: + + ```html + + + ``` + + ```js + const startCallButton = document.getElementById('start-call'); + + startCallButton.addEventListener('click', async () => { + await vapi.start('your-assistant-id'); + }); + + const stopCallButton = document.getElementById('stop-call'); + + stopCallButton.addEventListener('click', async () => { + await vapi.stop(); + }); + ``` + + + + ```js + vapi.on('call-start', () => { + // Update UI to show that the call has started + }); + + vapi.on('call-end', () => { + // Update UI to show that the call has ended + }); + ``` + + + + + ```js + vapi.on('speech-start', () => { + // Update UI to show that the assistant is speaking + }); + +vapi.on('speech-end', () => { +// Update UI to show that the assistant is done speaking +}); + +```` + + + + + All messages send to the [Server URL](/server-url), including `transcript` and `function-call` messages, are also sent to the client as `message` events. We'll need to check the `type` of the message to see what type it is. + +```js +vapi.on("message", (msg) => { + if (msg.type !== "transcript") return; + + if (msg.transcriptType === "partial") { + // Update UI to show the live partial transcript + } + + if (msg.transcriptType === "final") { + // Update UI to show the final transcript + } +}); +```` + + + + +```javascript +vapi.on('message', (msg) => { + if (msg.type !== "function-call") return; + +if (msg.functionCall.name === "addTopping") { +const topping = msg.functionCall.parameters.topping; +// Add the topping to the pizza +} + +if (msg.functionCall.name === "goToCheckout") { +// Redirect the user to checkout +} +}); + +``` + + +You should now have a working pizza ordering assistant! 🍕 + + + +``` diff --git a/fern/examples/voice-widget.mdx b/fern/examples/voice-widget.mdx new file mode 100644 index 0000000..12f5116 --- /dev/null +++ b/fern/examples/voice-widget.mdx @@ -0,0 +1,150 @@ +--- +title: Web Snippet +subtitle: >- + Easily integrate the Vapi Voice Widget into your website for enhanced user + interaction. +slug: examples/voice-widget +--- + + +Improve your website's user interaction with the Vapi Voice Widget. This robust tool enables your visitors to engage with a voice assistant for support and interaction, offering a smooth and contemporary way to connect with your services. + +## Steps for Installation + + + + Copy the snippet below and insert it into your website's HTML, ideally before the closing `` tag. + + ```html + + ``` + + + + From your Vapi dashboard, create an assistant to get the assistant ID. Alternatively, define an assistant configuration directly in your website's code as demonstrated in the example below. + ```javascript + const assistant = { + model: { + provider: "openai", + model: "gpt-3.5-turbo", + systemPrompt: + "You're a versatile AI assistant named Vapi who is fun to talk with.", + }, + voice: { + provider: "11labs", + voiceId: "paula", + }, + firstMessage: "Hi, I am Vapi how can I assist you today?", + }; + ``` + + + + Modify the `buttonConfig` object to align with your website's style and branding. Choose between a pill or round button and set colors, positions, and icons. 
+ ```javascript + const buttonConfig = { + position: "bottom-right", // "bottom" | "top" | "left" | "right" | "top-right" | "top-left" | "bottom-left" | "bottom-right" + offset: "40px", // decide how far the button should be from the edge + width: "50px", // min-width of the button + height: "50px", // height of the button + idle: { // button state when the call is not active. + color: `rgb(93, 254, 202)`, + type: "pill", // or "round" + title: "Have a quick question?", // only required in case of Pill + subtitle: "Talk with our AI assistant", // only required in case of pill + icon: `https://unpkg.com/lucide-static@0.321.0/icons/phone.svg`, + }, + loading: { // button state when the call is connecting + color: `rgb(93, 124, 202)`, + type: "pill", // or "round" + title: "Connecting...", // only required in case of Pill + subtitle: "Please wait", // only required in case of pill + icon: `https://unpkg.com/lucide-static@0.321.0/icons/loader-2.svg`, + }, + active: { // button state when the call is in progress or active. + color: `rgb(255, 0, 0)`, + type: "pill", // or "round" + title: "Call is in progress...", // only required in case of Pill + subtitle: "End the call.", // only required in case of pill + icon: `https://unpkg.com/lucide-static@0.321.0/icons/phone-off.svg`, + }, + }; + ``` + + + + You can use the `vapiInstance` returned from the run function in the snippet to further customize the behaviour. For instance, you might want to listen to various EventSource, or even send some messages to the bot programmatically. + + ```js + vapiInstance.on('speech-start', () => { + console.log('Speech has started'); + }); + + vapiInstance.on('speech-end', () => { + console.log('Speech has ended'); + }); + + vapiInstance.on('call-start', () => { + console.log('Call has started'); + }); + + vapiInstance.on('call-end', () => { + console.log('Call has stopped'); + }); + + vapiInstance.on('volume-level', (volume) => { + console.log(`Assistant volume level: ${volume}`); + }); + + // Function calls and transcripts will be sent via messages + vapiInstance.on('message', (message) => { + console.log(message); + }); + + vapiInstance.on('error', (e) => { + console.error(e) + }); + ``` + + + + +## Customization + +Modify your assistant's behavior and the initial message users will see. Refer to the provided examples to customize the assistant's model, voice, and initial greeting. + +## UI Customization + +For advanced styling, target the exposed CSS and other classes to ensure the widget's appearance aligns with your website's design. Here is a list of the classes you can customize: + +- `.vapi-btn`: The primary class for the Vapi button. +- `.vapi-btn-is-idle`: The class for the Vapi button when the call is disconnected. +- `.vapi-btn-is-active`: The class for the Vapi button when the call is active. +- `.vapi-btn-is-loading`: The class for the Vapi button when the call is connecting. +- `.vapi-btn-is-speaking`: The class for the Vapi button when the bot is speaking. +- `.vapi-btn-pill`: The class for Vapi button to set pill variant. +- `.vapi-btn-round`: The class for Vapi button to set round variant. diff --git a/fern/faq.mdx b/fern/faq.mdx new file mode 100644 index 0000000..b90cd5a --- /dev/null +++ b/fern/faq.mdx @@ -0,0 +1,7 @@ +--- +title: Frequently Asked Questions +subtitle: Frequently asked questions about Vapi. 
slug: faq +--- + + diff --git a/fern/fern.config.json b/fern/fern.config.json index 02f2b90..fb5fb7c 100644 --- a/fern/fern.config.json +++ b/fern/fern.config.json @@ -1,4 +1,4 @@ { "organization": "vapi", - "version": "0.43.8" + "version": "0.44.11" } \ No newline at end of file diff --git a/fern/glossary.mdx b/fern/glossary.mdx new file mode 100644 index 0000000..1ed239a --- /dev/null +++ b/fern/glossary.mdx @@ -0,0 +1,122 @@ +--- +title: Definitions +subtitle: Useful terms and definitions for Vapi & voice AI applications. +slug: glossary +--- + + +## A + +### At-cost + +"At-cost" is often used when discussing pricing. It means "without profit to the seller". Vapi charges at-cost for requests made to [STT](/glossary#stt), [LLM](/glossary#large-language-model), & [TTS](/glossary#tts) providers. + +## B + +### Backchanneling + +A backchannel occurs when a listener provides verbal or non-verbal feedback to a speaker during a conversation. + +Examples of backchanneling in English include such expressions as "yeah", "OK", "uh-huh", "hmm", "right", and "I see". + +This feedback is often not semantically significant to the conversation, but rather serves to signify the listener's attention, understanding, sympathy, or agreement. + +## E + +### Endpointing + +See [speech endpointing](/glossary#speech-endpointing). + +## I + +### Inbound Call + +This is a call received by an assistant **_from_** another phone number (w/ the assistant being the "person" answering). The call comes **"in"**-ward to a number (from an external caller) — hence the term "inbound call". + +### Inference + +You may often hear the term "run inference" when referring to running a large language model against an input prompt to receive text output back. + +The process of running a prompt against an LLM for output is called "inference". + +## L + +### Large Language Model + +Large Language Models (or "LLM", for short) are machine learning models trained on large amounts of text, & later used to generate text in a probabilistic manner, "token-by-token". + +For further reading see [large language model wiki](https://en.wikipedia.org/wiki/Large_language_model). + +### LLM + +See [Large Language Model](/glossary#large-language-model). + +## O + +### Outbound Call + +This is a call made by an assistant **_to_** another target phone number (w/ the assistant being the "person" dialing). The call goes **"out"**-ward to another number — hence the term "outbound call". + +## S + +### Server URL + +A "server url" is an endpoint you expose to Vapi to receive conversation data in real-time. Server urls can reply with meaningful responses, distinguishing them from traditional [webhooks](/glossary#webhook). + +See our [server url](/server-url) guide to learn more. + +### SDK + +Stands for "Software Development Kit" — these are pre-packaged libraries & platform-specific building tools that a software publisher creates to expedite & increase the ease of integration for developers. + +### Speech Endpointing + +Speech endpointing is the process of detecting the start and end of (a line of) speech in an audio signal. This is an important function in conversation turn detection. + +A starting heuristic for the end of a user's speech is the detection of silence. If someone does not speak for a certain number of milliseconds, the utterance can be considered complete.
A more robust & ideal approach is to actually understand what the user is saying (as well as the current conversation's state & the speech turn's intent) to determine if the user is just pausing for effect, or actually finished speaking. + +Vapi uses a combination of silence detection and machine learning models to properly endpoint conversation speech (to prevent improper interruption & encourage proper [backchanneling](/glossary#backchanneling)). + +Additional reading on speech endpointing can be found [here](https://en.wikipedia.org/wiki/Speech_segmentation) & on [Deepgram's docs](https://developers.deepgram.com/docs/endpointing). + +### STT + +An abbreviation used for "Speech-to-text". The process of converting physical sound waves into raw transcript text (a process called "transcription"). + +## T + +### Telemarketing Sales Rule + +The Telemarketing Sales Rule (or "TSR" for short) is a regulation established by the Federal Trade Commission ([ftc.gov](https://www.ftc.gov/)) in the United States to protect consumers from deceptive and abusive telemarketing practices. + +**You may only conduct outbound calls to phone numbers which you have consent to contact.** Violating TSR rules can result in significant civil (or even criminal) penalties. + +Learn more on the [FTC website](https://www.ftc.gov/legal-library/browse/rules/telemarketing-sales-rule). + +### TTS + +An abbreviation used for "Text-to-speech". The process of converting raw text into playable audio data. + +## V + +### Voice-to-Voice + +"Voice-to-voice" is often a term brought up in discussing voice AI system latency — the time it takes to go from a user finishing their speech (however that endpoint is computed) → to the AI agent's first speech chunk/byte being played back on a client’s device. + +Ideally, this process should happen in \<1s, better if closer to 500-700ms (responding too quickly can be an issue as well). Voice AI applications must closely watch this metric to ensure their applications stay responsive & usable. + +## W + +### Webhook + +A webhook is a server endpoint you expose to external services with the intention of receiving external data in real-time. Your exposed URL is essentially a "drop-bin" for data to come in from external providers to update & inform your systems. + +Traditionally, webhooks are unidirectional & stateless. Endpoints only reply with a status code to signal acknowledgement. + + + To make the distinction clear, Vapi calls these "[server urls](/server-url)". + Certain requests made to your server (like assistant requests) require a reply + with meaningful data. + diff --git a/fern/how-vapi-works.mdx b/fern/how-vapi-works.mdx new file mode 100644 index 0000000..26cb6a7 --- /dev/null +++ b/fern/how-vapi-works.mdx @@ -0,0 +1,68 @@ +--- +title: Orchestration Models +subtitle: All the fancy stuff Vapi does on top of the core models. +slug: how-vapi-works +--- + + +Vapi also runs a suite of audio and text models that make its latency-optimized Speech-to-Text (STT), Large Language Model (LLM), & Text-to-Speech (TTS) pipeline feel human. + +Here's a high-level overview of the Vapi architecture: + + + + + +These are some of the models that are part of the Orchestration suite. We currently have lots of other models in the pipeline that will be added to the orchestration suite soon. The ultimate goal is to achieve human performance. + +### Endpointing + +Endpointing is a fancy word for knowing when the user is done speaking. Traditional methods use silence detection with a timeout.
Unfortunately, if we want sub-second response times, that's not going to work.
+
+Vapi uses a custom fusion audio-text model to know when a user has completed their turn. Based on both the user's tone and what they're saying, it decides how long to pause before hitting the LLM.
+
+This is critical to make sure the user isn't interrupted mid-thought while still providing sub-second response times when they're done speaking.
+
+### Interruptions (Barge-in)
+
+Interruption (a.k.a. "barge-in" in research circles) is the ability to detect when the user would like to interject, and to stop the assistant's speech.
+
+Vapi uses a custom model to distinguish when there is a true interruption, like "stop", "hold up", "that's not what I mean", and when there isn't, like "yeah", "oh gotcha", "okay."
+
+It also keeps track of where the assistant was cut off, so the LLM knows what it wasn't able to say (a toy sketch of this bookkeeping appears at the end of this page).
+
+### Background Noise Filtering
+
+Many of our models, including the transcriber, are audio-based. In the real world, things like music and car horns can interfere with model performance.
+
+We use a proprietary real-time noise filtering model to ensure the audio is cleaned, without sacrificing latency, before it reaches the inner models of the pipeline.
+
+### Background Voice Filtering
+
+We rely quite heavily on the transcription model to know what's going on: for interruptions, endpointing, backchanneling, and for the user's statement passed to the LLM.
+
+Transcription models are built to pick up everything that sounds like speech, so this can be a problem. As you can imagine, having a TV on in the background or echo coming back into the mic can severely impact the conversational ability of a system like Vapi.
+
+Background noise cancellation is a well-researched problem. Background voice cancellation is not. To solve this, we built a proprietary audio filtering model that's able to **focus in** on the primary speaker and block everything else out.
+
+### Backchanneling
+
+Humans like to affirm each other while they speak with statements like "yeah", "uh-huh", "got it", "oh no!"
+
+These aren't considered interruptions; they're used to let the speaker know that their statement has been understood, and to encourage them to continue.
+
+A backchannel cue used at the wrong moment can derail a user's statement. Vapi uses a proprietary fusion audio-text model to determine the best moment to backchannel and to decide which backchannel cue is most appropriate to use.
+
+### Emotion Detection
+
+How a person says something is just as important as what they're saying. So we've trained a real-time audio model to extract the emotional inflection of the user's statement.
+
+This emotional information is then fed into the LLM, so it knows to behave differently if the user is angry, annoyed, or confused.
+
+### Filler Injection
+
+The output of LLMs tends to be formal rather than conversational. People speak with phrases like "umm", "ahh", "I mean", "like", "so", etc.
+
+You can prompt the model to output like this, but we treat our users' prompts as **sacred**. Making a change like this to a prompt can change the behavior in unintended ways.
+
+To ensure we don't add additional latency transforming the output, we've built a custom model that's able to convert streaming input and make it sound conversational in real-time.
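+
+Returning to the Interruptions section above, here is a toy sketch (in TypeScript, and not Vapi's internal implementation) of the cut-off bookkeeping it describes. The idea that TTS playback can report a word-level progress counter is an assumption made for the example:
+
+```typescript
+// Toy illustration of interruption bookkeeping: when the user barges in, split the
+// assistant's message into what was actually spoken vs. what was cut off, so the
+// conversation history the LLM sees reflects what the user really heard.
+interface TurnAfterInterruption {
+  spoken: string;   // what the user actually heard
+  unspoken: string; // what the assistant never got to say
+}
+
+function handleBargeIn(fullMessage: string, wordsSpoken: number): TurnAfterInterruption {
+  const words = fullMessage.split(/\s+/);
+  return {
+    spoken: words.slice(0, wordsSpoken).join(" "),
+    unspoken: words.slice(wordsSpoken).join(" "),
+  };
+}
+
+// Example: playback reports it was 6 words in when the user said "stop".
+const turn = handleBargeIn(
+  "Your appointment is booked for Tuesday at 3pm, is there anything else?",
+  6,
+);
+// turn.spoken   -> "Your appointment is booked for Tuesday"
+// turn.unspoken -> "at 3pm, is there anything else?"
+```
+
+Only the `spoken` portion would be committed to the transcript as the assistant's turn; the `unspoken` remainder is what the model "knows it wasn't able to say."
+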
diff --git a/fern/introduction.mdx b/fern/introduction.mdx new file mode 100644 index 0000000..aa46ab2 --- /dev/null +++ b/fern/introduction.mdx @@ -0,0 +1,243 @@ +--- +title: Introduction +subtitle: Vapi is the Voice AI platform for developers. +slug: introduction +--- + + + + + +Vapi lets developers build, test, & deploy voice AI agents in minutes rather than months — solving for the foundational challenges voice AI applications face: + + + + Turn-taking, interruption handling, backchanneling, and more. + + + Responsive conversation demands low latency. Internationally. (\<500-800ms voice-to-voice). + + + Taking actions during conversation, getting data to your services for custom actions. + + + Review conversation audio, transcripts, & metadata. + + + + + Implemented from scratch, this functionality can take months to build, and + large, continuous, resources to maintain & improve. + + +Vapi abstracts away these complexities, allowing developers to focus on the core of their voice AI application's business logic. **Shipping in days, not months.** + +## Quickstart Guides + +Get up & running in minutes with one of our [quickstart](/quickstart) guides: + +#### No Code + + + + The easiest way to start with Vapi. Run a voice agent in minutes. + + + Quickly get started handling inbound phone calls. + + + Quickly get started sending outbound phone calls. + + + +#### Platform-Specific + + + + Quickly get started making web calls. Web developers, this is for you. + + + +## Examples + +Explore end-to-end examples for some common voice workflows: + + + + We’ll build an outbound sales agent that can schedule appointments. + + + We’ll build an technical support assistant that remembers where we left off. + + + We'll build an order taking agent for our pizza website. + + + +## Key Concepts + +Gain a deep understanding of key concepts in Vapi, as well as how Vapi works: + +#### Core Concepts + + + + Assistants set the foundation for applications built on Vapi. + + + Server URLs allow Vapi to deliver your application data in realtime. + + + Learn the ins-and-outs of telephony & conducting phone calls on Vapi. + + + Learn about privacy concepts like HIPAA & data privacy on Vapi. + + + +#### Platform + + + + Learn what goes on behind-the-scenes to make Vapi work. + + + +## Explore Our SDKs + +Our SDKs are open source, and available on [our GitHub](https://github.com/VapiAI): + + + + Add a Vapi assistant to your web application. + + + Add a Vapi assistant to your iOS app. + + + Add a Vapi assistant to your Flutter app. + + + Add a Vapi assistant to your React Native app. + + + Multi-platform. Mac, Windows, and Linux. + + + +## FAQ + +Common questions asked by other users: + + + +## Get Support + +Join our Discord to connect with other developers & connect with our team: + + + + Connect with our team & other developers using Vapi. + + + Send our support team an email. + + diff --git a/fern/knowledgebase.mdx b/fern/knowledgebase.mdx new file mode 100644 index 0000000..891415b --- /dev/null +++ b/fern/knowledgebase.mdx @@ -0,0 +1,64 @@ +--- +title: Creating Custom Knowledge Bases for Your Voice AI Assistants +subtitle: >- + Learn how to create and integrate custom knowledge bases into your voice AI + assistants. +slug: knowledgebase +--- + + + +## **What is Vapi's Knowledge Base?** +Our Knowledge Base is a collection of custom documents that contain information on specific topics or domains. 
By integrating a Knowledge Base into your voice AI assistant, you can enable it to provide more accurate and informative responses to user queries.
+
+### **Why Use a Knowledge Base?**
+Using a Knowledge Base with your voice AI assistant offers several benefits:
+
+* **Improved accuracy**: By integrating custom documents into your assistant, you can ensure that it provides accurate and up-to-date information to users.
+* **Enhanced capabilities**: A Knowledge Base enables your assistant to answer complex queries and provide detailed responses to user inquiries.
+* **Customization**: With a Knowledge Base, you can tailor your assistant's responses to specific domains or topics, making it more effective and informative.
+
+## **How to Create a Knowledge Base**
+
+To create a Knowledge Base, follow these steps:
+
+### **Step 1: Upload Your Documents**
+
+Navigate to Overview > Documents and upload your custom documents in Markdown, PDF, plain text, or Microsoft Word (.doc and .docx) format to Vapi's Knowledge Base.
+
+
+Adding documents to your Knowledge Base
+
+
+### **Step 2: Create an Assistant**
+
+Create a new assistant in Vapi and, in the right sidebar menu, select the document you've just added to the Knowledge Base.
+
+
+Adding documents to your assistant
+
+
+### **Step 3: Configure Your Assistant**
+
+Customize your assistant's system prompt to utilize the Knowledge Base for responding to user queries.
+
+## **Best Practices for Creating Effective Knowledge Bases**
+
+* **Organize your documents**: Organize your documents by topic or category to ensure that your assistant can quickly retrieve relevant information.
+* **Use clear and concise language**: Use clear and concise language in your documents to ensure that your assistant can accurately understand and respond to user queries.
+* **Keep your documents up-to-date**: Regularly update your documents to ensure that your assistant provides the most accurate and up-to-date information.
+
+
+  For more information on creating effective Knowledge Bases, check out our tutorial on [Best Practices for Knowledge Base Creation](https://youtu.be/i5mvqC5sZxU).
+
+
+By following these guidelines, you can create a comprehensive Knowledge Base that enhances the capabilities of your voice AI assistant and provides valuable information to users.
\ No newline at end of file
diff --git a/fern/phone-calling.mdx b/fern/phone-calling.mdx
new file mode 100644
index 0000000..a3dd1ad 100644
--- /dev/null
+++ b/fern/phone-calling.mdx
@@ -0,0 +1,44 @@
+---
+title: Phone Calling
+subtitle: Learn how to create and configure phone numbers with Vapi.
+slug: phone-calling
+---
+
+
+You can set up a phone number to place and receive phone calls. Phone numbers can be bought directly through Vapi, or you can use your own from Twilio.
+
+You can buy a phone number through the dashboard or use the [`/phone-numbers/buy`](/api-reference/phone-numbers/buy-phone-number) endpoint.
+
+If you want to use your own phone number, you can also use the dashboard or the [`/phone-numbers/import`](/api-reference/phone-numbers/import-twilio-number) endpoint. This will use your Twilio credentials to verify the number and configure it with Vapi services.
+
+
+
+  You can place an outbound call from one of your phone numbers using the
+  [`/call/phone`](/api-reference/calls/create-phone-call) endpoint (a request sketch appears at the end of this page). If the system message will be
+  different with every call, you can specify a temporary assistant in the `assistant` field.
If you + want to reuse an assistant, you can specify its ID in the `assistantId` field. + + + +You can provide an `assistantId` to a phone number and it will use that assistant when receiving inbound calls. + +You may want to specify the assistant based on the caller's phone number. If a phone number doesn't have an `assistantId`, Vapi will attempt to retrieve the assistant from your server using your [Server URL](/server-url#retrieving-assistants). + + + +Video Tutorial on How to Import Numbers from Twilio for International Calls: +
+
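+
+To make the outbound-call step above concrete, here is a minimal request sketch against the [`/call/phone`](/api-reference/calls/create-phone-call) endpoint. The exact body fields (`phoneNumberId`, `assistantId`, `customer.number`) and the placeholder IDs are assumptions for illustration; check the API reference for the authoritative schema:
+
+```typescript
+// Minimal sketch: start an outbound call with a saved assistant.
+// Field names are illustrative; confirm them against the API reference.
+const response = await fetch("https://api.vapi.ai/call/phone", {
+  method: "POST",
+  headers: {
+    Authorization: `Bearer ${process.env.VAPI_API_KEY}`, // your private API key
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    phoneNumberId: "YOUR_PHONE_NUMBER_ID", // the Vapi number placing the call
+    assistantId: "YOUR_ASSISTANT_ID", // reuse a saved assistant...
+    // ...or pass a transient `assistant: { ... }` object instead when the
+    // system message changes with every call.
+    customer: { number: "+15555550123" }, // the person being called (E.164 format)
+  }),
+});
+
+console.log(await response.json()); // the created call object
+```
+
+The same pattern applies to the [`/phone-numbers/buy`](/api-reference/phone-numbers/buy-phone-number) and [`/phone-numbers/import`](/api-reference/phone-numbers/import-twilio-number) endpoints if you prefer the API over the dashboard.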