diff --git a/README.md b/README.md index 5009d006..16187963 100644 --- a/README.md +++ b/README.md @@ -35,62 +35,3 @@ The PRs are reviewed every Thursday by our team. If you wish to suggest a new topic, want to request additional details, or raise any other issues, you can [Open an Issue](https://github.com/symblai/symbl-docs/issues/new). The resolution time for issues reported is the same as for PRs. - -## Setting up Docs locally - -1. Clone the Symbl Docs Repo - -- Using SSH: - -```js -$ git clone git@github.com:symblai/symbl-docs.git -``` - - OR - -- Using HTTPS: - -```js -$ git clone https://github.com/symblai/symbl-docs.git -``` - -2. Open your local directory where you cloned the docs - -```js -$ cd symbl-docs -``` - -3. Switch to develop branch to make changes - -```js -$ git checkout develop -``` -### Installation - -```js -$ yarn -``` - -### Local Development - -```js -$ yarn start -``` - -This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server. - -### Build - -```js -$ yarn build -``` - -This command generates static content into the `build` directory and can be served using any static content hosting service. - -### Deployment - -```js -$ GIT_USER= USE_SSH=true yarn deploy -``` - -If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch. diff --git a/docs/introduction.md b/docs/introduction.md index 11a6ca7f..1a476062 100644 --- a/docs/introduction.md +++ b/docs/introduction.md @@ -16,7 +16,7 @@ our APIs or SDKs, we've got you covered!

API Reference

Browse through our APIs, learn how they work, and get detailed descriptions and sample code for each endpoint.
-
-
- -## What is Symbl? ---- - -[Symbl](https://symbl.ai/) is an AI-powered, API first, Conversation Intelligence platform for natural human conversations that works on audio, video, and textual content in real-time or recorded files. Symbl’s APIs let you generate real-time Sentiment Analysis, Action Items, Topics, Trackers, Summary and much more in your applications. - -
Learn more ➑️  
-## Getting Started -### Step 1: Get Symbl API Credentials +## Getting Started with Symbl +### Step 1. Get API Credentials --- Sign up on the [Symbl Platform](https://platform.symbl.ai/#/login) and grab your API Credentials.
 Using the Symbl credentials, you can [generate the authentication token](/docs/developer-tools/authentication) that you can use every time you make Symbl API calls.
 
-### Step 2: Send Recorded Conversation OR Connect Live
+### Step 2. Send Recorded Conversation OR Connect Live
 ---
-Using the following APIs, send conversation data in real-time or after the conversation has taken place (async).
+Using the APIs given below, send conversation data in real-time or after the conversation has taken place (async), i.e., with recorded data. A minimal end-to-end sketch covering Steps 1 to 3 follows Step 3 below.
 
    πŸ‘‰   [Async APIs](/docs/async-api/introduction) allow you to send text, audio, or video conversations in recorded format.
    πŸ‘‰   [Streaming APIs](/docs/streamingapi/introduction) allow you to connect Symbl on a live call via WebSocket protocol.
    πŸ‘‰   [Telephony APIs](/docs/telephony/introduction) allow you to connect Symbl on a live audio conversation via SIP and PSTN.
- -Before getting the Conversation Intelligence, you must wait for the processing job to complete. - - -### Step 3: Get Conversation Intelligence +### Step 3. Get Conversation Intelligence --- -Step 2 returns a `conversationId` by default. Use this in the **Conversation API** to generate any of the following Conversation Intelligence: +In Step 2, the `conversationId` is returned. Use the Conversation ID in the **Conversation API** to generate Symbl's Conversation Intelligence and insights: -   πŸ‘‰   [Speech-to-Text (Transcripts)](/docs/concepts/speech-to-text)
-   πŸ‘‰   [Topics](/docs/concepts/topics)
-   πŸ‘‰   [Sentiment Analysis](/docs/concepts/sentiment-analysis)
-   πŸ‘‰   [Action Items](/docs/concepts/action-items)
-   πŸ‘‰   [Follow-Ups](/docs/concepts/follow-ups)
-   πŸ‘‰   [Questions](/docs/concepts/questions)
-   πŸ‘‰   [Trackers](/docs/concepts/trackers)
-   πŸ‘‰   [Conversation Groups](/docs/concepts/conversation-groups)
-   πŸ‘‰   [Conversation Analytics](/docs/concepts/conversational-analytics)
-   πŸ‘‰   [Topic Hierarchy](/docs/concepts/topic-hierarchy)
+   πŸ‘‰   [Get Speech-to-Text (Transcripts)](/docs/concepts/speech-to-text)
+   πŸ‘‰   [Get Topics](/docs/concepts/topics)
+   πŸ‘‰   [Get Sentiment Analysis](/docs/concepts/sentiment-analysis)
+   πŸ‘‰   [Get Action Items](/docs/concepts/action-items)
+   πŸ‘‰   [Get Follow-Ups](/docs/concepts/follow-ups)
+   πŸ‘‰   [Get Questions](/docs/concepts/questions)
+   πŸ‘‰   [Get Trackers](/docs/concepts/trackers)
+   πŸ‘‰   [Get Conversation Groups](/docs/concepts/conversation-groups)
+   πŸ‘‰   [Get Conversation Analytics](/docs/concepts/conversational-analytics)
+   πŸ‘‰   [Get Topic Hierarchy](/docs/concepts/topic-hierarchy)
 ... and more.
-Also check out our features in Labs such as Summarization, Comprehensive Action Items, Identifying and Redacting PII in the [Labs section](/docs/labs).
-
+Also, check out our features in the [Labs section](/docs/labs) that are currently being developed. Symbl Labs is an experimental wing that intends to apply and explore AI research on human conversations.
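+To tie the steps together, here is one minimal end-to-end sketch, assuming Node.js 18+ for the built-in `fetch`. The credentials and the audio URL are placeholders you must replace, and job polling is reduced to the bare minimum:
+
+```js
+// Step 1: generate an access token from your App ID and App Secret (placeholders).
+const APP_ID = 'YOUR_APP_ID';
+const APP_SECRET = 'YOUR_APP_SECRET';
+
+(async () => {
+  const {accessToken} = await (await fetch('https://api.symbl.ai/oauth2/token:generate', {
+    method: 'POST',
+    headers: {'Content-Type': 'application/json'},
+    body: JSON.stringify({type: 'application', appId: APP_ID, appSecret: APP_SECRET})
+  })).json();
+
+  // Step 2: send a recorded conversation, here an audio file by URL (Async API).
+  const {conversationId, jobId} = await (await fetch('https://api.symbl.ai/v1/process/audio/url', {
+    method: 'POST',
+    headers: {'Content-Type': 'application/json', Authorization: `Bearer ${accessToken}`},
+    body: JSON.stringify({url: 'https://example.com/your-recording.mp3', name: 'My Test Conversation'})
+  })).json();
+
+  // Wait for the processing job to complete before requesting intelligence.
+  // (A real application should also handle the 'failed' status.)
+  let status = 'in_progress';
+  while (status === 'scheduled' || status === 'in_progress') {
+    await new Promise((resolve) => setTimeout(resolve, 5000));
+    ({status} = await (await fetch(`https://api.symbl.ai/v1/job/${jobId}`, {
+      headers: {Authorization: `Bearer ${accessToken}`}
+    })).json());
+  }
+
+  // Step 3: use the conversationId with the Conversation API, for example to get Topics.
+  const {topics} = await (await fetch(`https://api.symbl.ai/v1/conversations/${conversationId}/topics`, {
+    headers: {Authorization: `Bearer ${accessToken}`}
+  })).json();
+  console.log(topics);
+})().catch(console.error);
+```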
diff --git a/docs/javascript-sdk/reference/reference.md b/docs/javascript-sdk/reference/reference.md
index e8d3c9cb..ab1bfcf7 100644
--- a/docs/javascript-sdk/reference/reference.md
+++ b/docs/javascript-sdk/reference/reference.md
@@ -239,6 +239,7 @@ Name | Description
 #### Code Example
 ```js
+const {sdk, SpeakerEvent} = require("@symblai/symbl-js");
 const speakerEvent = new SpeakerEvent();
 speakerEvent.type = SpeakerEvent.types.startedSpeaking;
 speakerEvent.user = {
diff --git a/docs/labs.md b/docs/labs.md
index 63de7d51..a76adf21 100644
--- a/docs/labs.md
+++ b/docs/labs.md
@@ -8,12 +8,12 @@ slug: /labs/
 ---
 Symbl Labs is our experimental wing designed to share our bleeding-edge AI research on human conversations with anyone who wants to explore its limits.
-You can access the Labs features using your Symbl App Id and Secret. If you don't already have it, sign up on the platform to get your credentials.
+You can access the Labs features using your Symbl App ID and Secret. If you don't already have them, [sign up](https://platform.symbl.ai/#/signup) on the Symbl Platform to get your credentials.
 Note that the base URL for all Symbl Labs features is always `https://api-labs.symbl.ai`.
 :::note
-The usage of data for Labs projects is stored for enhancing our research. We may continue to build, iterate, mutate or discontinue any of the below given features on the sole discretion of our team as deemed necessary.
+The data used for Labs projects is stored to enhance our research. We may continue to build, iterate, mutate, or discontinue any of the Labs features at the sole discretion of our team, as deemed necessary.
 :::
 For any queries or feedback, please contact us at labs@symbl.ai.
diff --git a/docs/sdk-intro.md b/docs/sdk-intro.md
index 36d1e5c8..3ed844db 100644
--- a/docs/sdk-intro.md
+++ b/docs/sdk-intro.md
@@ -8,6 +8,10 @@ slug: /sdk-intro/
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
+import JavaScriptLogo from '@site/static/img/javascript-logo.png';
+import PythonLogo from '@site/static/img/python-logo.png';
+import CsharpLogo from '@site/static/img/csharp-logo.png';
+import WebSdkLogo from '@site/static/img/websdk-logo.png';
 ---
@@ -16,16 +20,28 @@ Programmatically use Symbl APIs and integrate them with your web applications and
 Use Symbl's SDKs to directly add Symbl's capabilities to your web conferencing platforms. They are available in the popular programming languages given below:
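+For a quick first taste of the JavaScript SDK, here is a minimal initialization sketch. The credentials are placeholders, and `sdk.init` with these fields is the same call the Streaming API tutorial below uses:
+
+```js
+const {sdk} = require('@symblai/symbl-js');
+
+(async () => {
+  try {
+    // Initialize the SDK with your Symbl credentials (placeholders here)
+    await sdk.init({
+      appId: 'YOUR_APP_ID',
+      appSecret: 'YOUR_APP_SECRET',
+      basePath: 'https://api.symbl.ai'
+    });
+    console.log('SDK initialized.');
+  } catch (e) {
+    console.error(e);
+  }
+})();
+```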
diff --git a/docs/streamingapi/tutorials/get-real-time-data.md b/docs/streamingapi/tutorials/get-real-time-data.md
new file mode 100644
index 00000000..ad0fca23
--- /dev/null
+++ b/docs/streamingapi/tutorials/get-real-time-data.md
@@ -0,0 +1,709 @@
+---
+id: get-real-time-data
+title: Live speech-to-text and AI insights on a local server
+slug: /streamingapi/tutorials/get-real-time-data/
+---
+
+---
+
+In this guide, you will learn how to use Symbl's JavaScript SDK to enable your device's microphone for recording and processing audio. This example was built to run on Mac or Windows PCs. You will learn how to use Symbl's API for speech-to-text transcription and real-time AI insights, such as [follow-ups](/docs/concepts/follow-ups), [action items](/docs/concepts/action-items), [topics](/docs/concepts/topics) and [questions](/docs/conversation-api/questions).
+
+Throughout the guide you'll find various references to these variable names, which you will have to replace with your values:
+
+Key | Description
+---------- | -------
+```appId``` | The application ID you get from the [home page of the platform](https://platform.symbl.ai/).
+```appSecret``` | The application secret you get from the [home page of the platform](https://platform.symbl.ai/).
+```emailAddress``` | The email address you wish to send the summary email to. The summary email summarizes the conversation and any conversational insights gained from it.
+
+[View the full example on GitHub](https://github.com/symblai/receive-ai-insights-with-real-time-websockets)
+
+:::info Identification and Redaction of PII data
+Symbl allows you to identify and redact Personally Identifiable Information (PII) from messages and insights with Streaming APIs. Learn more on the [PII Identification and Redaction](/docs/concepts/redaction-pii) page.
+:::
+
+## Contents
+
+In this guide you will learn the following:
+
+* [Getting Started](#getting-started)
+* [Initialize SDK](#initialize-sdk)
+* [Real-time Request Configuration Options](#real-time-request-configuration-options)
+  * [Insight Types (insightTypes)](#insight-types-insighttypes)
+  * [Config (config)](#config-config)
+  * [Speaker (speaker)](#speaker-speaker)
+  * [Handlers (handlers)](#handlers-handlers)
+  * [Full Configuration Object](#full-configuration-object)
+* [Handle the audio stream](#handle-the-audio-stream)
+* [Process speech using the device's microphone](#process-speech-using-the-devices-microphone)
+* [Test](#test)
+* [Grabbing the Conversation ID](#grabbing-the-conversation-id)
+* [Full Code Sample](#full-code-sample)
+
+## Getting started
+
+To get this example running, you need to install the node packages `@symblai/symbl-js`, `uuid` and `mic`. You can do that with `npm install @symblai/symbl-js`, `npm install uuid` and `npm install mic`. We're using `mic` to simply get audio from the microphone and pass it on to the WebSocket connection.
+
+`mic` also requires you to install `sox`. To install `sox`, choose the option that fits your operating system:
+
+**Mac**: `brew install sox`
+**Windows and Linux**: [Installation of SoX on different Platforms](https://at.projects.genivi.org/wiki/display/PROJ/Installation+of+SoX+on+different+Platforms)
+
+First, require the SDK:
+
+```javascript
+const {sdk} = require('@symblai/symbl-js');
+```
+
+Here is a simple setup for `mic`. You can view the full configuration options for `mic` [here](https://github.com/ashishbajaj99/mic):
+
+```javascript
+const mic = require('mic');
+const sampleRateHertz = 16000;
+const micInstance = mic({
+  rate: sampleRateHertz,
+  channels: '1',
+  debug: false,
+  exitOnSilence: 6
+});
+```
+
+## Initialize SDK
+
+You can get the `appId` and `appSecret` values from the [Symbl Platform](https://platform.symbl.ai).
+
+```javascript
+(async () => {
+  try {
+    await sdk.init({
+      appId: appId,
+      appSecret: appSecret,
+      basePath: 'https://api.symbl.ai'
+    })
+  } catch (e) {
+    // Surface initialization errors instead of silently swallowing them
+    console.error(e)
+  }
+})()
+```
+
+You will also need a unique ID to associate with your Symbl request. You will create this ID using the `uuid` package:
+
+```js
+const uuid = require('uuid').v4;
+const id = uuid();
+```
+
+## Real-time Request Configuration Options
+
+Now you can start the connection using `sdk.startRealtimeRequest`. You will need to create a configuration object for the connection:
+
+```js
+const connection = await sdk.startRealtimeRequest(configurationObject);
+```
+
+Here is the breakdown of the configuration options:
+
+### Insight Types (`insightTypes`)
+
+* `insightTypes` - This array represents the types of insights to be detected. Today the supported types are `action_item` and `question`.
+
+```js
+{
+  insightTypes: ['action_item', 'question']
+}
+```
+
+#### Action Item (`action_item`)
+
+An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to act in the future. Action items will be returned via the `onInsightResponse` callback.
+
+These actions can be definitive and owned, with a commitment to working on a presentation, sharing a file, completing a task, etc. Or they can be non-definitive, like an idea, suggestion or opinion that could be worked upon.
+
+All action items are generated with action phrases, assignees and due dates so that you can build workflow automation with your tools.
+
+##### Action Item JSON Response Example
+
+This is an example of an `action_item` returned via the `onInsightResponse` callback function.
+
+```js
+[{
+  "id": "94020eb9-b688-4d56-945c-a7e5282258cc",
+  "confidence": 0.9909798145016999,
+  "messageReference": {
+    "id": "94020eb9-b688-4d56-945c-a7e5282258cc"
+  },
+  "hints": [{
+    "key": "informationScore",
+    "value": "0.9782608695652174"
+  }, {
+    "key": "confidenceScore",
+    "value": "0.9999962500210938"
+  }, {
+    "key": "comprehensionScore",
+    "value": "0.9983848333358765"
+  }],
+  "type": "action_item",
+  "assignee": {
+    "id": "e2c5acf8-b9ed-421a-b3b3-02a5ae9796a0",
+    "name": "John Doe",
+    "userId": "emailAddress"
+  },
+  "dueBy": {
+    "value": "2021-02-05T00:00:00-07:00"
+  },
+  "tags": [{
+    "type": "date",
+    "text": "today",
+    "beginOffset": 39,
+    "value": {
+      "value": {
+        "datetime": "2021-02-05"
+      }
+    }
+  }, {
+    "type": "person",
+    "text": "John Doe",
+    "beginOffset": 8,
+    "value": {
+      "value": {
+        "name": "John Doe",
+        "id": "e2c5acf8-b9ed-421a-b3b3-02a5ae9796a0",
+        "assignee": true,
+        "userId": "emailAddress"
+      }
+    }
+  }],
+  "dismissed": false,
+  "payload": {
+    "content": "Perhaps John Doe can submit the report today.",
+    "contentType": "text/plain"
+  },
+  "from": {
+    "id": "e2c5acf8-b9ed-421a-b3b3-02a5ae9796a0",
+    "name": "John Doe",
+    "userId": "emailAddress"
+  }
+}]
+```
+
+#### Question (`question`)
+
+The API will find explicit questions or requests for information that come up during the conversation. Questions will be returned via the `onInsightResponse` callback.
+
+##### Question JSON Response Example
+
+This is an example of a `question` returned via the `onInsightResponse` callback function.
+
+```js
+[
+  {
+    "id": "5a1fc496-bdda-4496-93cc-ef9714a63b1b",
+    "confidence": 0.9677371919681392,
+    "messageReference": {
+      "id": "541b6de9-1d0d-40af-a506-54fdf52b996d"
+    },
+    "hints": [
+      {
+        "key": "confidenceScore",
+        "value": "0.9998153329948111"
+      },
+      {
+        "key": "comprehensionScore",
+        "value": "0.9356590509414673"
+      }
+    ],
+    "type": "question",
+    "assignee": {
+      "id": "7a717fc4-f292-4f26-88d3-ed63440e1f91",
+      "name": "John Doe",
+      "userId": "EMAIL_ADDRESS"
+    },
+    "tags": [],
+    "dismissed": false,
+    "payload": {
+      "content": "How much will all of this cost?",
+      "contentType": "text/plain"
+    },
+    "from": {
+      "id": "7a717fc4-f292-4f26-88d3-ed63440e1f91",
+      "name": "John Doe",
+      "userId": "EMAIL_ADDRESS"
+    }
+  }
+]
+```
+
+### Config (`config`)
+
+```js
+config: {
+  meetingTitle: 'My Test Meeting',
+  confidenceThreshold: 0.7,
+  timezoneOffset: 480, // Offset in minutes from UTC
+  languageCode: 'en-US',
+  sampleRateHertz
+},
+```
+
+* `config`: This configuration object encapsulates the properties which directly relate to the conversation generated by the audio being passed.
+
+  * `meetingTitle`: This optional parameter specifies the name of the conversation generated. You can get more info on conversations [here](/docs/conversation-api/conversation-data).
+
+  * `confidenceThreshold`: This optional parameter specifies the confidence threshold for detecting insights. Only insights with a `confidenceScore` greater than this value will be returned.
+
+  * `timezoneOffset`: This specifies the timezone offset used for detecting time/date-related entities.
+
+  * `languageCode`: This specifies the language used for transcribing the audio, in BCP-47 format. (It needs to be the same as the language in which the audio is spoken.)
+
+  * `sampleRateHertz`: This specifies the sample rate of this audio stream.
+
+### Speaker (`speaker`)
+
+```js
+speaker: {
+  // Optional, if not specified, will simply not send an email in the end.
+  userId: 'emailAddress', // Update with a valid email
+  name: 'My name'
+},
+```
+
+`speaker`: Optionally specify the details of the speaker whose data is being passed in the stream. This enables an email with the Summary UI URL to be sent after the end of the stream.
+
+### Handlers (`handlers`)
+
+```js
+handlers: {
+  /**
+   * This will return live speech-to-text transcription of the call.
+   */
+  onSpeechDetected: (data) => {
+    if (data) {
+      const {punctuated} = data
+      console.log('Live: ', punctuated && punctuated.transcript)
+      console.log('');
+    }
+    console.log('onSpeechDetected ', JSON.stringify(data, null, 2));
+  },
+  /**
+   * When processed messages are available, this callback will be called.
+   */
+  onMessageResponse: (data) => {
+    console.log('onMessageResponse', JSON.stringify(data, null, 2))
+  },
+  /**
+   * When Symbl detects an insight, this callback will be called.
+   */
+  onInsightResponse: (data) => {
+    console.log('onInsightResponse', JSON.stringify(data, null, 2))
+  },
+  /**
+   * When Symbl detects a topic, this callback will be called.
+   */
+  onTopicResponse: (data) => {
+    console.log('onTopicResponse', JSON.stringify(data, null, 2))
+  }
+}
+```
+
+* `handlers`: This object has the callback functions for the different events:
+
+  * `onSpeechDetected`: To retrieve the real-time transcription results as soon as they are detected. You can use this callback to render live transcription specific to the speaker of this audio stream.
+
+    #### onSpeechDetected JSON Response Example
+
+    ```js
+    {
+      "type": "recognition_result",
+      "isFinal": true,
+      "payload": {
+        "raw": {
+          "alternatives": [{
+            "words": [{
+              "word": "Hello",
+              "startTime": {
+                "seconds": "3",
+                "nanos": "800000000"
+              },
+              "endTime": {
+                "seconds": "4",
+                "nanos": "200000000"
+              }
+            }, {
+              "word": "world.",
+              "startTime": {
+                "seconds": "4",
+                "nanos": "200000000"
+              },
+              "endTime": {
+                "seconds": "4",
+                "nanos": "800000000"
+              }
+            }],
+            "transcript": "Hello world.",
+            "confidence": 0.9128385782241821
+          }]
+        }
+      },
+      "punctuated": {
+        "transcript": "Hello world."
+      },
+      "user": {
+        "userId": "emailAddress",
+        "name": "John Doe",
+        "id": "23681108-355b-4fc3-9d94-ed47dd39fa56"
+      }
+    }
+    ```
+
+  * `onMessageResponse`: This callback function contains the "finalized" transcription data for this speaker; if used with multiple streams with other speakers, it would also provide their messages. A "finalized" message means that the automatic speech recognition has finalized the state of this part of the transcription and declared it "final". Therefore, this transcription will be more accurate than `onSpeechDetected`.
+ + #### onMessageResponse JSON Response Example + + ```js + [{ + "from": { + "id": "0a7a36b1-047d-4d8c-8958-910317ed9edc", + "name": "John Doe", + "userId": "emailAddress" + }, + "payload": { + "content": "Hello world.", + "contentType": "text/plain" + }, + "id": "59c224c2-54c5-4762-9582-961bf250b478", + "channel": { + "id": "realtime-api" + }, + "metadata": { + "disablePunctuation": true, + "timezoneOffset": 480, + "originalContent": "Hello world.", + "words": "[{\"word\":\"Hello\",\"startTime\":\"2021-02-04T20:34:59.029Z\",\"endTime\":\"2021-02-04T20:34:59.429Z\"},{\"word\":\"world.\",\"startTime\":\"2021-02-04T20:34:59.429Z\",\"endTime\":\"2021-02-04T20:35:00.029Z\"}]", + "originalMessageId": "59c224c2-54c5-4762-9582-961bf250b478" + }, + "dismissed": false, + "duration": { + "startTime": "2021-02-04T20:34:59.029Z", + "endTime": "2021-02-04T20:35:00.029Z" + } + }] + ``` + + * `onInsightResponse`: This callback provides you with any of the detected insights in real-time as they are detected. As with the `onMessageResponse` this would also return every speaker's insights in case of multiple streams. + + **View the examples for `onInsightResponse` [here](#insight-types-insighttypes).** + + * `onTrackerResponse`: This callback provides you with any of the detected trackers in real-time as they are detected. As with the `onMessageResponse` this would also return every tracker in case of multiple streams. + + #### onTrackerResponse JSON Response Example + + ```json + [ + { + "id": "4527907378937856", + "name": "My Awesome Tracker", + "matches": [ + { + "messageRefs": [ + { + "id": "4670860273123328", + "text": "Wearing mask is a good safety measure.", + "offset": -1 + } + ], + "type": "vocabulary", + "value": "wear mask", + "insightRefs": [] + } + ] + } + ] + ``` + + * `onTopicResponse`: This callback provides you with any of the detected topics in real-time as they are detected. As with the `onMessageResponse` this would also return every topic in case of multiple streams. + + #### onTopicResponse JSON Response Example + + ```json + [{ + "id": "e69a5556-6729-11eb-ab14-2aee2deabb1b", + "messageReferences": [{ + "id": "0df44422-0248-47e9-8814-e87f63404f2c", + "relation": "text instance" + }], + "phrases": "auto insurance", + "rootWords": [{ + "text": "auto" + }], + "score": 0.9, + "type": "topic" + }] + ``` + +### Full Configuration Object + +```js +const connection = await sdk.startRealtimeRequest({ + id, + insightTypes: ['action_item', 'question'], + config: { + meetingTitle: 'My Test Meeting', + confidenceThreshold: 0.7, + timezoneOffset: 480, // Offset in minutes from UTC + languageCode: 'en-US', + sampleRateHertz + }, + speaker: { + // Optional, if not specified, will simply not send an email in the end. + userId: 'emailAddress', // Update with valid email + name: 'My name' + }, + handlers: { + /** + * This will return live speech-to-text transcription of the call. + */ + onSpeechDetected: (data) => { + if (data) { + const {punctuated} = data + console.log('Live: ', punctuated && punctuated.transcript) + console.log(''); + } + console.log('onSpeechDetected ', JSON.stringify(data, null, 2)); + }, + /** + * When processed messages are available, this callback will be called. + */ + onMessageResponse: (data) => { + console.log('onMessageResponse', JSON.stringify(data, null, 2)) + }, + /** + * When Symbl detects an insight, this callback will be called. 
+     */
+    onInsightResponse: (data) => {
+      console.log('onInsightResponse', JSON.stringify(data, null, 2))
+    },
+    /**
+     * When Symbl detects a topic, this callback will be called.
+     */
+    onTopicResponse: (data) => {
+      console.log('onTopicResponse', JSON.stringify(data, null, 2))
+    },
+    /**
+     * When trackers are detected, this callback will be called.
+     */
+    onTrackerResponse: (data) => {
+      console.log('onTrackerResponse', JSON.stringify(data, null, 2))
+    },
+  }
+});
+```
+
+## Handle the audio stream
+
+The connection to the WebSocket should now be established. Next, you must create several handlers for the audio stream. You can view all the valid handlers [here](https://github.com/ashishbajaj99/mic):
+
+```js
+const micInputStream = micInstance.getAudioStream()
+/** Raw audio stream */
+micInputStream.on('data', (data) => {
+  // Push audio from Microphone to websocket connection
+  connection.sendAudio(data)
+})
+
+micInputStream.on('error', function (err) {
+  console.log('Error in Input Stream: ' + err)
+})
+
+micInputStream.on('startComplete', function () {
+  console.log('Started listening to Microphone.')
+})
+
+micInputStream.on('silence', function () {
+  console.log('Got SIGNAL silence')
+})
+```
+
+## Process speech using the device's microphone
+
+Now you start the recording:
+
+```js
+micInstance.start()
+```
+
+Your microphone should now be open to input, which will be sent to the WebSocket for processing. The microphone will continue to accept input until the application is stopped or until you tell the connection to stop:
+
+```js
+/**
+ * Stop the connection after 1 minute (60 seconds).
+ */
+setTimeout(async () => {
+  // Stop listening to microphone
+  micInstance.stop()
+  console.log('Stopped listening to Microphone.')
+  try {
+    // Stop connection
+    await connection.stop()
+    console.log('Connection Stopped.')
+  } catch (e) {
+    console.error('Error while stopping the connection.', e)
+  }
+}, 60 * 1000)
+```
+
+## Test
+
+To verify that the code is working, run it:
+
+```bash
+$ node index.js
+```
+
+## Grabbing the Conversation ID
+
+The Conversation ID is very useful for our other APIs, such as the [Conversation API](/docs/conversation-api/introduction). We don't use it in this example because it's mainly used for non-real-time data gathering, but it's good to know how to grab it: you can use the Conversation ID later to extract the conversation insights again.
+
+```js
+const conversationId = connection.conversationId
+```
+
+With the Conversation ID you can do each of the following (and more!):
+
+**[View conversation topics](/docs/conversation-api/get-topics)**
+Summary topics provide a quick overview of the key things that were talked about in the conversation. + +**[View action items](/docs/conversation-api/action-items)**
+An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to take a specific action, e.g. set up a meeting, share a file, complete a task, etc. + +**[View follow-ups](/docs/conversation-api/follow-ups)**
+This is a category of action items with a connotation to follow-up a request or a task like sending an email or making a phone call or booking an appointment or setting up a meeting. + +## Full Code Sample + +Here's the full sample below which you can also [view on Github](https://github.com/symblai/receive-ai-insights-with-real-time-websockets): + +```js +const {sdk} = require('@symblai/symbl-js') +const uuid = require('uuid').v4 + +// For demo purposes, we're using mic to simply get audio from the microphone and pass it on to the WebSocket connection +const mic = require('mic') + +const sampleRateHertz = 16000 + +const micInstance = mic({ + rate: sampleRateHertz, + channels: '1', + debug: false, + exitOnSilence: 6, +}); + +(async () => { + try { + // Initialize the SDK + await sdk.init({ + appId: appId, + appSecret: appSecret, + basePath: 'https://api.symbl.ai', + }) + + // Need unique Id + const id = uuid() + + // Start Real-time Request (Uses Real-time WebSocket API behind the scenes) + const connection = await sdk.startRealtimeRequest({ + id, + insightTypes: ['action_item', 'question'], + config: { + meetingTitle: 'My Test Meeting', + confidenceThreshold: 0.7, + timezoneOffset: 480, // Offset in minutes from UTC + languageCode: 'en-US', + sampleRateHertz + }, + speaker: { + // Optional, if not specified, will simply not send an email in the end. + userId: 'emailAddress', // Update with valid email + name: 'My name' + }, + handlers: { + /** + * This will return live speech-to-text transcription of the call. + */ + onSpeechDetected: (data) => { + if (data) { + const {punctuated} = data + console.log('Live: ', punctuated && punctuated.transcript) + console.log(''); + } + console.log('onSpeechDetected ', JSON.stringify(data, null, 2)); + }, + /** + * When processed messages are available, this callback will be called. + */ + onMessageResponse: (data) => { + console.log('onMessageResponse', JSON.stringify(data, null, 2)) + }, + /** + * When Symbl detects an insight, this callback will be called. + */ + onInsightResponse: (data) => { + console.log('onInsightResponse', JSON.stringify(data, null, 2)) + }, + /** + * When Symbl detects a topic, this callback will be called. + */ + onTopicResponse: (data) => { + console.log('onTopicResponse', JSON.stringify(data, null, 2)) + } + } + }); + console.log('Successfully connected. Conversation ID: ', connection.conversationId); + + const micInputStream = micInstance.getAudioStream() + /** Raw audio stream */ + micInputStream.on('data', (data) => { + // Push audio from Microphone to websocket connection + connection.sendAudio(data) + }) + + micInputStream.on('error', function (err) { + console.log('Error in Input Stream: ' + err) + }) + + micInputStream.on('startComplete', function () { + console.log('Started listening to Microphone.') + }) + + micInputStream.on('silence', function () { + console.log('Got SIGNAL silence') + }) + + micInstance.start() + + setTimeout(async () => { + // Stop listening to microphone + micInstance.stop() + console.log('Stopped listening to Microphone.') + try { + // Stop connection + await connection.stop() + console.log('Connection Stopped.') + } catch (e) { + console.error('Error while stopping the connection.', e) + } + }, 60 * 1000) // Stop connection after 1 minute i.e. 
60 secs + } catch (e) { + console.error('Error: ', e) + } +})(); +``` diff --git a/docusaurus-staging.config.js b/docusaurus-staging.config.js index c6481314..e8659ad0 100644 --- a/docusaurus-staging.config.js +++ b/docusaurus-staging.config.js @@ -259,30 +259,35 @@ module.exports = { label: "API Reference", to: '/api-reference/getting-started/', position: "left", + activeBaseRegex: "docs/(api-reference|async-api/(overview|introduction|reference)|streamingapi/introduction|streaming-api/api-reference|subscribe-api|telephony/introduction|telephony-api|conversation-api/api-reference|management-api|developer-tools/(authentication|error|postman|sample-apps))", }, { label: "SDKs", - href: '/sdk-intro/', + to: '/sdk-intro/', position: "left", + activeBaseRegex: "docs/(javascript-sdk|python-sdk|sdk-intro)" }, { label: "Tutorials", - href: '/tutorials/', + to: '/tutorials/', position: "left", + activeBaseRegex: "docs/(tutorials|streamingapi/(code-snippets|tutorials)|async-api/(code-snippets|tutorials)|telephony/(code-snippets|tutorials)|best-practices/best-practices-trackers)|pre-built-ui/(tuning-summary-page|custom-domain|user-engagement-analytics|supported-tracking-events)|concepts/(websockets|pstn-and-sip)" }, { label: "Integrations", - href: '/integrations/integrations-intro/', + to: '/integrations/integrations-intro/', position: "left", + activeBaseRegex: 'docs/integrations' }, { label: "Labs", - href: '/labs/', + to: '/labs/', position: "left", + activeBaseRegex: "docs/(labs|conversation-api/comprehensive-action-items|concepts/redaction-pii|guides/abstract-topics-labs)" }, { label: "Support", - href: '/support/', + to: '/support/', position: "left", }, { @@ -292,13 +297,13 @@ module.exports = { }, { label: 'Free Sign Up', - to: 'https://platform.symbl.ai/#/signup', + href: 'https://platform.symbl.ai/#/signup', position: "right", }, { label: "πŸ†•Changelog", ImageData: "/img/tick-mark.png", - href: '/changelog', + to: '/changelog/', position: "right", }, ], diff --git a/docusaurus.config.js b/docusaurus.config.js index cd61c494..53b9b438 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -259,30 +259,35 @@ module.exports = { label: "API Reference", to: '/api-reference/getting-started/', position: "left", + activeBaseRegex: "docs/(api-reference|async-api/(overview|introduction|reference)|streamingapi/introduction|streaming-api/api-reference|subscribe-api|telephony/introduction|telephony-api|conversation-api/api-reference|management-api|developer-tools/(authentication|error|postman|sample-apps))", }, { label: "SDKs", - href: '/sdk-intro/', + to: '/sdk-intro/', position: "left", + activeBaseRegex: "docs/(javascript-sdk|python-sdk|sdk-intro)" }, { label: "Tutorials", - href: '/tutorials/', + to: '/tutorials/', position: "left", + activeBaseRegex: "docs/(tutorials|streamingapi/(code-snippets|tutorials)|async-api/(code-snippets|tutorials)|telephony/(code-snippets|tutorials)|best-practices/best-practices-trackers)|pre-built-ui/(tuning-summary-page|custom-domain|user-engagement-analytics|supported-tracking-events)|concepts/(websockets|pstn-and-sip)" }, { label: "Integrations", - href: '/integrations/integrations-intro/', + to: '/integrations/integrations-intro/', position: "left", + activeBaseRegex: 'docs/integrations' }, { label: "Labs", - href: '/labs/', + to: '/labs/', position: "left", + activeBaseRegex: "docs/(labs|conversation-api/comprehensive-action-items|concepts/redaction-pii|guides/abstract-topics-labs)" }, { label: "Support", - href: '/support/', + to: '/support/', position: 
"left", }, { @@ -292,13 +297,13 @@ module.exports = { }, { label: 'Free Sign Up', - to: 'https://platform.symbl.ai/#/signup', + href: 'https://platform.symbl.ai/#/signup', position: "right", }, { label: "πŸ†• Changelog", ImageData: "/img/tick-mark.png", - href: '/changelog', + to: '/changelog/', position: "right", }, ], diff --git a/sidebars.js b/sidebars.js index 2fcd42ab..71399776 100644 --- a/sidebars.js +++ b/sidebars.js @@ -400,7 +400,7 @@ id: 'developer-tools/postman', items: [ 'streamingapi/code-snippets/start-and-stop-streaming-api-connection', 'streamingapi/tutorials/get-real-time-transcription', - 'javascript-sdk/tutorials/push-audio-get-real-time-data', + 'streamingapi/tutorials/get-real-time-data', 'streamingapi/tutorials/get-real-time-sentiment-analysis', 'streamingapi/code-snippets/detect-key-phrases', 'streamingapi/code-snippets/receive-live-captioning', @@ -512,11 +512,17 @@ id: 'developer-tools/postman', "telephony/concepts/concepts", ], }, -// this is a duplicate as the code is buggy +// this is a duplicate code because the code is buggy { -type: "doc", -id: "faq" + label: 'Concepts', + type: 'category', + collapsed: true, + items: [ + "streamingapi/concepts", + "telephony/concepts/concepts", + ], }, + ], // Integrations Tab @@ -658,10 +664,10 @@ SDKsidebar: [{ }, ] }, - { - type: 'doc', - id: 'developer-tools/postman', - }, + { + type: 'doc', + id: 'python-sdk/python-sdk-reference', // added because of buggy code + }, ], diff --git a/src/css/custom.css b/src/css/custom.css index c5ae6524..822a557f 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -75,6 +75,7 @@ html[data-theme='dark'] .menu>ul>li>a.menu__link.menu__link--sublist { background-size: 23px; } + /* .menu .menu__link--sublist:after { background-size: 1.5rem 1.5rem; @@ -435,4 +436,56 @@ th:empty { .button2 {background-color: #c1e7f3;} /* Blue */ .button3 {background-color: #d0d8e2;} /* Blue */ .button4 {background-color: #acc7f0;} /* Blue */ - \ No newline at end of file + + .sdk-card-container { + display:grid; + grid-template-columns: repeat(auto-fill, 400px); + } + + .sdk-card { + width:360px; + margin: 1.25rem; + padding: 1.5rem; + display:flex; + align-items: center; + flex-direction: row; + text-decoration: none; + font-size: var(--font-size-normal); + color:black; + border-radius: 10px; +} + +.sdk-card1-bg { + background-color: #e0ebe0; +} + +.sdk-card2-bg { + background-color: #c1e7f3; +} + +.sdk-card3-bg { + background-color: #acc7f0; +} + +.sdk-card4-bg { + background-color: #d0d8e2; +} + +.sdk-card:hover { + text-decoration: none; + color:black; + box-shadow: 0px 5px 40px rgba(0, 0, 0, 0.1); +} + +html[data-theme='dark'] .sdk-card:hover { + box-shadow: 0px 5px 40px rgba(11, 69, 146, 0.1); +} + +.sdk-card-logo { + height: 50px; +} + +.sdk-card-header{ + margin-left:25px; + margin-bottom: 0px; +} \ No newline at end of file diff --git a/static/img/csharp-logo.png b/static/img/csharp-logo.png new file mode 100644 index 00000000..760906b8 Binary files /dev/null and b/static/img/csharp-logo.png differ diff --git a/static/img/javascript-logo.png b/static/img/javascript-logo.png new file mode 100644 index 00000000..4637ac92 Binary files /dev/null and b/static/img/javascript-logo.png differ diff --git a/static/img/python-logo.png b/static/img/python-logo.png new file mode 100644 index 00000000..49ea8f5b Binary files /dev/null and b/static/img/python-logo.png differ diff --git a/static/img/websdk-logo.png b/static/img/websdk-logo.png new file mode 100644 index 00000000..47cb8df0 Binary files 
/dev/null and b/static/img/websdk-logo.png differ