diff --git a/docs/async-api/overview/audio/post-audio-url.md b/docs/async-api/overview/audio/post-audio-url.md index f189e38f..1a16e5a7 100644 --- a/docs/async-api/overview/audio/post-audio-url.md +++ b/docs/async-api/overview/audio/post-audio-url.md @@ -298,25 +298,25 @@ Header Name | Required | Description ### Request Body Parameters -Parameters | Required | Type | Description ----------- | ------- | ------- | ------- -```url``` | Mandatory | String | A valid url string. The URL must be a publicly accessible url. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. It's a boolean value where the default value is `false`. -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be after the API call is made. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. - ```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. - ``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```enableSeparateRecognitionPerChannel``` | Optional | Boolean | Enables Speaker Separated Channel audio processing. Accepts `true` or `false`. -```channelMetadata``` | Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specific which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below. -```trackers``` BETA | No | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```url``` | String, mandatory

A valid url string. The URL must be a publicly accessible url.

Example: `"url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav"` +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional
 
Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```webhookUrl``` | String, optional

Webhook URL to which job updates are sent after the API request is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.
 
Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).
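To illustrate, a request body that passes one custom entity might look like the sketch below; the entity values mirror the inline example that follows and are purely illustrative.

```json
{
  "url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav",
  "entities": [
    {
      "customType": "Company Executives",
      "value": "Marketing director",
      "text": "Marketing director"
    }
  ]
}
```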

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.
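As a sketch of how these fit together, a request body enabling channel-separated recognition for a two-channel file might look like this (the speaker names and emails are illustrative):

```json
{
  "url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav",
  "enableSeparateRecognitionPerChannel": true,
  "channelMetadata": [
    {
      "channel": 1,
      "speaker": { "name": "Robert Bartheon", "email": "robertbartheon@example.com" }
    },
    {
      "channel": 2,
      "speaker": { "name": "Natalia", "email": "natalia@example.com" }
    }
  ]
}
```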

Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS"` #### Channel Metadata @@ -346,17 +346,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Yes | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Yes | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": "name": "Robert Bartheon", "email": "robertbartheon@example.com"` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | No | String | Name of the speaker. -```email``` | No | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Response @@ -369,9 +369,9 @@ Field | Required | Type | Description Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. - +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` + ### Speaker Separation --- @@ -413,8 +413,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]) +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error --- diff --git a/docs/async-api/overview/audio/post-audio.md b/docs/async-api/overview/audio/post-audio.md index 85f43718..5695b830 100644 --- a/docs/async-api/overview/audio/post-audio.md +++ b/docs/async-api/overview/audio/post-audio.md @@ -206,24 +206,25 @@ Header Name | Required | Value ### Query Parameters -Parameters | Required | Type | Description ----------- | ------- | ------- | -------- -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be after the API call is made. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). See sample request under [Custom Entity](#custom-entity) section. -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. -```detectPhrases```| Optional | Boolean | Accepted values are `true` & `false`. It shows Actionable Phrases in each sentence of conversation. These sentences can be found in the Conversation's Messages API. -```enableSeparateRecognitionPerChannel``` | Optional | boolean | Enables Speaker Separated Channel audio processing. Accepts `true` or `false`. - ```channelMetadata``` | Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specific which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` query param is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below. - ```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. -``` mode``` | Optional | String | Accepts `phone` or `default`.`phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```trackers``` BETA | Optional | List | A `tracker` entity containing name and vocabulary (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional
 
Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```webhookUrl``` | String, optional

Webhook URL to which job updates are sent after the API request is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.
 
Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.
 
Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS"` #### Custom Entity ```json @@ -271,17 +272,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Yes | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Yes | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": "name": "Robert Bartheon", "email": "robertbartheon@example.com"` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | No | String | Name of the speaker. -```email``` | No | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Response @@ -294,8 +295,8 @@ Field | Required | Type | Description Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` ### Speaker Separation --- @@ -338,8 +339,9 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]) +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` + ### API Limit Error --- diff --git a/docs/async-api/overview/audio/put-audio-url.md b/docs/async-api/overview/audio/put-audio-url.md index c3b88f22..a2b4d5a2 100644 --- a/docs/async-api/overview/audio/put-audio-url.md +++ b/docs/async-api/overview/audio/put-audio-url.md @@ -311,25 +311,25 @@ Parameter | Description ``` ### Request Body Parameters -Parameters | Required | Type | Description ----------- | ------- | ------- | ----- -```url``` | Mandatory | String | A valid url string. The URL must be a publicly accessible url. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. It's a boolean value where the default value is `false`. -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be post making the API call. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```customEntities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. - ```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. -``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```enableSeparateRecognitionPerChannel``` | Optional | Boolean | Enables Speaker Separated Channel audio processing. Accepts `true` or `false`. -```channelMetadata``` | Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specific which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below. -```trackers``` BETA | Optional | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```url``` | String, mandatory

A valid url string. The URL must be a publicly accessible url.

Example: `"url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav"` +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional
 
Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```webhookUrl``` | String, optional

Webhook URL to which job updates are sent after the API request is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.
 
Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.
 
Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.
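For instance, a request body that defines a single tracker might look like the following sketch; the tracker name and vocabulary mirror the inline example below and are illustrative.

```json
{
  "url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav",
  "trackers": [
    {
      "name": "Promotion Mention",
      "vocabulary": [
        "We have a special promotion going on if you book this before"
      ]
    }
  ]
}
```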

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS"` #### Channel Metadata @@ -359,17 +359,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Mandatory | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Mandatory | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": "name": "Robert Bartheon", "email": "robertbartheon@example.com"` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | Optional | String | Name of the speaker. -```email``` | Optional | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Response @@ -381,8 +381,8 @@ Field | Required | Type | Description ``` Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` ### Speaker Separation --- @@ -425,8 +425,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]). +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error diff --git a/docs/async-api/overview/audio/put-audio.md b/docs/async-api/overview/audio/put-audio.md index b99dbdb7..ed98457f 100644 --- a/docs/async-api/overview/audio/put-audio.md +++ b/docs/async-api/overview/audio/put-audio.md @@ -210,24 +210,25 @@ Parameter | Description ### Query Parameters -Parameters | Required | Type | Description ----------- | ------- | ------- | ------ -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook url on which job updates to be sent. This should be post making the API call. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. - ```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. - ```detectPhrases```| Optional | Boolean | Accepted values are `true` & `false`. It shows Actionable Phrases in each sentence of conversation. These sentences can be found in the Conversation's Messages API. - ```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). See sample request under [Custom Entity](#custom-entity) section below. - ```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. - ```enableSeparateRecognitionPerChannel``` | Optional | Boolean | Enables Speaker Separated Channel audio processing. Accepts `true` or `false`. - ```channelMetadata``` | Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specific which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` query param is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below. - ```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. - ``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. - ```trackers``` BETA | Optional | List | A `tracker` entity containing the `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. - ```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). - ```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional
 
Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```webhookUrl``` | String, optional

Webhook URL to which job updates are sent after the API request is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.
 
Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.
 
Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS"` #### Custom Entity @@ -276,17 +277,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Yes | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Yes | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": "name": "Robert Bartheon", "email": "robertbartheon@example.com"` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | No | String | Name of the speaker. -```email``` | No | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Response @@ -299,8 +300,8 @@ Field | Required | Type | Description Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` ### Speaker Separation --- @@ -343,8 +344,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]) +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error --- diff --git a/docs/async-api/overview/text/post-text.md b/docs/async-api/overview/text/post-text.md index e64403ab..82255778 100644 --- a/docs/async-api/overview/text/post-text.md +++ b/docs/async-api/overview/text/post-text.md @@ -295,25 +295,26 @@ Header Name | Required | Description ``` ### Request Body Parameters -Field | Required | Type | Description ----------- | ------- | ------- | ------- | -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```detectPhrases```| Optional | Boolean | It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```entities``` | Optional | List | Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities). -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. -```messages``` | Mandatory | list | Input Messages to look for insights. [See the messages section below for more details.](#messages) -```trackers``` BETA | Optional | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be after making the API request. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more. +Parameter | Description +---------- | ------- | +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```confidenceThreshold``` | Double, optional

Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` +```messages``` | List, mandatory

Input Messages to look for insights. [See the messages section below for more details.](#messages)
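As a sketch, a minimal request body with a single message could look like this; the content, speaker, and timestamps are illustrative values reused from the field examples further below.

```json
{
  "messages": [
    {
      "payload": { "content": "Hi Mike, Natalia here." },
      "from": { "userId": "natalia@example.com", "name": "Natalia" },
      "duration": {
        "startTime": "2020-07-21T16:02:19.01Z",
        "endTime": "2020-07-21T16:04:19.99Z"
      }
    }
  ]
}
```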

Example: `"messages": [{"payload": {"content": "Hi Mike, Natalia here..."}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```webhookUrl``` | String, optional

Webhook URL to which job updates are sent after the API request is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.

Example: `"""jobId"": ""9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"", ""status"": ""in_progress"""` + #### messages -Field | Required | Type | Description ----------- | ------- | ------- | ------- -```payload``` | Yes | Object | Input Messages to look for insights. [See the payload section below for more details.](#payload) -```from``` | No | Object | Information about the User information produced the content of this message. -```duration``` | No | Object | Duration object containing `startTime` and `endTime` for the transcript. +Field | Description +---------- | ------- +```payload``` | Object, mandatory

Input Messages to look for insights. [See the payload section below for more details.](#payload)

Example: `"payload": {"content": "Hi Mike, Natalia here..."}` +```from``` | Object, optional
 
Information about the user who produced the content of this message.
 
Example: `"from": {"userId": "natalia@example.com", "name": "Natalia"}` +```duration``` | Object, optional

Duration object containing `startTime` and `endTime` for the transcript.

Example: `"duration": "startTime":"2020-07-21T16:02:19.01Z", "endTime":"2020-07-21T16:04:19.99Z"` ```js { @@ -347,9 +348,9 @@ Field | Required | Type | Description #### payload -Field | Required | Type | Default | Description ----------- | ------- | ------- | ------- | ------- -```content``` | Mandatory | String | | The text content that you want the API to parse. +Field | Description +| ------- | ------- +```content``` | String, mandatory

The text content that you want the API to parse.

Example: `"content": "Hi Mike, Natalia here...` ```js { @@ -361,10 +362,10 @@ Field | Required | Type | Default | Description #### from(user) -Field | Required | Type | Description ----------- | ------- | ------- | ------- -```name``` | Optional | String | Name of the user. -```userId``` | Optional | String | A unique identifier of the user. E-mail ID is usually a preferred identifier for the user. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the user.

Example: `"name": "Mike"` +```userId``` | String, optional

A unique identifier of the user. E-mail ID is usually a preferred identifier for the user.

Example: `"userId": "mike@abccorp.com"` ```js { @@ -377,10 +378,10 @@ Field | Required | Type | Description #### duration -Field | Required | Type | Description ----------- | ------- | ------- | ------- -```startTime``` | Optional | DateTime | The start time for the particular text content. -```endTime``` | Optional | DateTime | The end time for the particular text content. +Field | Description +| ------- | ------- +```startTime``` | DateTime, optional

The start time for the particular text content.

Example: `"startTime":"2020-07-21T16:04:19.99Z"` +```endTime``` | DateTime, optional

The end time for the particular text content.

Example: `"endTime":"2020-07-21T16:04:20.99Z"` ```js { @@ -403,9 +404,10 @@ WebhookUrl will be used to send the status of job created. Every time the status ``` Field | Description ----------- | ------- | -```jobId``` | ID to be used with Job API. -```status``` | Current status of the job. Valid statuses: [ `scheduled`, `in_progress`, `completed` ]. +| ------- | ------- +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` + ### Response @@ -418,8 +420,8 @@ Field | Description Field | Description ---------- | ------- | -```conversationId``` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` ### API Limit Error diff --git a/docs/async-api/overview/text/put-text.md b/docs/async-api/overview/text/put-text.md index b4bf467a..f4bfaadb 100644 --- a/docs/async-api/overview/text/put-text.md +++ b/docs/async-api/overview/text/put-text.md @@ -332,18 +332,18 @@ Parameter | Value ### Request Body Parameters -Field | Required | Type | Description ----------- | ------- | ------- | ------- | -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```messages``` | Mandatory | List | Input Messages to look for insights. [See the messages section below for more details.](#messages) -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | It shows Actionable Phrases in each sentence of a conversation. These sentences can be found using the Conversation's Messages API. The default value is `false`. -```entities``` | Optional | List | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. -```trackers``` BETA | Optional | String | A `tracker` entity containing name and vocabulary (a list of key words and/or phrases to be tracked). Read more in the[Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be post API. See [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) below. +Parameter | Description +---------- | ------- | +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of the conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```confidenceThreshold``` | Double, optional

Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` +```messages``` | List, mandatory

Input Messages to look for insights. [See the messages section below for more details.](#messages)

Example: `"messages": [{"payload": {"content": "Hi Mike, Natalia here..."}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```webhookUrl``` | String, optional

Webhook URL to which job status updates are sent once the API call is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.

Example: `{"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"}` #### messages @@ -379,17 +379,18 @@ Field | Required | Type | Description } ``` -Field | Required | Type | Description ----------- | ------- | ------- -```payload``` | Mandatory | Object | Input Messages to look for insights. [See the payload section below for more details.](#payload) -```from``` | Optional | Object | Information about the User information produced the content of this message. -```duration``` | Optional | Object | Duration object containing `startTime` and `endTime` for the transcript. +Field | Description +---------- | ------- +```payload``` | Object, mandatory
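Putting the request body parameters above together, a complete request body might look like the sketch below. It is illustrative only: the values simply reuse the per-parameter examples from this page, and the webhook URL is a placeholder for an endpoint you control.

```js
{
  "name": "Sales call",
  "confidenceThreshold": 0.6,
  "detectPhrases": true,
  "messages": [
    {
      "payload": { "content": "Hi Mike, Natalia here..." },
      "from": { "userId": "natalia@example.com", "name": "Natalia" },
      "duration": {
        "startTime": "2020-07-21T16:02:19.01Z",
        "endTime": "2020-07-21T16:04:19.99Z"
      }
    }
  ],
  "trackers": [
    {
      "name": "Promotion Mention",
      "vocabulary": ["We have a special promotion going on if you book this before"]
    }
  ],
  "webhookUrl": "https://example.com/job-updates"
}
```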

Input Messages to look for insights. [See the payload section below for more details.](#payload)

Example: `"payload": {"content": "Hi Mike, Natalia here..."}` +```from``` | Object, optional

Information about the user who produced the content of this message.

Example: `"from": {"userId": "natalia@example.com", "name": "Natalia"}` +```duration``` | Object, optional

Duration object containing `startTime` and `endTime` for the transcript.

Example: `"duration": {"startTime": "2020-07-21T16:02:19.01Z", "endTime": "2020-07-21T16:04:19.99Z"}` + #### payload -Field | Required | Type | Default | Description ----------- | ------- | ------- | ------- | ------- -```content``` | Mandatory | String | | The text content that you want the API to parse. +Field | Description +| ------- | ------- +```content``` | String, mandatory

The text content that you want the API to parse.

Example: `"content": "Hi Mike, Natalia here..."` + ##### Code Example @@ -402,10 +403,10 @@ Field | Required | Type | Default | Description ``` #### from(user) -Field | Required | Type | Description ----------- | ------- | ------- | ------- -```name``` | Optional | String | Name of the user. -```userId``` | Optional | String | A unique identifier of the user. E-mail ID is usually a preferred identifier for the user. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the user.

Example: `"name": "Mike"` +```userId``` | String, optional

A unique identifier of the user. E-mail ID is usually a preferred identifier for the user.

Example: `"userId": "mike@abccorp.com"` ##### Code Example @@ -420,10 +421,11 @@ Field | Required | Type | Description #### duration -Field | Required | Type | Description ----------- | ------- | ------- | ------- -```StartTime``` | Optional | DateTime | The start time for the particular text content. -```endTime``` | Optional | DateTime | The end time for the particular text content. +Field | Description +| ------- | ------- +```startTime``` | DateTime, optional

The start time for the particular text content.

Example: `"startTime":"2020-07-21T16:04:19.99Z"` +```endTime``` | DateTime, optional

The end time for the particular text content.

Example: `"endTime":"2020-07-21T16:04:20.99Z"` + ##### Code Example @@ -441,10 +443,12 @@ Field | Required | Type | Description `webhookUrl` will be used to send the status of job created. Every time the status of the job changes it will be notified on the `webhookUrl`. #### webhook Payload + Field | Description ----------- | ------- | -`jobId` | ID to be used with Job API. -`status` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed` ]) +| ------- | ------- +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` + ##### Code Example @@ -467,8 +471,8 @@ Field | Description Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` ### API Limit Error diff --git a/docs/async-api/overview/video/post-video-url.md b/docs/async-api/overview/video/post-video-url.md index bf4eda6e..0e580538 100644 --- a/docs/async-api/overview/video/post-video-url.md +++ b/docs/async-api/overview/video/post-video-url.md @@ -273,25 +273,25 @@ Header Name | Required | Description ``` ### Request Body Parameters -Field | Required | Type | Description ------ | ------- | -------- | ------- | -```url``` | Mandatory | String | A valid URL string. The URL must be a publicly accessible URL. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | It shows [Actionable Phrases](/docs/concepts/action-items) in each sentence of conversation. These sentences can be found using the [Conversation's Messages API](/docs/conversation-api/messages). Accepts `true` or `false`. -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be after the API call is made. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. -```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. -``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```enableSeparateRecognitionPerChannel```| Optional | Boolean | Enables Speaker Separated Channel video processing. Accepts `true` or `false` values. -```channelMetadata```| Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object only works when `enableSeparateRecognitionPerChannel` query param is set to `true`. Read more in the [Channel Metadata](#channel-metadata) section below. -```trackers``` BETA| Optional | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```url``` | String, mandatory

A valid URL string. The URL must be a publicly accessible URL.

Example: `"url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav"` +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional

Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of the conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```webhookUrl``` | String, optional

Webhook URL to which job status updates are sent once the API call is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.

Example: `{"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"}` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio that is generated from a phone call (which is typically recorded at an 8kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (which is typically recorded at a 16kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.

Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS` ### Response @@ -303,8 +303,8 @@ Field | Required | Type | Description ``` Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).
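For reference, several of the request body parameters above can be combined into a single request. The sketch below is illustrative only and reuses the sample media URL and per-parameter example values shown on this page:

```js
{
  "url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav",
  "name": "Sales call",
  "confidenceThreshold": 0.6,
  "detectPhrases": true,
  "languageCode": "en-US",
  "trackers": [
    {
      "name": "Promotion Mention",
      "vocabulary": ["We have a special promotion going on if you book this before"]
    }
  ]
}
```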

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` #### Channel Metadata @@ -334,17 +334,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Mandatory | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Mandatory | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | Optional | String | Name of the speaker. -```email``` | Optional | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Speaker Separation --- @@ -387,8 +387,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]). +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error diff --git a/docs/async-api/overview/video/post-video.md b/docs/async-api/overview/video/post-video.md index e65d9c8c..de3fd2c8 100644 --- a/docs/async-api/overview/video/post-video.md +++ b/docs/async-api/overview/video/post-video.md @@ -183,24 +183,25 @@ Header Name | Required | Description ### Query Parameters -Parameter | Required | Type | Description ---------- | --------- | ------- | ------- -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be post API. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | Accepted values are `true` & `false`. It shows [Actionable Phrases](/docs/conversation-api/action-items) in each sentence of conversation. These sentences can be found in the [Conversation's Messages API](/docs/conversation-api/messages). -```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). Sample request for Custom Entity is given in the [Custom Entity](#custom-entity) section below. -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. - ```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. -``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```enableSeparateRecognitionPerChannel```| Optional | Boolean | Enables Speaker Separated Channel video processing. Accepts `true` or `false` values. -```channelMetadata```| Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object only works when `enableSeparateRecognitionPerChannel` query param is set to `true`. Read more in the [Channel Metadata](#channel-metadata) section below. -```trackers``` BETA | Optional | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional

Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of the conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```webhookUrl``` | String, optional

Webhook URL to which job status updates are sent once the API call is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.

Example: `{"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"}` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio that is generated from a phone call (which is typically recorded at an 8kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (which is typically recorded at a 16kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.

Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS` ### Response @@ -213,8 +214,8 @@ Parameter | Required | Type | Description Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).
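Since these are query parameters, they are URL-encoded onto the request URL rather than sent in a JSON body. The sketch below shows one way to build such a URL in JavaScript; the base path is an assumption here, so confirm it against the request example earlier on this page, and the parameter values are illustrative:

```js
// Illustrative only — confirm the endpoint against the request example above.
const params = new URLSearchParams({
  name: "Sales call",
  confidenceThreshold: "0.6",
  detectPhrases: "true",
  languageCode: "en-US"
});

const url = `https://api.symbl.ai/v1/process/video?${params.toString()}`;
// -> https://api.symbl.ai/v1/process/video?name=Sales+call&confidenceThreshold=0.6&detectPhrases=true&languageCode=en-US
```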

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` #### Custom Entity @@ -262,17 +263,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Yes | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Yes | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | No | String | Name of the speaker. -```email``` | No | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Speaker Separation --- @@ -315,8 +316,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]) +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error --- diff --git a/docs/async-api/overview/video/put-video-url.md b/docs/async-api/overview/video/put-video-url.md index 28ac10bc..7ec8427c 100644 --- a/docs/async-api/overview/video/put-video-url.md +++ b/docs/async-api/overview/video/put-video-url.md @@ -292,25 +292,25 @@ Parameter | value ``` ### Request Body Parameters -Field | Required | Type | Description ------ | ------- | -------- | ------- | -```url``` | Mandatory | String | A valid URL string. The URL must be a publicly accessible URL. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | It shows [Actionable Phrases](/docs/concepts/action-items) in each sentence of conversation. These sentences can be found using the [Conversation's Messages API](/docs/conversation-api/messages). Accepts `true` or `false`. -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be after the API call is made. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. -```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. -``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```enableSeparateRecognitionPerChannel```| Optional | Boolean | Enables Speaker Separated Channel video processing. Accepts `true` or `false` values. -```channelMetadata```| Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object only works when `enableSeparateRecognitionPerChannel` query param is set to `true`. Read more in the [Channel Metadata](#channel-metadata) section below. -```trackers``` BETA| Optional | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```url``` | String, mandatory

A valid URL string. The URL must be a publicly accessible URL.

Example: `"url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav"` +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional

Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of the conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```webhookUrl``` | String, optional

Webhook URL to which job status updates are sent once the API call is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.

Example: `{"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"}` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio that is generated from a phone call (which is typically recorded at an 8kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (which is typically recorded at a 16kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.

Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS` ### Response @@ -322,8 +322,8 @@ Field | Required | Type | Description ``` Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` #### Channel Metadata @@ -353,17 +353,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Mandatory | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Mandatory | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | Optional | String | Name of the speaker. -```email``` | Optional | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Speaker Separation --- @@ -406,8 +406,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]). +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error diff --git a/docs/async-api/overview/video/put-video.md b/docs/async-api/overview/video/put-video.md index efbc219b..4f5f122d 100644 --- a/docs/async-api/overview/video/put-video.md +++ b/docs/async-api/overview/video/put-video.md @@ -198,24 +198,26 @@ Parameter | value ### Query Parameters -Parameter | Required | Type | Description ---------- | --------- | ------- | ------- -```name``` | Optional | String | Your meeting name. Default name set to `conversationId`. -```webhookUrl``` | Optional | String | Webhook URL on which job updates to be sent. This should be after the API call is made. For Webhook payload, refer to the [Using Webhook](#using-webhook) section below. -```customVocabulary``` | Optional | String[] | Contains a list of words and phrases that provide hints to the speech recognition task. -```confidenceThreshold``` | Optional | Double | Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`.). The default value is `0.5`. -```detectPhrases```| Optional | Boolean | Accepted values are `true` & `false`. It shows [Actionable Phrases](/docs/conversation-api/action-items) in each sentence of conversation. These sentences can be found in the [Conversation's Messages API](/docs/conversation-api/messages). -```entities``` | Optional | Object[] | Input custom entities which can be detected in your conversation using [Entities API](/docs/conversation-api/entities). Sample request for Custom Entity is given in the [Custom Entity](#custom-entity) section below. -```detectEntities``` | Optional | Boolean | Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation. - ```languageCode```| Optional | String | We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement. -``` mode``` | Optional | String | Accepts `phone` or `default`. `phone` mode is best for audio that is generated from phone call(which is typically recorded at 8khz sampling rate).
`default` mode works best for audio generated from video or online meetings(which is typically recorded at 16khz or more sampling rate).
When you don't pass this parameter `default` is selected automatically. -```enableSeparateRecognitionPerChannel```| Optional | Boolean | Enables Speaker Separated Channel video processing. Accepts `true` or `false` values. -```channelMetadata```| Optional | Object[] | This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object only works when `enableSeparateRecognitionPerChannel` query param is set to `true`. Read more in the [Channel Metadata](#channel-metadata) section below. -```trackers``` BETA | Optional | List | A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section. -```enableAllTrackers``` BETA | Optional | Boolean | Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API.This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag). -```enableSummary``` ALPHA | Optional | Boolean | Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL. -```enableSpeakerDiarization``` | Optional | Boolean | Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. -```diarizationSpeakerCount``` | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below. +Parameter | Description +---------- | ------- | +```name``` | String, optional

Your meeting name. Default name set to `conversationId`.

Example: `"name": "Sales call"`, `"name": "Customer call"`. +```customVocabulary``` | String, optional

Contains a list of words and phrases that provide hints to the speech recognition task.

Example: `"customVocabulary": ["Platform", "Discussion"]` +```confidenceThreshold``` | Double, optional

Minimum confidence score that you can set for an API to consider it as a valid insight (action items, follow-ups, topics, and questions). It should be in the range >=0.5 to <=1.0 (i.e., greater than or equal to `0.5` and less than or equal to `1.0`). The default value is `0.5`.

Example: `"confidenceThreshold": 0.6` +```detectPhrases```| Boolean, optional

It shows Actionable Phrases in each sentence of the conversation. These sentences can be found using the Conversation's Messages API. Default value is `false`.

Example: `"detectPhrases": true` +```webhookUrl``` | String, optional

Webhook URL to which job status updates are sent once the API call is made. See the [Webhook section](/docs/async-api/overview/text/post-text#webhookurl) for more.

Example: `{"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"}` +```entities``` | Object, optional

Input custom entities which can be detected in conversation using [Entities API](/docs/conversation-api/entities).

Example: `"entities": [{"customType": "Company Executives", "value": "Marketing director", "text": "Marketing director"}]` +```detectEntities``` | Boolean, optional

Default value is `false`. If not set the [Entities API](/docs/conversation-api/entities) will not return any entities from the conversation.

Example: `"detectEntities": true` + ```languageCode```| String, optional

We accept different languages. Please [check language Code](/docs/async-api/overview/async-api-supported-languages) as per your requirement.

Example: `"languageCode": "en-US"` + ``` mode``` | String, optional

Accepts `phone` or `default`. `phone` mode is best for audio that is generated from a phone call (which is typically recorded at an 8kHz sampling rate).
`default` mode works best for audio generated from video or online meetings (which is typically recorded at a 16kHz or higher sampling rate).
When you don't pass this parameter, `default` is selected automatically.

Example: `"mode": "phone"` +```enableSeparateRecognitionPerChannel``` | Boolean, optional

Enables Speaker Separated Channel audio processing. Accepts `true` or `false`.

Example: `"enableSeparateRecognitionPerChannel": true` +```channelMetadata``` | Object, optional

This object parameter contains two variables `speaker` and `channel` to specify which speaker corresponds to which channel. This object **only** works when `enableSeparateRecognitionPerChannel` is set to `true`. Learn more in the [Channel Metadata](#channel-metadata) section below.

Example: `"channelMetadata": [{"channel": 1, "speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}}]` +```trackers``` BETA | List, optional

A `tracker` entity containing `name` and `vocabulary` (a list of key words and/or phrases to be tracked). Read more in the [Tracker API](/docs/management-api/trackers/overview) section.

Example: `"trackers": [{"name": "Promotion Mention", "vocabulary": ["We have a special promotion going on if you book this before"]}]` +```enableAllTrackers``` BETA | Boolean, optional

Default value is `false`. Setting this parameter to `true` will enable detection of all the Trackers maintained for your account by the Management API. This will allow Symbl to detect all the available Trackers in a specific Conversation. Learn about this parameter [here](/docs/management-api/trackers/overview#step-2-submit-files-using-async-api-with-enablealltrackers-flag).

Example: `"enableAllTrackers": true` +```enableSummary``` ALPHA | Boolean, optional

Setting this parameter to `true` allows you to generate Summaries using [Summary API](/conversation-api/summary). Ensure that you use `https://api.symbl.ai/` as the base URL.

Example: `"enableSummary": true` +```enableSpeakerDiarization``` | Boolean, optional

Whether the diarization should be enabled for this conversation. Pass this as `true` to enable Speaker Separation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `"enableSpeakerDiarization": true` +```diarizationSpeakerCount``` | Integer, optional

The number of unique speakers in this conversation. To learn more, refer to the [Speaker Separation](#speaker-separation) section below.

Example: `diarizationSpeakerCount=$NUMBER_OF_UNIQUE_SPEAKERS` + ### Response @@ -228,8 +230,8 @@ Parameter | Required | Type | Description Field | Description ---------- | ------- | -`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction). -`jobId` | ID to be used with Job API. +`conversationId` | ID to be used with [Conversation API](/docs/conversation-api/introduction).

Example: `"conversationId": "5815170693595136"` +`jobId` | ID to be used with Job API.

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` #### Custom Entity @@ -278,17 +280,17 @@ Given below is an example of a `channelMetadata` object: `channelMetadata` object has following members: -Field | Required | Type | Description -| ------- | ------- | ------- | -------- -```channel``` | Yes | Integer | This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data. -```speaker``` | Yes | String | This is the wrapper object which defines the speaker for this channel. +Field | Description +| ------- | ------- +```channel``` | Integer, mandatory

This denotes the channel number in the audio file. Each channel will contain independent speaker's voice data.

Example: `"channel": 1` +```speaker``` | Object, mandatory

This is the wrapper object which defines the speaker for this channel.

Example: `"speaker": {"name": "Robert Bartheon", "email": "robertbartheon@example.com"}` `speaker` has the following members: -Field | Required | Type | Description -| ------- | ------- | ------- | ------ -```name``` | No | String | Name of the speaker. -```email``` | No | String | Email address of the speaker. +Field | Description +| ------- | ------- +```name``` | String, optional

Name of the speaker.

Example: `"name": "Robert Bartheon"` +```email``` | String, optional

Email address of the speaker.

Example: `"email": "robertbartheon@example.com"` ### Speaker Separation --- @@ -331,8 +333,8 @@ The `webhookUrl` will be used to send the status of job created for uploaded aud Field | Description | ------- | ------- -```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api). -```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ]) +```jobId``` | ID to be used with [Job API](/docs/async-api/overview/jobs-api).

Example: `"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"` +```status``` | Current status of the job. (Valid statuses: [ `scheduled`, `in_progress`, `completed`, `failed` ])

Example: `"status": "in_progress"` ### API Limit Error --- diff --git a/docs/telephony/reference/reference.md b/docs/telephony/reference/reference.md index 558e21ed..7e8f311d 100644 --- a/docs/telephony/reference/reference.md +++ b/docs/telephony/reference/reference.md @@ -181,13 +181,13 @@ Here is a breakdown of the request options for the Telephony API endpoint: #### Main Request Body -Field | Type | Description ----------- | ------- | ------- -```operation``` | string | enum([start, stop]) - Start or Stop connection -```endpoint``` | object | Object containing Type of the session - either pstn or sip, phoneNumber which is the meeting number symbl should call with country code prepended and dtmf which is the conference passcode. [See endpoint section below](#endpoint-config). -```actions``` | array | actions that should be performed while this connection is active. Currently only one action is supported - sendSummaryEmail. [See actions section below](#actions). -```data``` | object | Object containing a session object which has a field name corresponding to the name of the meeting. [See data section below](#data). -```timezone``` | string | The timezone name which comes from the [IANA TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). [See timezone section below](#timezone). +Field | Description +---------- | ------- +```operation``` | string

enum([start, stop]) - Start or Stop connection

Example: `"operation": "start"` +```endpoint``` | object

Object containing the type of the session (either `pstn` or `sip`), `phoneNumber` (the meeting number Symbl should call, with the country code prepended), and `dtmf` (the conference passcode). [See endpoint section below](#endpoint-config).

Example: `"endpoint": {"type": "pstn", "phoneNumber": phoneNumber, "dtmf": dtmfSequence}` +```actions``` | array

Actions that should be performed while this connection is active. Currently, only one action is supported: `sendSummaryEmail`. [See actions section below](#actions).

Example: `"actions": [{"invokeOn": "stop", "name": "sendSummaryEmail", "parameters": {"emails": ["user@example.com"]}}]` +```data``` | object

Object containing a session object which has a field name corresponding to the name of the meeting. [See data section below](#data).

Example: `"data": {"session": {"name": "My Meeting"}}` +```timezone``` | string

The timezone name which comes from the [IANA TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). [See timezone section below](#timezone).

Example: `"timezone": "Asia/Tokyo"` ##### Code Example @@ -204,11 +204,11 @@ Field | Required | Supported Value | Description #### Endpoint Config -Field | Required | Supported Value | Description ----------- | ------- | ------- | ------- -`type` | Yes | enum(["sip", "pstn"]) | Defines the type of connection. Only [SIP](/docs/concepts/pstn-and-sip#sip-session-initiation-protocol) and [PSTN](/docs/concepts/pstn-and-sip#pstn-public-switched-telephone-networks) supported. -`phoneNumber` | Yes | String | Phone number to be used to dial in to in E.164 format i.e. special characters like () or - and leading + or international access codes like 001 or 00 must be omitted. For e.g. - US number should look like 14082924837, whereas UK number should look like 447082924837. -`dtmf` | No | String | DTMF sequence to be sent after call is received (ex: `939293#`) +Field | Description +---------- | ------- +`type` | enum(["sip", "pstn"]), mandatory

##### Code Example

@@ -204,11 +204,11 @@ Field | Type | Description

#### Endpoint Config

-Field | Required | Supported Value | Description
---------- | ------- | ------- | -------
-`type` | Yes | enum(["sip", "pstn"]) | Defines the type of connection. Only [SIP](/docs/concepts/pstn-and-sip#sip-session-initiation-protocol) and [PSTN](/docs/concepts/pstn-and-sip#pstn-public-switched-telephone-networks) supported.
-`phoneNumber` | Yes | String | Phone number to be used to dial in to in E.164 format i.e. special characters like () or - and leading + or international access codes like 001 or 00 must be omitted. For e.g. - US number should look like 14082924837, whereas UK number should look like 447082924837.
-`dtmf` | No | String | DTMF sequence to be sent after call is received (ex: `939293#`)
+Field | Description
+---------- | -------
+`type` | enum(["sip", "pstn"]), mandatory

Defines the type of connection. Only [SIP](/docs/concepts/pstn-and-sip#sip-session-initiation-protocol) and [PSTN](/docs/concepts/pstn-and-sip#pstn-public-switched-telephone-networks) are supported.

Example: `"type" : "pstn"` +`phoneNumber` | String, mandatory

Phone number to be used to dial in to, in E.164 format, i.e., special characters like `()` or `-`, a leading `+`, and international access codes like 001 or 00 must be omitted. For example, a US number should look like `14082924837`, whereas a UK number should look like `447082924837`.

Example: `"phoneNumber": phoneNumber` +`dtmf` | String, optional

DTMF sequence to be sent after the call is received (for example, `939293#`).

Example: `"dtmf": dtmfSequence` ##### Code Example @@ -224,12 +224,12 @@ Field | Required | Supported Value | Description #### Actions -Field | Required | Supported Value | Description ----------- | ------- | ------- | ------- -`invokeOn` | Yes | enum(["start", "stop"]) | Event type on which the action should be performed. -`name` | Yes | String | Name of the action that needs to be invoked. Only `sendSummaryEmail` is currently supported. -`parameters` | Yes | Object | Object with required input parameter data for invocation of the specified action. -`parameters.emails` | Yes | String[] | An array of emails. +Field | Description +---------- | ------- +`invokeOn` | enum(["start", "stop"]) mandatory

##### Code Example

@@ -224,12 +224,12 @@ Field | Required | Supported Value | Description

#### Actions

-Field | Required | Supported Value | Description
---------- | ------- | ------- | -------
-`invokeOn` | Yes | enum(["start", "stop"]) | Event type on which the action should be performed.
-`name` | Yes | String | Name of the action that needs to be invoked. Only `sendSummaryEmail` is currently supported.
-`parameters` | Yes | Object | Object with required input parameter data for invocation of the specified action.
-`parameters.emails` | Yes | String[] | An array of emails.
+Field | Description
+---------- | -------
+`invokeOn` | enum(["start", "stop"]), mandatory

Event type on which the action should be performed.

Example: `"invokeOn": "stop"` +`name` | String, mandatory

Name of the action that needs to be invoked. Only `sendSummaryEmail` is currently supported.

Example: `"name": "sendSummaryEmail"` +`parameters` | Object, mandatory

Object containing the input parameters required to invoke the specified action.

Example: `"parameters": "emails": "user@example.com"` +`parameters.emails` | String[], mandatory

An array of emails.

Example: `"emails": "user@example.com"` ##### Code Example @@ -250,10 +250,10 @@ Field | Required | Supported Value | Description #### Data -Field | Required | Supported Value | Description ----------- | ------- | ------- | ------- -`session` | No | String | Contains information about the meeting. -`session.name` | No | String | The name of the meeting. +Field | Description +---------- | ------- +`session` | String, optional

##### Code Example

@@ -250,10 +250,10 @@ Field | Required | Supported Value | Description

#### Data

-Field | Required | Supported Value | Description
---------- | ------- | ------- | -------
-`session` | No | String | Contains information about the meeting.
-`session.name` | No | String | The name of the meeting.
+Field | Description
+---------- | -------
+`session` | Object, optional

Contains information about the meeting.

Example: `session": "name" : "My Meeting"` +`session.name` | String, optional

The name of the meeting.

Example: `"name" : "My Meeting"` ##### Code Example @@ -270,10 +270,10 @@ Field | Required | Supported Value | Description #### Timezone -Field | Required | Supported Value | Description ----------- | ------- | ------- | ------- -`timezone` | No | String | The timezone name which comes from the [IANA TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). - +Field | Description +---------- | ------- +`timezone` | String, optional

##### Code Example

@@ -270,10 +270,10 @@ Field | Required | Supported Value | Description

#### Timezone

-Field | Required | Supported Value | Description
---------- | ------- | ------- | -------
-`timezone` | No | String | The timezone name which comes from the [IANA TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
-
+Field | Description
+---------- | -------
+`timezone` | String, optional

The timezone name which comes from the [IANA TZ database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).

Example: `"timezone": "Asia/Tokyo"` + ##### Code Example ```js @@ -320,7 +320,7 @@ Field | Description ```eventUrl``` | REST API to push speaker events as the conversation is in progress, to add additional speaker context in the conversation. Example - In an on-going meeting, you can push speaker events ```resultWebSocketUrl``` | Same as eventUrl but over WebSocket. The latency of events is lower with a dedicated WebSocket connection.ct ```connectionId``` | Ephemeral connection identifier of the request, to uniquely identify the telephony connection. Once the connection is stopped using “stop” operation, or is closed due to some other reason, the connectionId is no longer valid -```conversationId``` | Represents the conversation - this is the ID that needs to be used in conversation api to access the conversation +```conversationId``` | Represents the conversation - this is the ID that needs to be used in conversation api to access the conversation #### Code Example