DC 356 implement the title tags for seo #435

Merged
merged 4 commits into from
Mar 22, 2022
67 changes: 32 additions & 35 deletions docs/async-api/introduction.md
@@ -1,11 +1,15 @@
---
id: introduction
title: Async API Documentation
title: Async API
description: Symbl.ai Async APIs provide a REST interface for submitting any recorded or saved conversations for transcription. Check out our Async APIs documentation to get started.
sidebar_label: Introduction
slug: /async-api/introduction/
---

<head>
<title>Async API Documentation</title>
</head>

---

The Async API provides a REST interface that lets you submit any recorded or saved conversation to Symbl. When you submit a conversation, you'll receive a Conversation ID (`conversationId`), which is unique to your conversation.
@@ -20,23 +24,19 @@ You must wait for the job to complete processing before you proceed with getting

#### `conversationId` helps you with:

1. Helps you append the transcription of an existing file using `PUT` (also known as `append file`) Async APIs.
1. Helps you append the transcription of an existing file using `PUT` (also known as `append file`) Async APIs.
2. Using the [Conversation API](/docs/conversation-api/introduction), you can retrieve Speech-to-Text data and conversational insights.



## Async API Types


### Text API

The Async Text API allows you to process any text payload.

It can be useful for any use case where you have access to textual content and want to extract insights and other conversational attributes supported by Symbl's [Conversation API](/docs/conversation-api/introduction).

* [Submit Text File](/docs/async-api/overview/text/post-text)
* [Append Text File To Existing Conversation](/docs/async-api/overview/text/put-text)

- [Submit Text File](/docs/async-api/overview/text/post-text)
- [Append Text File To Existing Conversation](/docs/async-api/overview/text/put-text)
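To make the request shape concrete, here is a minimal cURL sketch for submitting raw text. It is a sketch under assumptions, not the authoritative contract: `$AUTH_TOKEN` stands in for a valid access token, and the body fields shown (`name`, `messages`, `payload.content`, `from`) should be confirmed against the Submit Text File reference linked above.

```bash
# Hedged sketch: submit a text conversation for async processing.
# $AUTH_TOKEN is assumed to hold a valid Symbl access token; the body fields
# below are illustrative -- confirm them in the Submit Text File reference.
curl -X POST "https://api.symbl.ai/v1/process/text" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Support chat transcript",
        "messages": [
          {
            "payload": { "content": "Hi, I would like to upgrade my plan." },
            "from": { "name": "Customer", "userId": "customer@example.com" }
          }
        ]
      }'
# A successful response typically includes a conversationId (for the
# Conversation API) and a jobId you can poll until processing completes.
```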

### Audio API

@@ -46,13 +46,13 @@ It can be useful for any use case where you have access to recorded audio and wa

#### Audio File Endpoints

* [Submit Audio File](/docs/async-api/overview/audio/post-audio)
* [Append Audio File To Existing Conversation](/docs/async-api/overview/audio/post-audio)
- [Submit Audio File](/docs/async-api/overview/audio/post-audio)
- [Append Audio File To Existing Conversation](/docs/async-api/overview/audio/post-audio)

#### Audio URL Endpoints

* [Submit Audio URL](/docs/async-api/overview/audio/post-audio-url)
* [Append Audio URL To Existing Conversation](/docs/async-api/overview/audio/put-audio-url)
- [Submit Audio URL](/docs/async-api/overview/audio/post-audio-url)
- [Append Audio URL To Existing Conversation](/docs/async-api/overview/audio/put-audio-url)

### Video API

@@ -62,40 +62,37 @@ It can be useful in any use case where you have access to a video file of any ty

#### Video File Endpoints

* [Submit Video File](/docs/async-api/overview/video/post-video)
* [Append Video File To Existing Conversation](/docs/async-api/overview/video/post-video)
- [Submit Video File](/docs/async-api/overview/video/post-video)
- [Append Video File To Existing Conversation](/docs/async-api/overview/video/post-video)

#### Video URL Endpoints

* [Submit Video URL](/docs/async-api/overview/video/post-video-url)
* [Append Video URL To Existing Conversation](/docs/async-api/overview/video/put-video-url)

- [Submit Video URL](/docs/async-api/overview/video/post-video-url)
- [Append Video URL To Existing Conversation](/docs/async-api/overview/video/put-video-url)

## Endpoints

### Text API

| Method | Endpoint | |
|--------|----------|-|
|`POST` | `https://api.symbl.ai/v1/process/text` | [Reference](/docs/async-api/overview/text/post-text)
|`PUT` | `https://api.symbl.ai/v1/process/text/{conversationId}` | [Reference](/docs/async-api/overview/text/put-text)

| Method | Endpoint | |
| ------ | ------------------------------------------------------- | ---------------------------------------------------- |
| `POST` | `https://api.symbl.ai/v1/process/text` | [Reference](/docs/async-api/overview/text/post-text) |
| `PUT` | `https://api.symbl.ai/v1/process/text/{conversationId}` | [Reference](/docs/async-api/overview/text/put-text) |

### Audio API

| Method | Endpoint | |
|--------|----------|-|
|`POST` | `https://api.symbl.ai/v1/process/audio` | [Reference](/docs/async-api/overview/audio/post-audio)
|`POST` | `https://api.symbl.ai/v1/process/audio/url` | [Reference](/docs/async-api/overview/audio/post-audio-url)
|`PUT` | `https://api.symbl.ai/v1/process/audio/{conversationId}` | [Reference](/docs/async-api/overview/audio/put-audio)
|`PUT` | `https://api.symbl.ai/v1/process/audio/url/{conversationId}` | [Reference](/docs/async-api/overview/audio/put-audio-url)

| Method | Endpoint | |
| ------ | ------------------------------------------------------------ | ---------------------------------------------------------- |
| `POST` | `https://api.symbl.ai/v1/process/audio` | [Reference](/docs/async-api/overview/audio/post-audio) |
| `POST` | `https://api.symbl.ai/v1/process/audio/url` | [Reference](/docs/async-api/overview/audio/post-audio-url) |
| `PUT` | `https://api.symbl.ai/v1/process/audio/{conversationId}` | [Reference](/docs/async-api/overview/audio/put-audio) |
| `PUT` | `https://api.symbl.ai/v1/process/audio/url/{conversationId}` | [Reference](/docs/async-api/overview/audio/put-audio-url) |

### Video API

| Method | Endpoint | |
|--------|----------|-|
|`POST` | `https://api.symbl.ai/v1/process/video` | [Reference](/docs/async-api/overview/video/post-video)
|`POST` | `https://api.symbl.ai/v1/process/video/url` | [Reference](/docs/async-api/overview/video/post-video-url)
|`PUT` | `https://api.symbl.ai/v1/process/video/{conversationId}` | [Reference](/docs/async-api/overview/video/put-video)
|`PUT` | `https://api.symbl.ai/v1/process/video/url/{conversationId}` | [Reference](/docs/async-api/overview/video/put-video-url)
| Method | Endpoint | |
| ------ | ------------------------------------------------------------ | ---------------------------------------------------------- |
| `POST` | `https://api.symbl.ai/v1/process/video` | [Reference](/docs/async-api/overview/video/post-video) |
| `POST` | `https://api.symbl.ai/v1/process/video/url` | [Reference](/docs/async-api/overview/video/post-video-url) |
| `PUT` | `https://api.symbl.ai/v1/process/video/{conversationId}` | [Reference](/docs/async-api/overview/video/put-video) |
| `PUT` | `https://api.symbl.ai/v1/process/video/url/{conversationId}` | [Reference](/docs/async-api/overview/video/put-video-url) |
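To show how the endpoints in these tables are invoked, here is a minimal cURL sketch, assuming `$AUTH_TOKEN` holds a valid access token and `$CONVERSATION_ID` comes from an earlier submission; the request body fields are illustrative, so confirm them against the Audio URL reference pages linked above.

```bash
# Hedged sketch: submit a publicly reachable audio URL, then append more audio
# to the same conversation. $AUTH_TOKEN is assumed to hold a valid access
# token; the body fields are illustrative -- see the Audio URL references.
curl -X POST "https://api.symbl.ai/v1/process/audio/url" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com/recordings/weekly-sync-part1.mp3" }'

# Appending uses PUT with the conversationId returned by the call above:
curl -X PUT "https://api.symbl.ai/v1/process/audio/url/$CONVERSATION_ID" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com/recordings/weekly-sync-part2.mp3" }'
```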
34 changes: 18 additions & 16 deletions docs/conversation-api/concepts/sentiment.md
@@ -1,11 +1,15 @@
---
id: sentiment
title: Sentiment API- Analysing Texts in Real-time (Beta)
title: Sentiment Analysis (Beta)
sidebar_label: Introduction
description: Sentiment API enables developers to detect positive or negative sentiment from conversations in real-time. Learn more.
slug: /concepts/sentiment-analysis/
---

<head>
<title>Sentiment API- Analysing Texts in Real-time (Beta)</title>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

@@ -25,7 +29,6 @@ Symbl's Sentiment API works over Speech-to-Text sentences and Topics (or aspect)

</div>


## Sentiment API

To see the Sentiment API in action, you need to process a conversation using Symbl. After you process a conversation, you'll receive a **Conversation ID**, which can be passed to the Conversation APIs mentioned below. All you need to do is pass the query parameter `sentiment=true` (a cURL sketch is included at the end of this page).
@@ -40,17 +43,16 @@ Each continuous sentence spoken by a speaker in conversation is referred to as a
At the topic level, the sentiment is calculated over the scope of the topic's messages, i.e., it factors in the sentiment of the messages in which the topic was discussed.
:::


### 👉[Topics API](/docs/conversation-api/get-topics)

### API Response

<Tabs
defaultValue="javascript"
values={[
{ label: 'Speech to Text', value: 'javascript', },
{ label: 'Topics', value: 'topics', }
]
defaultValue="javascript"
values={[
{ label: 'Speech to Text', value: 'javascript', },
{ label: 'Topics', value: 'topics', }
]
}>

<TabItem value="javascript">
@@ -80,6 +82,7 @@ For topic level, the sentiment is calculated over the topic messages scope i.e.
} ]
}
```

</TabItem>
<TabItem value="topics">

@@ -112,10 +115,10 @@ For topic level, the sentiment is calculated over the topic messages scope i.e.

#### Object

| Field | Description |
|------------------|--------------------------------------------------------------------|
| ```polarity``` | Shows the intensity of the sentiment. It ranges from -1.0 to 1.0, where -1.0 is the most negative sentiment and 1.0 is the most positive sentiment. |
| ```suggested``` | display suggested sentiment type (negative, neutral and positive). |
| Field | Description |
| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `polarity` | Shows the intensity of the sentiment. It ranges from -1.0 to 1.0, where -1.0 is the most negative sentiment and 1.0 is the most positive sentiment. |
| `suggested` | Displays the suggested sentiment type (negative, neutral, or positive). |

#### suggested object

@@ -124,14 +127,13 @@ We have chosen the below polarity ranges wrt sentiment type which covers a wide
The polarity-to-sentiment mapping may vary for your use case. We recommend that you define a threshold that works for you, and then adjust that threshold after testing and verifying the results.
:::


| polarity | Suggested Sentiment |
|------------------|---------------------|
| ---------------- | ------------------- |
| -1.0 <= x < -0.3 | negative            |
| -0.3 <= x <= 0.3 | neutral             |
| -0.3 <= x <= 0.3 | neutral             |
| 0.3 < x <= 1.0   | positive            |

### Tutorials

- View tutorial on Sentiment Analysis on Messages [here](/docs/async-api/code-snippets/sentiment-analysis-on-messages)
- View tutorial on Sentiment Analysis on Topics [here](/docs/async-api/code-snippets/sentiment-analysis-on-topics)
- View tutorial on Sentiment Analysis on Topics [here](/docs/async-api/code-snippets/sentiment-analysis-on-topics)
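As a quick, hedged starting point (assuming `$AUTH_TOKEN` holds a valid access token and `$CONVERSATION_ID` refers to an already processed conversation), a minimal cURL sketch of passing `sentiment=true` to the Conversation APIs might look like this:

```bash
# Hedged sketch: request message-level and topic-level sentiment.
# $AUTH_TOKEN and $CONVERSATION_ID are placeholders you supply.
curl "https://api.symbl.ai/v1/conversations/$CONVERSATION_ID/messages?sentiment=true" \
  -H "Authorization: Bearer $AUTH_TOKEN"

curl "https://api.symbl.ai/v1/conversations/$CONVERSATION_ID/topics?sentiment=true" \
  -H "Authorization: Bearer $AUTH_TOKEN"

# Each returned item should carry a sentiment object with a polarity value and
# a suggested label, as shown in the API Response examples above.
```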
59 changes: 33 additions & 26 deletions docs/conversation-api/concepts/speech-to-text.md
@@ -1,11 +1,15 @@
---
id: speech-to-text
title: Transcribe Speech-to-Text in Real-Time
title: Speech-to-Text
description: Get real-time speech-to-text data and analytics from your conversations with Symbl.ai APIs. Learn more.
sidebar_label: Introduction
slug: /concepts/speech-to-text/
---

<head>
<title>Transcribe Speech-to-Text in Real-Time</title>
</head>

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

@@ -20,7 +24,7 @@ Symbl offers state-of-the-art Speech-to-Text capability (also called transcripti
- **Domain specific**: Symbl's Speech-to-Text models are tuned for mobile and video calls, delivering state-of-the-art accuracy.

- **Multi-language Support**: We support 20+ languages, including English, Russian, French, Italian, Hindi, Japanese, and Spanish. We also support models for different accents. For example, American and British English are spoken differently, and we have Speech Recognition Models that are fine-tuned for each accent. <br/>
[Languages Supported](/docs/streaming-api/api-reference#supported-languages)
[Languages Supported](/docs/streaming-api/api-reference#supported-languages)

- **Custom Vocabulary**: We support Custom Vocabulary, which helps Speech-to-Text recognize specific words or phrases that are more frequently used within a context. For example, suppose that your audio data often includes the word "sell". When Speech-to-Text encounters the word "sell," you want it to transcribe the word as "sell" more often than "cell." In this case, you might use speech adaptation to bias Speech-to-Text toward recognizing "sell."

@@ -38,9 +42,9 @@ Symbl offers state-of-the-art Speech-to-Text capability (also called transcripti
Each continuous sentence spoken by a speaker in a conversation is referred to as a Message. Hence, we named our Speech-to-Text API the Messages API. The Messages API returns a list of messages in a conversation.
:::

To see Messages API in action, you need to process a conversation using Symbl. After you process a meeting, you'll receive a **Conversation ID**. A Conversation ID is the key to receiving conversational insights from any conversation. As an example, here's a simple API call which grabs the speech-to-text transcription from the conversation.
To see Messages API in action, you need to process a conversation using Symbl. After you process a meeting, you'll receive a **Conversation ID**. A Conversation ID is the key to receiving conversational insights from any conversation. As an example, here's a simple API call which grabs the speech-to-text transcription from the conversation.

Using the conversation API, you can get a pre-formatted transcript in markdown language or in standard transcription or closed captioning format like SRT. See [Formatted Transcript](/docs/conversation-api/transcript) section for more.
Using the Conversation API, you can get a pre-formatted transcript in Markdown or in a standard transcription or closed-captioning format such as SRT. See the [Formatted Transcript](/docs/conversation-api/transcript) section for more.
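As a hedged sketch only (the endpoint path and the `contentType` value shown here are assumptions; the [Formatted Transcript](/docs/conversation-api/transcript) reference is authoritative), requesting an SRT-formatted transcript might look like this:

```bash
# Hedged sketch: request a pre-formatted transcript (SRT) for a processed
# conversation. The path and contentType value are assumptions -- verify them
# against the Formatted Transcript reference.
curl -X POST "https://api.symbl.ai/v1/conversations/$CONVERSATION_ID/transcript" \
  -H "Authorization: Bearer $AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "contentType": "text/srt" }'
```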

👉 [Messages API](/docs/conversation-api/messages)

@@ -49,12 +53,12 @@ Using the conversation API, you can get a pre-formatted transcript in markdown l
Remember to replace the `conversationId` in the API call with the Conversation ID you get from the previous API call.

<Tabs
defaultValue="cURL"
values={[
{ label: 'cURL', value: 'cURL', },
{ label: 'Node.js', value: 'nodejs', },
{ label: 'Javascript', value: 'javascript', }
]
defaultValue="cURL"
values={[
{ label: 'cURL', value: 'cURL', },
{ label: 'Node.js', value: 'nodejs', },
{ label: 'Javascript', value: 'javascript', }
]
}>
<TabItem value="cURL">

@@ -68,17 +72,20 @@ curl "https://api.symbl.ai/v1/conversations/{conversationId}/messages" \
<TabItem value="nodejs">

```js
const request = require('request');
const request = require("request");
const authToken = AUTH_TOKEN;
const conversationId = "conversationId";

request.get({
request.get(
{
url: `https://api.symbl.ai/v1/conversations/${conversationId}/messages`,
headers: { 'Authorization': `Bearer ${authToken}` },
json: true
}, (err, response, body) => {
headers: { Authorization: `Bearer ${authToken}` },
json: true,
},
(err, response, body) => {
console.log(body);
});
}
);
```

</TabItem>
Expand All @@ -91,37 +98,37 @@ const url = `https://api.symbl.ai/v1/conversations/${conversationId}/messages`;

// Set headers
let headers = new Headers();
headers.append('Authorization', `Bearer ${authToken}`);
headers.append("Authorization", `Bearer ${authToken}`);

const data = {
method: "GET",
headers: headers,
}
};

// https://developer.mozilla.org/en-US/docs/Web/API/Request
const request = new Request(url, data);

fetch(request)
.then(response => {
console.log('response', response);
.then((response) => {
console.log("response", response);
if (response.status === 200) {
return response.json();
} else {
throw new Error('Something went wrong on api server!');
throw new Error("Something went wrong on api server!");
}
})
.then(response => {
console.log('Success');
.then((response) => {
console.log("Success");
// ...
}).catch(error => {
})
.catch((error) => {
console.error(error);
});
```

</TabItem>
</Tabs>


## Our customers love our Speech to Text! ❤️


<iframe width="800" height="315" src="https://twitframe.com/show?url=https://twitter.com/yac/status/1362174456093945857" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowFullScreen></iframe>