Commit

microsoft#2674 Update embed docs
corinagum committed Dec 8, 2019
1 parent 96db5b8 commit d8ae54c
Showing 4 changed files with 79 additions and 89 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -151,6 +151,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
- `bundle`: Webpack will now use `webpack-stats-plugin` instead of `webpack-visualizer-plugin`, by [@compulim](https://github.com/compulim) in PR [#2584](https://github.com/microsoft/BotFramework-WebChat/pull/2584)
- This will fix [#2583](https://github.com/microsoft/BotFramework-WebChat/issues/2583) by not bringing in a transient dependency on React
- To view the bundle stats, browse to https://chrisbateman.github.io/webpack-visualizer/ and drop the file `/packages/bundle/dist/stats.json`
- Resolves [#2674](https://github.com/microsoft/BotFramework-WebChat/issues/2674). Update embed docs, by [@corinagum](https://github.com/corinagum), in PR [#2696](https://github.com/microsoft/BotFramework-WebChat/pull/2696)

### Samples

51 changes: 28 additions & 23 deletions DIRECT_LINE_SPEECH.md
@@ -10,7 +10,7 @@ We assume you have already set up a bot and have Web Chat running on a page.
## What is Direct Line Speech?

Direct Line Speech is designed for voice assistant scenarios: for example, smart displays, automotive dashboards, and navigation systems with low-latency requirements, built as _single-page applications_ and _progressive web apps_ (PWA). These apps are usually made with a highly customized UI and do not show a conversation transcript.

You can look at our samples [13.a.customization-speech-ui](https://microsoft.github.io/BotFramework-WebChat/samples/13.a.customization-speech-ui) and [13.b.smart-display](https://microsoft.github.io/BotFramework-WebChat/samples/13.b.customization-smart-display) for target scenarios.

@@ -247,8 +247,8 @@ Please look at our sample `06.i.direct-line-speech` for embedding Web Chat on you
After setting up Direct Line Speech on Azure Bot Services, there are two steps for using Direct Line Speech:

- [Retrieve your Direct Line Speech credentials](#retrieve-your-direct-line-speech-credentials)
- [Render Web Chat using Direct Line Speech adapters](#render-web-chat-using-direct-line-speech-adapters)

### Retrieve your Direct Line Speech credentials

@@ -260,17 +260,20 @@ In the following code snippets, we assume sending an HTTP POST request to https:/

```js
const fetchCredentials = async () => {
  // Request an authorization token and region from the token server.
  const res = await fetch(
    'https://webchat-mockbot-streaming.azurewebsites.net/speechservices/token',
    {
      method: 'POST'
    }
  );

  if (!res.ok) {
    throw new Error('Failed to fetch authorization token and region.');
  }

  // The token server returns both the authorization token and the Speech Services region.
  const { authorizationToken, region } = await res.json();

  return { authorizationToken, region };
};
```
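The token URL above points to our MockBot demo. As a rough sketch only, a token server like it might exchange a Speech Services subscription key (kept on the server) for a short-lived authorization token. Express, node-fetch, the route path, and the environment variable names below are assumptions for illustration, not part of this repository:

```js
const express = require('express');
const fetch = require('node-fetch');

// Illustrative environment variable names; use your own configuration mechanism.
const { SPEECH_SERVICES_REGION, SPEECH_SERVICES_SUBSCRIPTION_KEY } = process.env;

const app = express();

app.post('/speechservices/token', async (_, res) => {
  // Exchange the subscription key (kept server-side) for a short-lived authorization token.
  const tokenRes = await fetch(
    `https://${SPEECH_SERVICES_REGION}.api.cognitive.microsoft.com/sts/v1.0/issueToken`,
    {
      headers: { 'Ocp-Apim-Subscription-Key': SPEECH_SERVICES_SUBSCRIPTION_KEY },
      method: 'POST'
    }
  );

  if (!tokenRes.ok) {
    return res.status(500).send('Failed to issue authorization token.');
  }

  // Web Chat expects both the authorization token and the region.
  res.json({
    authorizationToken: await tokenRes.text(),
    region: SPEECH_SERVICES_REGION
  });
});

app.listen(process.env.PORT || 3000);
```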

@@ -281,13 +284,15 @@ const fetchCredentials = async () => {
After you have the `fetchCredentials` function set up, you can pass it to the `createDirectLineSpeechAdapters` function. This function will return a set of adapters used by Web Chat, including the DirectLineJS adapter and the Web Speech adapter.

```js
const adapters = await window.WebChat.createDirectLineSpeechAdapters({
  fetchCredentials
});

window.WebChat.renderWebChat(
  {
    // Spread the returned adapters (DirectLineJS adapter and Web Speech adapter) into Web Chat props.
    ...adapters
  },
  document.getElementById('webchat')
);
```
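Note that the snippet above uses `await` at the top level, so it needs to run inside an async context. When embedding it in a plain `<script>` tag, one option (a minimal sketch, not the only approach) is to wrap both calls in an async IIFE:

```js
(async function() {
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials
  });

  window.WebChat.renderWebChat(
    {
      ...adapters
    },
    document.getElementById('webchat')
  );
})().catch(err => console.error(err));
```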

@@ -301,13 +306,13 @@ window.WebChat.renderWebChat(
You can specify the user ID when you instantiate Web Chat (see the sketch after the list below).

- If you specify a user ID
  - A `conversationUpdate` activity will be sent on connect and on every reconnect, with your user ID in the `membersAdded` field.
  - All `message` activities will be sent with your user ID in the `from.id` field.
- If you do not specify a user ID
  - A `conversationUpdate` activity will be sent on connect and on every reconnect. The `membersAdded` field will contain an empty string as the user ID.
  - All `message` activities will be sent with a randomized user ID.
  - The user ID is kept the same across reconnections.
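As a sketch, assuming the `userID` option can be passed alongside `fetchCredentials` to `createDirectLineSpeechAdapters` (the ID shown is only a placeholder):

```js
const adapters = await window.WebChat.createDirectLineSpeechAdapters({
  fetchCredentials,
  // Assumed option; the value is a placeholder user ID.
  userID: 'user-12345'
});

window.WebChat.renderWebChat(
  {
    ...adapters
  },
  document.getElementById('webchat')
);
```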

### Connection idle and reconnection
