diff --git a/.gitignore b/.gitignore index 30baf6cf5c..49a8e24056 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,5 @@ /__tests__/__image_snapshots__/**/__diff_output__ +/.env /coverage /debug.log /gh-pages diff --git a/CHANGELOG.md b/CHANGELOG.md index ecf3134326..b9d9adc2f0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -27,6 +27,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0. - Resolves [#2897](https://github.com/microsoft/BotFramework-WebChat/issues/2897). Moved from JUnit to VSTest reporter with file attachments, by [@compulim](https://github.com/compulim) in PR [#2990](https://github.com/microsoft/BotFramework-WebChat/pull/2990) - Added `aria-label` attribute support for default Markdown engine, by [@patniko](https://github.com/patniko) in PR [#3022](https://github.com/microsoft/BotFramework-WebChat/pull/3022) - Resolves [#2969](https://github.com/microsoft/BotFramework-WebChat/issues/2969). Support sovereign cloud for Cognitive Services Speech Services, by [@compulim](https://github.com/compulim) in PR [#3040](https://github.com/microsoft/BotFramework-WebChat/pull/3040) +- Resolves [#2481](https://github.com/microsoft/BotFramework-WebChat/issues/2481). Support selecting different audio input devices for Cognitive Services Speech Services, by [@compulim](https://github.com/compulim) in PR [#3079](https://github.com/microsoft/BotFramework-WebChat/pull/3079) ### Fixed @@ -43,6 +44,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0. - Fixes [#3074](https://github.com/microsoft/BotFramework-WebChat/issues/3074). Keep `props.locale` when sending to the bot, by [@compulim](https://github.com/compulim) in PR [#3095](https://github.com/microsoft/BotFramework-WebChat/pull/3095) - Fixes [#3096](https://github.com/microsoft/BotFramework-WebChat/issues/3096). Use `` instead of `aria-label` for message bubbles, by [@compulim](https://github.com/compulim) in PR [#3097](https://github.com/microsoft/BotFramework-WebChat/pull/3097) - Fixes [#2876](https://github.com/microsoft/BotFramework-WebChat/issues/2876). `messageBack` and `postBack` should send even if both `text` and `value` are falsy or `undefined`, by [@compulim](https://github.com/compulim) in PR [#3120](https://github.com/microsoft/BotFramework-WebChat/pull/3120) +- Fixes [#2668](https://github.com/microsoft/BotFramework-WebChat/issues/2668). Disable Web Audio on insecure connections, by [@compulim](https://github.com/compulim) in PR [#3079](https://github.com/microsoft/BotFramework-WebChat/pull/3079) ### Changed @@ -102,11 +104,13 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
- [`core-js@3.6.4`](https://npmjs.com/package/core-js) - Bumped Chrome Docker image to `3.141.59-zirconium` (Chrome 80.0.3987.106), by [@compulim](https://github.com/compulim) in PR [#2992](https://github.com/microsoft/BotFramework-WebChat/pull/2992) - Added `4.8.0` to `embed/servicingPlan.json`, by [@compulim](https://github.com/compulim) in PR [#2986](https://github.com/microsoft/BotFramework-WebChat/pull/2986) -- Bumped `microsoft-cognitiveservices-speech-sdk@1.10.1` and `web-speech-cognitive-services@6.1.0`, by [@compulim](https://github.com/compulim) in PR [#3040](https://github.com/BotFramework-WebChat/pull/3040) +- Bumped `microsoft-cognitiveservices-speech-sdk@1.10.1` and `web-speech-cognitive-services@6.1.0`, by [@compulim](https://github.com/compulim) in PR [#3040](https://github.com/microsoft/BotFramework-WebChat/pull/3040) +- Resolved [#2886](https://github.com/microsoft/BotFramework-WebChat/issues/2886) and [#2987](https://github.com/microsoft/BotFramework-WebChat/issues/2987), converged all references of [`microsoft-cognitiveservices-speech-sdk`](https://npmjs.com/package/microsoft-cognitiveservices-speech-sdk) to reduce footprint, by [@compulim](https://github.com/compulim) in PR [#3079](https://github.com/microsoft/BotFramework-WebChat/pull/3079) ## Samples - Resolves [#2806](https://github.com/microsoft/BotFramework-WebChat/issues/2806), added [Single sign-on with On Behalf Of Token Authentication](https://webchat-sample-obo.azurewebsites.net/) sample, by [@tdurnford](https://github.com/tdurnford) in [#2865](https://github.com/microsoft/BotFramework-WebChat/pull/2865) +- Resolves [#2481](https://github.com/microsoft/BotFramework-WebChat/issues/2481), added selectable audio input device sample, by [@compulim](https://github.com/compulim) in PR [#3079](https://github.com/microsoft/BotFramework-WebChat/pull/3079) ## [4.8.0] - 2020-03-05 diff --git a/__tests__/__image_snapshots__/chrome-docker/video-js-video-1-snap.png b/__tests__/__image_snapshots__/chrome-docker/video-js-video-1-snap.png index 14cc8017ae..dd6da4ceed 100644 Binary files a/__tests__/__image_snapshots__/chrome-docker/video-js-video-1-snap.png and b/__tests__/__image_snapshots__/chrome-docker/video-js-video-1-snap.png differ diff --git a/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-authorization-token-with-direct-line-protocol-should-recognize-hello-world-1-snap.png b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-authorization-token-with-direct-line-protocol-should-recognize-hello-world-1-snap.png new file mode 100644 index 0000000000..7227e92ffa Binary files /dev/null and b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-authorization-token-with-direct-line-protocol-should-recognize-hello-world-1-snap.png differ diff --git a/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-authorization-token-with-direct-line-speech-protocol-should-recognize-hello-world-1-snap.png b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-authorization-token-with-direct-line-speech-protocol-should-recognize-hello-world-1-snap.png new file mode 100644 index 0000000000..2606098a4d Binary files /dev/null and b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-authorization-token-with-direct-line-speech-protocol-should-recognize-hello-world-1-snap.png differ diff --git
a/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-subscription-key-with-direct-line-protocol-should-recognize-hello-world-1-snap.png b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-subscription-key-with-direct-line-protocol-should-recognize-hello-world-1-snap.png new file mode 100644 index 0000000000..7227e92ffa Binary files /dev/null and b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-subscription-key-with-direct-line-protocol-should-recognize-hello-world-1-snap.png differ diff --git a/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-subscription-key-with-direct-line-speech-protocol-should-recognize-hello-world-1-snap.png b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-subscription-key-with-direct-line-speech-protocol-should-recognize-hello-world-1-snap.png new file mode 100644 index 0000000000..2606098a4d Binary files /dev/null and b/__tests__/__image_snapshots__/html/speech-recognition-simple-js-speech-recognition-using-subscription-key-with-direct-line-speech-protocol-should-recognize-hello-world-1-snap.png differ diff --git a/__tests__/html/__jest__/fetchSpeechServicesAuthorizationToken.js b/__tests__/html/__jest__/fetchSpeechServicesAuthorizationToken.js new file mode 100644 index 0000000000..a2615b16d0 --- /dev/null +++ b/__tests__/html/__jest__/fetchSpeechServicesAuthorizationToken.js @@ -0,0 +1,24 @@ +import fetch from 'node-fetch'; + +export default async function fetchSpeechServicesAuthorizationToken({ region, subscriptionKey, tokenURL }) { + if (!region && !tokenURL) { + throw new Error('Either "region" or "tokenURL" must be specified.'); + } else if (region && tokenURL) { + throw new Error('Only either "region" or "tokenURL" can be specified.'); + } else if (!subscriptionKey) { + throw new Error('"subscriptionKey" must be specified.'); + } + + const res = await fetch(tokenURL || `https://${region}.api.cognitive.microsoft.com/sts/v1.0/issueToken`, { + headers: { + 'Ocp-Apim-Subscription-Key': subscriptionKey + }, + method: 'POST' + }); + + if (!res.ok) { + throw new Error(`Failed to fetch authorization token, server returned ${res.status}`); + } + + return await res.text(); +} diff --git a/__tests__/html/__jest__/runPageProcessor.js b/__tests__/html/__jest__/runPageProcessor.js index 6bc4a5847e..323b4d90d2 100644 --- a/__tests__/html/__jest__/runPageProcessor.js +++ b/__tests__/html/__jest__/runPageProcessor.js @@ -1,5 +1,8 @@ import { join } from 'path'; +import { promisify } from 'util'; +import { tmpdir } from 'os'; import createDeferred from 'p-defer'; +import fs from 'fs'; import { imageSnapshotOptions } from '../../constants.json'; import createJobObservable from './createJobObservable'; @@ -9,6 +12,8 @@ const customImageSnapshotOptions = { customSnapshotsDir: join(__dirname, '../../__image_snapshots__/html') }; +const writeFile = promisify(fs.writeFile); + export default async function runPageProcessor(driver, { ignoreConsoleError = false, ignorePageError = false } = {}) { const webChatLoaded = await driver.executeScript(() => !!window.WebChat); const webChatTestLoaded = await driver.executeScript(() => !!window.WebChatTest); @@ -51,18 +56,25 @@ export default async function runPageProcessor(driver, { ignoreConsoleError = fa }, next: async ({ deferred, job }) => { try { + let result; + if (job.type === 'snapshot') { - try { - expect(await 
driver.takeScreenshot()).toMatchImageSnapshot(customImageSnapshotOptions); - deferred.resolve(); - } catch (err) { - pageResultDeferred.reject(err); - deferred.reject(err); - } + expect(await driver.takeScreenshot()).toMatchImageSnapshot(customImageSnapshotOptions); + } else if (job.type === 'save file') { + const filename = join(tmpdir(), `${Date.now()}-${job.payload.filename}`); + + await writeFile(filename, Buffer.from(job.payload.base64, 'base64')); + + console.log(`Saved to ${filename}`); + + result = filename; } else { throw new Error(`Unknown job type "${job.type}".`); } + + deferred.resolve(result); } catch (err) { + pageResultDeferred.reject(err); deferred.reject(err); } } diff --git a/__tests__/html/__jest__/setupRunHTMLTest.js b/__tests__/html/__jest__/setupRunHTMLTest.js index f2bf7841e7..9f3c7aa7d8 100644 --- a/__tests__/html/__jest__/setupRunHTMLTest.js +++ b/__tests__/html/__jest__/setupRunHTMLTest.js @@ -34,24 +34,22 @@ global.runHTMLTest = async ( const params = parseURLParams(new URL(url, 'http://webchat2/').hash); try { - // For unknown reason, if we use ?wd=1, it will be removed. - // But when we use #wd=1, it kept. + // We are only parsing the "hash" from "url"; the "http://localhost/" base is actually ignored. + let { hash } = new URL(url, 'http://localhost/'); - if (global.docker) { - params.wd = 1; + if (hash) { + hash += '&wd=1'; + } else { + hash = '#wd=1'; } - const baseURL = global.docker - ? new URL(url, 'http://webchat2/') - : new URL(url, `http://localhost:${global.webServerPort}/`); - - const hash = - '#' + - Object.entries(params) - .map(([name, value]) => `${encodeURIComponent(name)}=${encodeURIComponent(value)}`) - .join('&'); - - await driver.get(new URL(hash, baseURL)); + // For an unknown reason, if we use ?wd=1, it will be removed. + // But when we use #wd=1, it is kept. + await driver.get( + global.docker + ? new URL(hash, new URL(url, 'http://webchat2/')) + : new URL(url, `http://localhost:${global.webServerPort}/`) + ); await runPageProcessor(driver, { ignoreConsoleError, ignorePageError }); diff --git a/__tests__/html/offlineUI.fatalError.html b/__tests__/html/offlineUI.fatalError.html index 8534431fe2..9badf57496 100644 --- a/__tests__/html/offlineUI.fatalError.html +++ b/__tests__/html/offlineUI.fatalError.html @@ -15,7 +15,7 @@
(the changed markup lines of offlineUI.fatalError.html, and likely the new speechRecognition.simple.html test page, were stripped during HTML extraction)
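For reference, a minimal sketch of how the `fetchSpeechServicesAuthorizationToken` helper added above might be called from a test; the `westus2` region is an illustrative placeholder:

```js
import fetchSpeechServicesAuthorizationToken from './__jest__/fetchSpeechServicesAuthorizationToken';

(async function () {
  // Exchange a subscription key for a short-lived authorization token.
  // "westus2" is a placeholder; pass "tokenURL" instead of "region" for sovereign clouds.
  const authorizationToken = await fetchSpeechServicesAuthorizationToken({
    region: 'westus2',
    subscriptionKey: process.env.COGNITIVE_SERVICES_SUBSCRIPTION_KEY
  });

  // The token is returned as a bare string, ready for use as a Speech Services credential.
  console.log(`Fetched token of length ${authorizationToken.length}.`);
})();
```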
diff --git a/__tests__/html/speechRecognition.simple.js b/__tests__/html/speechRecognition.simple.js new file mode 100644 index 0000000000..f6d3207bfe --- /dev/null +++ b/__tests__/html/speechRecognition.simple.js @@ -0,0 +1,102 @@ +/** + * @jest-environment ./__tests__/html/__jest__/WebChatEnvironment.js + */ + +import fetch from 'node-fetch'; + +import fetchSpeechServicesAuthorizationToken from './__jest__/fetchSpeechServicesAuthorizationToken'; + +const { + COGNITIVE_SERVICES_REGION, + COGNITIVE_SERVICES_SUBSCRIPTION_KEY, + DIRECT_LINE_SPEECH_REGION, + DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY +} = process.env; + +describe.each([ + ['authorization token with Direct Line protocol', { useAuthorizationToken: true }], + ['subscription key with Direct Line protocol', {}], + ['authorization token with Direct Line Speech protocol', { useAuthorizationToken: true, useDirectLineSpeech: true }], + ['subscription key with Direct Line Speech protocol', { useDirectLineSpeech: true }] +])('speech recognition using %s', (_, { useAuthorizationToken, useDirectLineSpeech }) => { + test('should recognize "Hello, World!".', async () => { + let queryParams; + + if (useAuthorizationToken) { + if (useDirectLineSpeech && DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY) { + queryParams = { + sa: await fetchSpeechServicesAuthorizationToken({ + region: DIRECT_LINE_SPEECH_REGION, + subscriptionKey: DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY + }), + sr: DIRECT_LINE_SPEECH_REGION, + t: 'dlspeech' + }; + } else if (!useDirectLineSpeech && COGNITIVE_SERVICES_SUBSCRIPTION_KEY) { + queryParams = { + sa: await fetchSpeechServicesAuthorizationToken({ + region: COGNITIVE_SERVICES_REGION, + subscriptionKey: COGNITIVE_SERVICES_SUBSCRIPTION_KEY + }), + sr: COGNITIVE_SERVICES_REGION, + t: 'dl' + }; + } else { + if (useDirectLineSpeech) { + console.warn( + 'No environment variable "DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY" is set, using the authorization token from webchat-waterbottle.' + ); + } else { + console.warn( + 'No environment variable "COGNITIVE_SERVICES_SUBSCRIPTION_KEY" is set, using the authorization token from webchat-waterbottle.' + ); + } + + const res = await fetch('https://webchat-mockbot-streaming.azurewebsites.net/speechservices/token', { + headers: { origin: 'http://localhost' }, + method: 'POST' + }); + + if (!res.ok) { + throw new Error( + `Failed to fetch Cognitive Services Speech Services credentials, server returned ${res.status}` + ); + } + + const { region, token: authorizationToken } = await res.json(); + + queryParams = { sa: authorizationToken, sr: region, t: useDirectLineSpeech ? 'dlspeech' : 'dl' }; + } + } else { + if (useDirectLineSpeech) { + if (DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY) { + queryParams = { + sr: DIRECT_LINE_SPEECH_REGION, + ss: DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY, + t: 'dlspeech' + }; + } else { + return console.warn( + 'No environment variable "DIRECT_LINE_SPEECH_SUBSCRIPTION_KEY" is set, skipping this test.' + ); + } + } else { + if (COGNITIVE_SERVICES_SUBSCRIPTION_KEY) { + queryParams = { + sr: COGNITIVE_SERVICES_REGION, + ss: COGNITIVE_SERVICES_SUBSCRIPTION_KEY, + t: 'dl' + }; + } else { + return console.warn( + 'No environment variable "COGNITIVE_SERVICES_SUBSCRIPTION_KEY" is set, skipping this test.'
+ ); + } + } + } + + return runHTMLTest(`speechRecognition.simple.html#${new URLSearchParams(queryParams)}`, { + ignoreConsoleError: true + }); + }); +}); diff --git a/__tests__/html/toast.html b/__tests__/html/toast.html index 99f11ed9a6..a9e1cf0ff8 100644 --- a/__tests__/html/toast.html +++ b/__tests__/html/toast.html @@ -15,7 +15,7 @@
(the changed script lines of toast.html were stripped during HTML extraction)
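The new test above hands credentials to the test page through the URL hash (`sa`, `sr`, `ss` and `t`). The page itself (`speechRecognition.simple.html`) was stripped during extraction, but here is a sketch of how it presumably decodes those parameters; the parsing code and variable names are assumptions, only the parameter names come from the test:

```js
// Hypothetical decoding on the test-page side of the hash built by speechRecognition.simple.js.
const params = new URLSearchParams(location.hash.slice(1));

const authorizationToken = params.get('sa') || undefined; // Speech Services authorization token
const region = params.get('sr'); // Speech Services region
const subscriptionKey = params.get('ss') || undefined; // Speech Services subscription key
const useDirectLineSpeech = params.get('t') === 'dlspeech'; // "dl" (Direct Line) or "dlspeech"
```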
(most of the diff for the new samples/03.speech/h.select-audio-input-device/README.md was stripped during HTML extraction; only its closing sections survive) +``` + +# Further reading + +- [Cognitive Services Speech Services website](https://azure.microsoft.com/en-us/services/cognitive-services/speech-services/) + +## Full list of Web Chat Hosted Samples + +View the list of [available Web Chat samples](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples) diff --git a/samples/03.speech/h.select-audio-input-device/comprehensive.css b/samples/03.speech/h.select-audio-input-device/comprehensive.css new file mode 100644 index 0000000000..654e995207 --- /dev/null +++ b/samples/03.speech/h.select-audio-input-device/comprehensive.css @@ -0,0 +1,64 @@ +html, +body { + height: 100%; +} + +body { + margin: 0; +} + +#app { + display: flex; + flex-direction: column; + height: 100%; + width: 100%; +} + +#app .webchat { + overflow: hidden; +} + +#app .app__settings-panel { + background-color: #ffc; + border-radius: 4px; + border: solid 1px rgba(0, 0, 0, 0.1); + box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); + flex-shrink: 0; + font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', + 'Helvetica Neue', sans-serif; + font-size: 14px; + margin: 10px; + overflow: hidden; + padding: 10px; + position: relative; +} + +#app .app__settings-panel__header { + background-color: #090; + border-radius: 0 0 0 4px; + color: White; + font-size: 11px; + padding: 0px 4px 2px; + position: absolute; + right: 0; + top: 0; +} + +#app .app__settings-panel__list { + list-style-type: none; + margin: 0; + padding: 0; +} + +#app .app__settings-panel__list-item:not(:first-child) { + padding-top: 0.5em; +} + +#app .app__settings-panel__row { + display: flex; +} + +#app .app__settings-panel__radio-button { + margin-left: 0; + margin-right: 0.5em; +} diff --git a/samples/03.speech/h.select-audio-input-device/comprehensive.html b/samples/03.speech/h.select-audio-input-device/comprehensive.html new file mode 100644 index 0000000000..e1b6ca0057 --- /dev/null +++ b/samples/03.speech/h.select-audio-input-device/comprehensive.html @@ -0,0 +1,25 @@ (the 25 lines of page markup were stripped during HTML extraction; the page title was "Web Chat: Cognitive Services Speech Services with selectable audio input device")
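Before the main sample file, here is the device-enumeration pattern that `comprehensive.js` below builds on, isolated into a minimal sketch; `listAudioInputDevices` is an illustrative name, not part of the sample:

```js
// navigator.mediaDevices is undefined outside a secure context
// (the page must be served via HTTPS or from localhost).
async function listAudioInputDevices() {
  const { mediaDevices } = navigator;

  if (!mediaDevices) {
    // Mirror the sample, which uses false to mean "Web Audio unavailable".
    return false;
  }

  // enumerateDevices() also returns "audiooutput" and "videoinput" entries; keep microphones only.
  return (await mediaDevices.enumerateDevices()).filter(({ kind }) => kind === 'audioinput');
}
```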
diff --git a/samples/03.speech/h.select-audio-input-device/comprehensive.js b/samples/03.speech/h.select-audio-input-device/comprehensive.js new file mode 100644 index 0000000000..274ebe5db1 --- /dev/null +++ b/samples/03.speech/h.select-audio-input-device/comprehensive.js @@ -0,0 +1,239 @@ +'use strict'; + +// Fetch the Direct Line Speech credentials. +async function fetchDirectLineSpeechCredentials() { + const res = await fetch('https://webchat-mockbot-streaming.azurewebsites.net/speechservices/token', { + method: 'POST' + }); + + if (!res.ok) { + throw new Error('Failed to fetch authorization token and region.'); + } + + const { region, token: authorizationToken } = await res.json(); + + return { authorizationToken, region }; +} + +// Create a function to fetch the Cognitive Services Speech Services credentials. +// The async function created will hold expiration information about the token and will return a cached token when possible. +function createFetchSpeechServicesCredentials() { + let expireAfter = 0; + let lastPromise; + + return () => { + const now = Date.now(); + + // Fetch a new token if the existing one is expiring. + // The following article mentions the token is only valid for 10 minutes. + // We will invalidate the token after 5 minutes. + // https://docs.microsoft.com/en-us/azure/cognitive-services/authentication#authenticate-with-an-authentication-token + if (now > expireAfter) { + expireAfter = now + 300000; + lastPromise = fetch('https://webchat-mockbot.azurewebsites.net/speechservices/token', { + method: 'POST' + }).then( + res => res.json(), + err => { + expireAfter = 0; + + return Promise.reject(err); + } + ); + } + + return lastPromise; + }; +} + +(async function() { + // In this demo, we are using a Direct Line token from MockBot. + // Your client code must provide either a secret or a token to talk to your bot. + // Tokens are more secure. To learn about the differences between secrets and tokens, + // and to understand the risks associated with using secrets, visit https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-direct-line-3-0-authentication?view=azure-bot-service-4.0 + + const res = await fetch('https://webchat-mockbot.azurewebsites.net/directline/token', { method: 'POST' }); + const { token } = await res.json(); + + // Imports + const { + navigator: { mediaDevices }, + React: { useCallback, useEffect, useMemo, useState }, + ReactDOM, + WebChat: { + createCognitiveServicesSpeechServicesPonyfillFactory, + createDirectLine, + createDirectLineSpeechAdapters, + ReactWebChat + } + } = window; + + const App = () => { + // List of audio input devices. The list is initially empty until the async function call finishes. + // If the browser does not support media devices, we will set it to false to show an error message. + const [audioInputDevices, setAudioInputDevices] = useState(mediaDevices ? [] : false); + + // Channel to use, either "directline" or "directlinespeech". + const [channel, setChannel] = useState('directline'); + + // Time when the device list last changed. This is used to trigger a re-enumeration of devices. + const [lastDeviceChangeAt, setLastDeviceChangeAt] = useState(0); + + // Device ID of the audio input device selected by the user. + const [selectedAudioInputDeviceId, setSelectedAudioInputDeviceId] = useState('default'); + + // Direct Line adapter instance will be kept across render if the channel does not change.
+ const directLine = useMemo(() => channel !== 'directlinespeech' && createDirectLine({ token }), [channel, token]); + + // Function instance to fetch credentials for Cognitive Services Speech Services. The instance is kept across render to cache the credentials. + const fetchSpeechServicesCredentials = useMemo(() => createFetchSpeechServicesCredentials(), []); + + // Create the ponyfill factory function, which can be called to create a concrete implementation of the ponyfill. + // The ponyfill factory will be discarded when the channel changes. + const webSpeechPonyfillFactory = useMemo( + () => + channel !== 'directlinespeech' && + createCognitiveServicesSpeechServicesPonyfillFactory({ + // Device ID of the audio input device to use. + audioInputDeviceId: selectedAudioInputDeviceId, + + // We are passing the Promise function to the "credentials" field. + // This function will be called every time the token is being used. + credentials: fetchSpeechServicesCredentials + }), + [channel, fetchSpeechServicesCredentials, selectedAudioInputDeviceId] + ); + + // When the user selects a different channel, we change the "channel" based on their selection. + const handleChannelChange = useCallback(({ target: { value } }) => setChannel(value), [setChannel]); + + // When the user selects a different device, we change the "selectedAudioInputDeviceId" based on their selection. + const handleSelectedAudioInputDeviceIdChange = useCallback( + ({ target: { value } }) => setSelectedAudioInputDeviceId(value), + [setSelectedAudioInputDeviceId] + ); + + // This is the set of adapters for Web Chat to use. + const [adapters, setAdapters] = useState(false); + + // When the channel or the selected audio input device changes, we will re-create the set of adapters. + useEffect(() => { + if (channel === 'directlinespeech') { + // Direct Line Speech adapter set creation is an asynchronous call, + // we will temporarily disable Web Chat until the adapter set is ready to use. + setAdapters(false); + + let directLine; + + (async function() { + const adapters = await createDirectLineSpeechAdapters({ + audioInputDeviceId: selectedAudioInputDeviceId, + fetchCredentials: fetchDirectLineSpeechCredentials + }); + + directLine = adapters.directLine; + setAdapters(adapters); + })(); + + return () => directLine && directLine.end(); + } else { + // Direct Line adapter and Web Speech ponyfill are created. + // We separate the adapter/ponyfill creation code so we can update either one of them without affecting the other one. + setAdapters({ directLine, webSpeechPonyfillFactory }); + + return () => directLine.end(); + } + }, [channel, selectedAudioInputDeviceId]); + + // When a device is plugged or unplugged, we will re-enumerate the device list. + useEffect(() => { + // "mediaDevices" is undefined if the loaded page is not secure: + // 1. The page is not loaded via HTTPS, and + // 2. The page is not loaded from localhost. + if (mediaDevices) { + const handleDeviceChange = () => setLastDeviceChangeAt(Date.now()); + + mediaDevices.addEventListener('devicechange', handleDeviceChange); + + return () => mediaDevices.removeEventListener('devicechange', handleDeviceChange); + } + }, [setLastDeviceChangeAt]); + + // Enumerate the device list on initial page load or when a device is plugged/unplugged. + useEffect(() => { + (async function() { + mediaDevices && + setAudioInputDevices( + // We will only list "audioinput" devices. + (await mediaDevices.enumerateDevices()).filter(({ kind }) => kind === 'audioinput') + ); + })(); + }, [lastDeviceChangeAt, setAudioInputDevices]); + + return (
(the JSX below was reconstructed; the original markup was stripped during HTML extraction, and its structure is inferred from the surviving expressions, the class names defined in comprehensive.css, and the handlers defined above)
+      <React.Fragment>
+        <div className="app__settings-panel">
+          <header className="app__settings-panel__header">Channel</header>
+          <ul className="app__settings-panel__list">
+            <li className="app__settings-panel__list-item">
+              <label className="app__settings-panel__row">
+                <input checked={channel === 'directline'} className="app__settings-panel__radio-button" name="channel" onChange={handleChannelChange} type="radio" value="directline" />
+                Direct Line
+              </label>
+            </li>
+            <li className="app__settings-panel__list-item">
+              <label className="app__settings-panel__row">
+                <input checked={channel === 'directlinespeech'} className="app__settings-panel__radio-button" name="channel" onChange={handleChannelChange} type="radio" value="directlinespeech" />
+                Direct Line Speech
+              </label>
+            </li>
+          </ul>
+        </div>
+        <div className="app__settings-panel">
+          <header className="app__settings-panel__header">Audio input devices</header>
+          {audioInputDevices ? (
+            <ul className="app__settings-panel__list">
+              {audioInputDevices.map(({ deviceId, label }) => (
+                <li className="app__settings-panel__list-item" key={deviceId}>
+                  <label className="app__settings-panel__row">
+                    <input checked={selectedAudioInputDeviceId === deviceId} className="app__settings-panel__radio-button" name="audio-input-device" onChange={handleSelectedAudioInputDeviceIdChange} type="radio" value={deviceId} />
+                    {label}
+                  </label>
+                </li>
+              ))}
+            </ul>
+          ) : (
+            <div>Your browser does not support Web Audio or this page is not loaded via HTTPS or from localhost.</div>
+          )}
+        </div>
+        {/* We are recreating <ReactWebChat> on channel change */}
+        {adapters && <ReactWebChat className="webchat" key={channel} {...adapters} />}
+      </React.Fragment>
+ ); + }; + + // Pass a Web Speech ponyfill factory to Web Chat. + // You can also use your own speech engine, provided it is compliant with the W3C Web Speech API: https://w3c.github.io/speech-api/. + // For implementors, look at createBrowserWebSpeechPonyfill.js for details. + ReactDOM.render(<App />, document.getElementById('app')); +})().catch(err => console.error(err)); diff --git a/samples/03.speech/h.select-audio-input-device/index.html b/samples/03.speech/h.select-audio-input-device/index.html new file mode 100644 index 0000000000..b01a514a15 --- /dev/null +++ b/samples/03.speech/h.select-audio-input-device/index.html @@ -0,0 +1,103 @@ (the 103 lines of page markup were stripped during HTML extraction; the page title was "Web Chat: Cognitive Services Speech Services with selectable audio input device")
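The body of `index.html` was stripped above. Judging from `comprehensive.js` and the sibling `03.speech` samples, the page plausibly reduces to something like the following; the `webchat` element ID and the MockBot endpoints mirror the comprehensive sample, while the exact markup is an assumption:

```js
(async function () {
  // Fetch a Direct Line token from the demo bot.
  const res = await fetch('https://webchat-mockbot.azurewebsites.net/directline/token', { method: 'POST' });
  const { token } = await res.json();

  window.WebChat.renderWebChat(
    {
      directLine: window.WebChat.createDirectLine({ token }),
      webSpeechPonyfillFactory: window.WebChat.createCognitiveServicesSpeechServicesPonyfillFactory({
        // "default" picks the system default microphone; any deviceId from
        // navigator.mediaDevices.enumerateDevices() can be used instead.
        audioInputDeviceId: 'default',
        credentials: () =>
          fetch('https://webchat-mockbot.azurewebsites.net/speechservices/token', { method: 'POST' }).then(res =>
            res.json()
          )
      })
    },
    document.getElementById('webchat')
  );
})().catch(err => console.error(err));
```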
diff --git a/samples/README.md b/samples/README.md index 61cc9701aa..b7949db733 100644 --- a/samples/README.md +++ b/samples/README.md @@ -37,6 +37,7 @@ Here you can find all hosted samples of [Web Chat](https://github.com/microsoft/ | [`03.speech/e.select-voice`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/03.speech/e.select-voice) | Demonstrates how to select speech synthesis voice based on activity. | [Select Voice Demo](https://microsoft.github.io/BotFramework-WebChat/03.speech/e.select-voice) | | [`03.speech/f.web-browser-speech`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/03.speech/f.web-browser-speech) | Demonstrates how to implement text-to-speech using Web Chat's browser-based Web Speech API. (link to W3C standard in the sample) | [Web Speech API Demo](https://microsoft.github.io/BotFramework-WebChat/03.speech/f.web-browser-speech) | | [`03.speech/g.hybrid-speech`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/03.speech/g.hybrid-speech) | Demonstrates how to use both browser-based Web Speech API for speech-to-text, and Cognitive Services Speech Services API for text-to-speech. | [Hybrid Speech Demo](https://microsoft.github.io/BotFramework-WebChat/03.speech/g.hybrid-speech) | +| [`03.speech/h.select-audio-input-device`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/03.speech/h.select-audio-input-device) | Demonstrates how to select an audio input device. | [Select Audio Input Device Demo](https://microsoft.github.io/BotFramework-WebChat/03.speech/h.select-audio-input-device) [(Comprehensive)](https://microsoft.github.io/BotFramework-WebChat/03.speech/h.select-audio-input-device/comprehensive.html) | | **API** | | | | [`04.api/a.welcome-event`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/04.api/a.welcome-event) | Advanced tutorial: Demonstrates how to send welcome event with client capabilities such as browser language. | [Welcome Event Demo](https://microsoft.github.io/BotFramework-WebChat/04.api/a.welcome-event) | | [`04.api/b.piggyback-on-outgoing-activities`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/04.api/b.piggyback-on-outgoing-activities) | Advanced tutorial: Demonstrates how to add custom data to every outgoing activity.
| [Backchannel Piggybacking Demo](https://microsoft.github.io/BotFramework-WebChat/04.api/b.piggyback-on-outgoing-activities) | @@ -71,4 +72,4 @@ Here you can find all hosted samples of [Web Chat](https://github.com/microsoft/ | [`07.advanced-web-chat-apps/b.sso-for-enterprise`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/07.advanced-web-chat-apps/b.sso-for-enterprise) | Demonstrates how to use single sign-on for enterprise single-page applications using OAuth | [Single Sign-On for Enterprise Single-Page Applications Demo](https://webchat-sample-sso.azurewebsites.net/) | | [`07.advanced-web-chat-apps/c.sso-for-intranet`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/07.advanced-web-chat-apps/c.sso-for-intranet) | Demonstrates how to use single sign-on for Intranet apps using Azure Active Directory | [Single Sign-On for Intranet Apps Demo](https://webchat-sample-sso-intranet.azurewebsites.net/) | | [`07.advanced-web-chat-apps/d.sso-for-teams`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/07.advanced-web-chat-apps/d.sso-for-teams) | Demonstrates how to use single sign-on for Microsoft Teams apps using Azure Active Directory | [Single Sign-On for Microsoft Teams Apps Demo](https://webchat-sample-sso-teams.azurewebsites.net/) | -| [`07.advanced-web-chat-apps/e.sso-on-behalf-of-authentication`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/07.advanced-web-chat-apps/e.sso-on-behalf-of-authentications) | Demonstrates how to use on-behalf-of authentication in enterprise application | [Single Sign-On On-Behalf-Of for Enterprise Apps Demo](https://webchat-sample-sso-teams.azurewebsites.net/) | +| [`07.advanced-web-chat-apps/e.sso-on-behalf-of-authentication`](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/07.advanced-web-chat-apps/e.sso-on-behalf-of-authentications) | Demonstrates how to use on-behalf-of authentication in an enterprise application | [Single Sign-On On-Behalf-Of for Enterprise Apps Demo](https://webchat-sample-sso-teams.azurewebsites.net/) |
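As a closing reference, the Direct Line Speech path from `comprehensive.js`, condensed into a standalone sketch; the `default` device ID and the `webchat` element ID are illustrative:

```js
(async function () {
  // Same credential fetcher as fetchDirectLineSpeechCredentials in comprehensive.js.
  const fetchCredentials = async () => {
    const res = await fetch('https://webchat-mockbot-streaming.azurewebsites.net/speechservices/token', {
      method: 'POST'
    });
    const { region, token: authorizationToken } = await res.json();

    return { authorizationToken, region };
  };

  // The adapter set contains both directLine and webSpeechPonyfillFactory.
  const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    audioInputDeviceId: 'default',
    fetchCredentials
  });

  window.WebChat.renderWebChat({ ...adapters }, document.getElementById('webchat'));
})().catch(err => console.error(err));
```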