Support SpeechRecognition input from audio files and Float32Array and ArrayBuffer #70

Open
guest271314 opened this issue Oct 8, 2019 · 3 comments

@guest271314

Support .wav, .webm, .ogg, and .mp3 files (the file types supported by the implementation's decoders), as well as Float32Array and ArrayBuffer input, to SpeechRecognition.

Use cases for static audio file and ArrayBuffer (non-"real-time") input to SpeechRecognition include, but are not limited to:

  • TTS to audio file, audio file to STT, audio output to TTS (document reader to audio output)
  • Research, development, testing, and analysis of speech recognition technologies in general and the accuracy of the application itself
  • Editing and modifying existing static audio files before SpeechRecognition input to achieve the expected text output

AudioWorkletNode can be used to stream Float32Array input.
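A minimal sketch of that approach (the processor and file names are illustrative, not part of any spec): an AudioWorkletProcessor that plays out Float32Array chunks posted from the main thread, one render quantum at a time.

```js
// buffer-stream-processor.js — runs in the AudioWorkletGlobalScope.
class BufferStreamProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.queue = []; // Float32Array chunks awaiting playout
    this.port.onmessage = (event) => this.queue.push(event.data);
  }
  process(inputs, outputs) {
    const output = outputs[0][0]; // first channel of the first output
    const chunk = this.queue.shift();
    if (chunk) output.set(chunk.subarray(0, output.length));
    return true; // keep the processor alive
  }
}
registerProcessor('buffer-stream-processor', BufferStreamProcessor);

// Main thread: stream Float32Array chunks into the graph.
// await audioContext.audioWorklet.addModule('buffer-stream-processor.js');
// const node = new AudioWorkletNode(audioContext, 'buffer-stream-processor');
// node.port.postMessage(float32Chunk);
// node.connect(audioContext.destination);
```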

Related #66

@Pehrsons

Pehrsons commented Oct 9, 2019

There are already several means of getting from audio files and buffers to audio MediaStreamTracks. Most of your example use cases are solvable by #66 and #69.
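For reference, a minimal sketch of one such existing route (standard Web Audio; the file name is illustrative): decode a file, play it through a MediaStreamAudioDestinationNode, and take the resulting audio MediaStreamTrack.

```js
const ctx = new AudioContext();
const encoded = await (await fetch('speech.wav')).arrayBuffer();
const audioBuffer = await ctx.decodeAudioData(encoded);
const source = new AudioBufferSourceNode(ctx, { buffer: audioBuffer });
const destination = new MediaStreamAudioDestinationNode(ctx);
source.connect(destination);
source.start();
const [track] = destination.stream.getAudioTracks(); // plays out in real time
```

Note that the track only produces audio at playback speed, which is the real-time limitation discussed next.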

The only thing this proposal would solve compared to those proposals is that it could process audio faster than real-time, i.e., faster than it would take to play the audio out.

Personally I think that particular problem is better solved by integrating with something like WebCodecs if/when it becomes mature and available.

@guest271314
Author

@Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on bring-your-own-codec than an all-encompassing API intended to serve as an adapter for all possible audio and video use cases.

Internally the STT engine, unless specifically designed for MediaStreamTrack input, would need to convert the audio stream to one of the file representations listed in this issue (in general, a WAV file).

It is not clear how either #66 or #69 solves the use cases in this issue without converting a file or buffer to a MediaStreamTrack instead of simply using the file or buffer as input.

@Pehrsons

> @Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on bring-your-own-codec than an all-encompassing API intended to serve as an adapter for all possible audio and video use cases.

WebCodecs has not settled yet, so I cannot say, but it's the only ongoing effort I'm aware of that would allow media data to be processed in non-realtime and passed around. There's OfflineAudioContext, but it doesn't really pipe into things. With WebCodecs it sounds like you'd get a ReadableStream of DecodedAudioPacket, which could be an input to SpeechRecognition, for instance.
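For illustration only, a hedged sketch of non-realtime decoding in the shape WebCodecs eventually took (the API was not settled when this comment was written): an AudioDecoder emits decoded AudioData frames as fast as chunks are fed in, with no playout clock. `encodedChunks`, an array of EncodedAudioChunk objects, is assumed to exist.

```js
const frames = [];
const decoder = new AudioDecoder({
  output: (audioData) => frames.push(audioData), // raw PCM, decoded eagerly
  error: (e) => console.error(e),
});
decoder.configure({ codec: 'opus', sampleRate: 48000, numberOfChannels: 1 });
for (const chunk of encodedChunks) decoder.decode(chunk); // assumed input
await decoder.flush(); // resolves once all frames are delivered
```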

> Internally the STT engine, unless specifically designed for MediaStreamTrack input, would need to convert the audio stream to one of the file representations listed in this issue (in general, a WAV file).

To analyze any audio data you have to decode it first, so that seems reasonable. When UAs ship with local STT engines, it wouldn't make sense to hand them an encoded file.
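A short sketch of that point (the file name is illustrative): decoding ends in the raw Float32Array samples a local engine would actually analyze, so an encoded file would have to pass through this step in any case.

```js
const ctx = new AudioContext();
const encoded = await (await fetch('speech.ogg')).arrayBuffer();
const decoded = await ctx.decodeAudioData(encoded); // decompress to PCM
const pcm = decoded.getChannelData(0); // Float32Array an STT engine consumes
```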

> It is not clear how either #66 or #69 solves the use cases in this issue without converting a file or buffer to a MediaStreamTrack instead of simply using the file or buffer as input.

Of course they'd solve it by decoding the file or buffer into a MediaStreamTrack. A fine solution, as different tools are good at different things.
