Support SpeechRecognition input from audio files and Float32Array and ArrayBuffer #70
Comments
There are already several means of getting from audio files and buffers to audio MediaStreamTracks. Most of your example use cases are solvable by #66 and #69. The only thing this proposal would solve compared to those proposals is that it could process audio faster than real-time, i.e., faster than it'd take to play them out. Personally I think that particular problem is better solved by integrating with something like WebCodecs if/when it becomes mature and available.
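The real-time constraint mentioned above can be made concrete with a small sketch. This is purely illustrative (no Web API here): it assumes audio arriving over a MediaStreamTrack-style path in fixed 10 ms chunks tied to a playout clock, versus a buffer that is available all at once.

```javascript
// Sketch of the constraint under discussion: feeding audio through a
// MediaStreamTrack ties consumption to the playout clock, so N seconds of
// audio take roughly N seconds of wall-clock time to process. Direct buffer
// access has no such clock. The 10 ms chunk cadence is an assumption made
// for illustration, not a spec-mandated value.
const SAMPLE_RATE = 48000;
const seconds = 2;
const audio = new Float32Array(SAMPLE_RATE * seconds);

// Real-time path: one chunk delivered every 10 ms.
const chunkSize = SAMPLE_RATE / 100; // 480 samples per 10 ms chunk
const realtimeMs = (audio.length / chunkSize) * 10; // 2000 ms wall-clock

// Buffered path: the whole array is available immediately.
let processed = 0;
for (let i = 0; i < audio.length; i += chunkSize) {
  processed += chunkSize; // a recognizer could consume this as fast as it likes
}

console.log(realtimeMs, processed === audio.length); // 2000 true
```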
@Pehrsons How exactly would WebCodecs solve the problem of processing audio (or video) faster than "real-time" from a static file? WebCodecs appears to be based more on bring-your-own-codec than on an all-encompassing API intended to serve as an adapter for all possible audio and video use cases. It is not clear how either #66 or #69 solves the use cases in this issue without first converting the file or buffer to a MediaStreamTrack.
WebCodecs has not settled yet so I cannot say, but it's the only ongoing effort I'm aware of that would allow processing media data in non-realtime and be passed around. There's OfflineAudioContext, but it doesn't really pipe into things. With WebCodecs it sounds like you'd get a ReadableStream of DecodedAudioPacket, which could be an input to SpeechRecognition, for instance.
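The pipeline shape described above can be sketched without WebCodecs itself: a ReadableStream of decoded PCM chunks, pulled by a consumer as fast as the producer can supply them, with no playout clock involved. The `recognize` function below is a hypothetical stand-in for a SpeechRecognition integration, not a real API.

```javascript
// Hypothetical sketch of a stream-of-decoded-audio input. In the WebCodecs
// scenario the chunks would come from a decoder; here they are plain
// Float32Arrays so the consumption pattern itself is visible.
function decodedAudioStream(chunks) {
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i < chunks.length) controller.enqueue(chunks[i++]);
      else controller.close();
    },
  });
}

// Stand-in for a recognizer: drains the stream as fast as chunks arrive
// and reports how many PCM samples it saw. No real-time pacing anywhere.
async function recognize(stream) {
  let samples = 0;
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    samples += value.length; // value is a Float32Array of PCM samples
  }
  return samples;
}

// Ten chunks of 4800 samples each: one second of 48 kHz mono audio,
// consumed in far less than one second of wall-clock time.
const chunks = Array.from({ length: 10 }, () => new Float32Array(4800));
recognize(decodedAudioStream(chunks)).then((n) => console.log(n)); // 48000
```

The point of the shape is that the consumer's pull rate, not a media clock, sets the processing speed.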
To analyze any audio data you have to decode it first so that seems reasonable. When UAs ship with STT engines that are local, it wouldn't make sense to hand them an encoded file.
Of course they'd solve it by decoding the file or buffer into a MediaStreamTrack. A fine solution, as different tools are good at different things.
Support `.wav`, `.webm`, `.ogg`, `.mp3` files (file types supported by the implementation decoders) and `Float32Array` and `ArrayBuffer` input to `SpeechRecognition`.

Use cases for static audio file and `ArrayBuffer` (non-"real-time") input to `SpeechRecognition` include, but are not limited to:

- `SpeechRecognition` input to achieve expected text output

`AudioWorkletNode` can be used to stream `Float32Array` input.

Related #66
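Since decoders and STT engines generally consume containered audio rather than bare sample arrays, one conversion a file-style input path implies is wrapping raw PCM in a container. Below is a minimal, purely illustrative sketch that packs a mono `Float32Array` into a 16-bit WAV `ArrayBuffer`; it is not part of any proposed SpeechRecognition API.

```javascript
// Minimal sketch: wrap raw Float32Array PCM samples in a 16-bit mono WAV
// container (a 44-byte RIFF header followed by little-endian samples).
function floatTo16BitWav(samples, sampleRate) {
  const dataSize = samples.length * 2;
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);
  const writeAscii = (offset, str) => {
    for (let i = 0; i < str.length; i++) view.setUint8(offset + i, str.charCodeAt(i));
  };
  writeAscii(0, "RIFF");
  view.setUint32(4, 36 + dataSize, true);   // RIFF chunk size
  writeAscii(8, "WAVE");
  writeAscii(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeAscii(36, "data");
  view.setUint32(40, dataSize, true);
  for (let i = 0; i < samples.length; i++) {
    // Clamp to [-1, 1] and scale to signed 16-bit.
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return buffer;
}
```

The resulting `ArrayBuffer` is the sort of value both the file-based and buffer-based input cases in this issue would hand to the engine.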