I have a few questions regarding audio handling.
From my limited testing on two MacBooks, it seems received audio frames are basically packets of 10 ms worth of samples at a 48 kHz rate with 2 channels. Is the 10 ms interval standard in libwebrtc, or should a client-side app be prepared to handle any interval, any number of channels, and any rate, basically resampling each packet to the output device's sample rate and mixing the channels as well?
If it's always fixed at 10 ms intervals, is that because of how libwebrtc itself handles audio data, and is there any documentation on it? Curious.
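To make the receive-side question concrete, this is the kind of defensive handling I'm asking whether subscribers actually need. A minimal sketch, assuming the `AudioFrame` field layout I see in the livekit Rust crate (the struct is redefined locally here just for illustration):

```rust
use std::borrow::Cow;

// Assumed field layout, mirroring the livekit Rust crate's AudioFrame.
pub struct AudioFrame<'a> {
    pub data: Cow<'a, [i16]>,
    pub sample_rate: u32,
    pub num_channels: u32,
    pub samples_per_channel: u32,
}

/// Downmix an arbitrary frame to mono f32 and report its duration,
/// derived from the frame metadata rather than an assumed 10 ms.
pub fn downmix_to_mono(frame: &AudioFrame) -> (Vec<f32>, f64) {
    let ch = frame.num_channels.max(1) as usize;
    let mono: Vec<f32> = frame
        .data
        .chunks_exact(ch) // samples are interleaved per channel
        .map(|s| {
            let sum: f32 = s.iter().map(|&v| v as f32 / i16::MAX as f32).sum();
            sum / ch as f32
        })
        .collect();
    let duration_s = frame.samples_per_channel as f64 / frame.sample_rate as f64;
    (mono, duration_s)
}
```

If frames are guaranteed to be 10 ms at 48 kHz stereo, all of this (plus a resampler to the output device rate) would be unnecessary, which is why I'm asking.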
What about voice capture? Does LiveKit's server side normalize each AudioFrame sent to it via `native_audio_source.capture_frame(&audio_frame)` to 2 channels and a 48 kHz rate before forwarding it to subscribed clients?
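For the capture side, this is roughly the experiment I mean: deliberately pushing mono 16 kHz instead of stereo 48 kHz to see whether subscribers still receive normalized audio. The module paths and the async, fallible `capture_frame` signature are my assumptions from reading the livekit crate, so treat this as a sketch, not a reference:

```rust
use std::borrow::Cow;
use livekit::webrtc::audio_frame::AudioFrame;
use livekit::webrtc::audio_source::native::NativeAudioSource;

// Hypothetical test: push 10 ms of mono 16 kHz silence. If the server
// normalizes, subscribers should still get 48 kHz / 2-channel frames.
async fn push_test_frame(source: &NativeAudioSource) {
    let sample_rate = 16_000u32;                 // deliberately not 48 kHz
    let num_channels = 1u32;                     // deliberately mono
    let samples_per_channel = sample_rate / 100; // 10 ms worth of samples
    let frame = AudioFrame {
        data: Cow::from(vec![0i16; (samples_per_channel * num_channels) as usize]),
        sample_rate,
        num_channels,
        samples_per_channel,
    };
    // Assumption: capture_frame is async and returns a Result in
    // current versions of the crate.
    if let Err(e) = source.capture_frame(&frame).await {
        eprintln!("capture_frame failed: {e}");
    }
}
```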