
Documentation regarding audio data handling #489

Open
nrawrx3 opened this issue Nov 18, 2024 · 0 comments

nrawrx3 commented Nov 18, 2024

I have a few questions regarding audio handling.

  1. From my limited testing on two MacBooks, it seems received audio frames are basically packets of 10 ms worth of samples at a 48 kHz rate with 2 channels. Is the 10 ms interval standard in libwebrtc, or should a client-side app handle any interval, any number of channels, and any rate, that is, resample each packet to the output device's sample rate and mix channels as well?

  2. If it's always fixed at 10 ms intervals, is this because of libwebrtc's internal handling of audio data, and is there any documentation on it? Curious.

  3. What about voice capture? Does LiveKit's server side normalize each AudioFrame sent to it via native_audio_source.capture_frame(&audio_frame) to 2 channels and a 48 kHz rate before sending it back to subscribed clients?
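For context on question 1, here is a minimal sketch of the frame-size arithmetic behind the observation above. The `AudioFrame` struct here is a simplified stand-in I made up for illustration, not LiveKit's actual type; only the 10 ms / 48 kHz / 2-channel figures come from the observed behavior.

```rust
// Simplified stand-in for an audio frame of interleaved 16-bit PCM.
// (Hypothetical struct for illustration; not LiveKit's AudioFrame.)
struct AudioFrame {
    sample_rate: u32,
    num_channels: u32,
    samples_per_channel: u32,
    data: Vec<i16>, // interleaved: len == samples_per_channel * num_channels
}

// Build an empty frame holding 10 ms of audio at the given rate/channels.
fn frame_for_10ms(sample_rate: u32, num_channels: u32) -> AudioFrame {
    let samples_per_channel = sample_rate / 100; // 10 ms = 1/100 of a second
    AudioFrame {
        sample_rate,
        num_channels,
        samples_per_channel,
        data: vec![0; (samples_per_channel * num_channels) as usize],
    }
}

fn main() {
    // 48 000 Hz * 0.010 s = 480 samples per channel; 960 interleaved for stereo.
    let frame = frame_for_10ms(48_000, 2);
    println!(
        "rate={} ch={} per_channel={} total={}",
        frame.sample_rate, frame.num_channels,
        frame.samples_per_channel, frame.data.len()
    );
}
```

So each observed packet would carry 480 samples per channel (960 interleaved i16 values for stereo), which matches what I'm seeing.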
