Expose functions from AudioStreamPlaybackMicrophone to access the microphone AudioDriver::input_buffer independently of the rest of the Audio system #100508

Open
goatchurchprime wants to merge 12 commits into master

Conversation

@goatchurchprime (Author)

This solves all the issues raised in the proposal: godotengine/godot-proposals#11347

Changes made:

  1. A new AudioDriver::input_start_count counter prevents multiple calls to AudioDriver::input_start() from different AudioStreamPlaybackMicrophone instances, which would otherwise result in multiple conflicting consumers popping data from the same buffer.

  2. Change the return type of AudioDriver::get_input_buffer() to Vector<int32_t>& so that the entire buffer does not have to be copied just to access it.

  3. Add GDREGISTER_CLASS(AudioStreamPlaybackMicrophone), which was missing from register_server_types.cpp.

  4. Expose the functions start(), stop() and is_playing() from AudioStreamPlaybackMicrophone to GDScript.

  5. Add the new function PackedVector2Array AudioStreamPlaybackMicrophone::get_microphone_buffer(int p_frames). This does the same as PackedVector2Array AudioEffectCapture::get_buffer(frames: int), but fetches the frames directly from AudioDriver::input_buffer without going through an AudioStream, AudioBus and AudioEffect, all of which operate on the duty cycle of the audio system's output frequency. (A rough sketch of the corresponding registration code follows this list.)
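
Roughly, the registration side of changes 3-5 looks like the sketch below. It is an illustration following the usual ClassDB conventions, not the exact diff, and the parameter names and default values are assumptions.

    // Sketch only: approximate registration for changes 3-5 (not the exact diff).

    // servers/register_server_types.cpp -- the playback class was never registered,
    // so scripts could not hold a typed reference to it:
    GDREGISTER_CLASS(AudioStreamPlaybackMicrophone);

    // In the playback class's _bind_methods() (hypothetical parameter names/defaults):
    void AudioStreamPlaybackMicrophone::_bind_methods() {
        ClassDB::bind_method(D_METHOD("start", "from_pos"), &AudioStreamPlaybackMicrophone::start, DEFVAL(0.0));
        ClassDB::bind_method(D_METHOD("stop"), &AudioStreamPlaybackMicrophone::stop);
        ClassDB::bind_method(D_METHOD("is_playing"), &AudioStreamPlaybackMicrophone::is_playing);
        ClassDB::bind_method(D_METHOD("get_microphone_buffer", "frames"), &AudioStreamPlaybackMicrophone::get_microphone_buffer);
    }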

Need help with:

I would like to expose

int AudioStreamPlaybackMicrophone::mix(AudioFrame *p_buffer, float p_rate_scale, int p_frames)

to GDExtension, but it has an AudioFrame* pointer in its parameter list.

This ought to be possible, because a similar function already exists, int AudioStreamPlaybackResampled::_mix_resampled(dst_buffer: AudioFrame*, frame_count: int), but it is created by the special virtual function template GDVIRTUAL2R(int, _mix_resampled, GDExtensionPtr<AudioFrame>, int).
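
For reference, the existing virtual is wired up roughly as below. This is a paraphrase of the engine pattern, not the exact code, and it only covers engine-to-extension calls; ClassDB::bind_method goes the other way and can only marshal Variant-compatible argument types, which AudioFrame* is not.

    // Paraphrased sketch of the existing pattern (exact engine code may differ).
    // The GDVIRTUAL macro generates the GDExtension glue that carries the raw
    // pointer across the boundary as GDExtensionPtr<AudioFrame>.
    // (Declared inside the class; shown here out of context.)
    GDVIRTUAL2R(int, _mix_resampled, GDExtensionPtr<AudioFrame>, int);

    int AudioStreamPlaybackResampled::_mix_internal(AudioFrame *p_buffer, int p_frames) {
        int ret = 0;
        // Engine -> extension call; no Variant conversion is involved.
        GDVIRTUAL_CALL(_mix_resampled, p_buffer, p_frames, ret);
        return ret;
    }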

@goatchurchprime requested a review from a team as a code owner December 17, 2024 13:01
@RedMser (Contributor) commented Dec 17, 2024

Since you modified the scripting API, make sure to update the class reference as well and fill it out: https://docs.godotengine.org/en/stable/contributing/documentation/updating_the_class_reference.html#updating-class-reference-when-working-on-the-engine

@adamscott changed the title to Expose functions from AudioStreamPlaybackMicrophone to access the microphone AudioDriver::input_buffer independently of the rest of the Audio system Dec 17, 2024
@adamscott added this to the 4.x milestone Dec 17, 2024
@fire (Member) left a comment

As we discussed, the design is that, instead of the AudioServer consuming the mix callback somewhere else, I assume a node's _process(delta) will process the mic driver.

I am not convinced by the arguments to bypass the audio server to access the microphone driver directly.

@goatchurchprime goatchurchprime requested a review from a team as a code owner December 17, 2024 15:01
@goatchurchprime (Author)

TL;DR

There is a long and buggy route for getting microphone audio data out of the engine. It would be shorter and more reliable to extract it directly from the AudioDriver's input_buffer.

What happens now

All microphone data goes into the ring buffer AudioDriver::input_buffer, advancing input_position as it is added.

The AudioStreamPlaybackMicrophone object has its own local pointer input_ofs into this same buffer, and its function _mix_internal() locks the audio device thread before returning a slice of this buffer, using read access only.
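
In outline, that read path looks like the sketch below. The names come from the description above; the sample conversion and the exact locking calls are assumptions, not the engine's literal code.

    // Paraphrased sketch of the _mix_internal() read path (not the engine's exact code).
    int AudioStreamPlaybackMicrophone::_mix_internal(AudioFrame *p_buffer, int p_frames) {
        AudioDriver *ad = AudioDriver::get_singleton();
        ad->lock(); // the capture thread appends to input_buffer and advances input_position

        const Vector<int32_t> &buf = ad->get_input_buffer(); // reference return, avoiding a copy

        for (int i = 0; i < p_frames; i++) {
            // (The real code also checks how many unread samples are actually available.)
            if ((int)input_ofs < buf.size()) {
                // Assumption: interleaved L/R int32 samples with 16-bit data in the
                // high half, hence the >> 16 and / 32768 conversion.
                float l = (buf[input_ofs++] >> 16) / 32768.f;
                if ((int)input_ofs >= buf.size()) { input_ofs = 0; }
                float r = (buf[input_ofs++] >> 16) / 32768.f;
                if ((int)input_ofs >= buf.size()) { input_ofs = 0; }
                p_buffer[i] = AudioFrame(l, r);
            } else {
                p_buffer[i] = AudioFrame(0.0f, 0.0f); // underrun: pad with silence
            }
        }

        ad->unlock();
        return p_frames;
    }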

At the moment the only way to extract microphone data from the engine is to wait for the AudioServer to request chunks of data from an AudioBus at its own update rate. This AudioBus in turn requests the data from an AudioStreamPlaybackMicrophone, filtered through an AudioEffectCapture object that copies any audio passing through it into its own internal buffer. Then, on a process loop, you extract the data from this buffer by calling AudioEffectCapture.get_buffer().

There are two problems with this.

Firstly, the AudioServer is highly sensitive to data not arriving at the rate it requires, so if the microphone buffer isn't filled fast enough the stream is terminated. There is a pull request to fix the problem on the Android platform, where the microphone keeps switching off after two minutes, by padding the missing frames with zeros. Not surprisingly, this degrades the quality of the microphone audio.

Secondly, we don't generally want any microphone audio data in the AudioServer, because it causes feedback and is too laggy to work as a realtime amplifier. So if it is too difficult to debug this buffer properly (which it is), it isn't worth doing. The evidence is that the standard operating procedure requires us to create a special bus for the AudioStreamMicrophone to output to, and to set that bus to Mute.

The solution

Expose AudioStreamPlaybackMicrophone::_mix_internal() (suitably wrapped) as an external function, so that any process holding a local AudioStreamPlaybackMicrophone can request slices of data from the input_buffer as they become available. This makes no structural change to the engine, but adds a new access route to the microphone data, avoiding the unnecessary requirement that it operate in perfect synchrony with the audio streams.

goatchurchprime added a commit to goatchurchprime/two-voip-godot-4 that referenced this pull request Dec 18, 2024
@adamscott self-requested a review December 18, 2024 01:59
@goatchurchprime (Author)

This fix is working well. It ran on my cheap Android phone for 4 hours without any issues at all.

Previously it would last about 3 minutes on this phone before the AudioStreamMicrophone was switched off, often permanently.

Although there is some skepticism, I would like to push this PR to a place where it can be critically discussed and reviewed, because it actually works, and we don't even have an outline for fixing the microphone any other way.

The primary function added to AudioStreamPlaybackMicrophone for accessing the audio data is:

PackedVector2Array get_microphone_buffer(int p_frames)

I would prefer to have the additional function:

bool mix_microphone(AudioFrame* buffer, int p_frames);

which would copy the values directly into a caller-provided buffer rather than allocating and returning a temporary array by value.
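
Something along these lines is what I have in mind; the signature is the one proposed above, while the body and its use of is_playing() are assumptions about how it could be wired up, not code from this PR.

    // Sketch of the proposed variant (not in this PR); the body is an assumption.
    // It fills a caller-owned buffer instead of returning a PackedVector2Array.
    bool AudioStreamPlaybackMicrophone::mix_microphone(AudioFrame *p_buffer, int p_frames) {
        if (!is_playing()) {
            return false; // capture not running
        }
        // Reuse the same ring-buffer read path as get_microphone_buffer(), but write
        // straight into memory supplied by the caller (e.g. a GDExtension's own
        // buffer), avoiding a temporary array allocation on every call.
        return _mix_internal(p_buffer, p_frames) == p_frames;
    }

The unresolved part, discussed below, is how to bind a method with an AudioFrame* parameter so that a GDExtension can call it.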

The main use of this interface is the twovoip addon for doing Voice over IP.

There are numerous virtual functions like this one:

void _process(src_buffer: const void*, dst_buffer: AudioFrame*, frame_count: int) virtual

where the engine calls out to the GDExtension with an array pointer, but I have found no examples where a GDExtension calls into the engine with an array pointer.

My attempts to implement this function using:

bool AudioStreamPlaybackMicrophone::mix_microphone(GDExtensionConstPtr<AudioFrame> p_buffer, int p_frames);
ClassDB::bind_method(D_METHOD("mix_microphone", "p_buffer", "frames"), &AudioStreamPlaybackMicrophone::mix_microphone);

do not compile (presumably because ClassDB::bind_method can only marshal Variant-compatible argument types, which these pointer types are not), so I have left the bind_method call commented out until an answer can be found.
