This repository has been archived by the owner on Jan 16, 2024. It is now read-only.
In SDK v1.6, I am seeing a certain level of asynchronous behavior between the two capability agents (AudioPlayer and AIP), which is causing problems for some user experiences, e.g., glowing LEDs.
Here is the use case:
1. The user asks to play a TuneIn URL: "Alexa, play TuneIn."
2. The AudioPlayer Play directive is received as expected.
3. As TuneIn is being streamed, the user barges in and asks a question.
4. At this point, the AudioInputProcessor Capability Agent informs our observer through the DialogUXStateAggregator that the state has changed. Here, we call our routine to glow the LEDs in the listening state.
5. However, the AudioPlayer capability agent does not call MediaPlayer::Pause immediately, as it waits for focus to arrive so it can move to the background.
6. Sometimes GStreamer takes time to pause the stream, and the LEDs glow in the LISTENING phase before the stream has actually been paused.
We are trying to minimize this experience, but there is no way for us to glow the listening LEDs only once the stream has actually paused. This knowledge resides in the AIP Capability Agent.
Is there a way to enhance the SDK to have a central controller for all capability agents that can relay state when things actually happen, i.e., in this example, when the music has actually paused? In effect, we should get a callback, or the SDK should open the stream only after step #6 has happened.
Thanks for bringing this up. The central controller you're referring to is supposed to be the FocusManager. At first glance, it looks like we may need to have the AIP acquire foreground focus a bit sooner. I'll investigate this a bit further and get back to you.
I've added this to our internal backlog to investigate and come up with a solution to have the AudioInputProcessor acquire focus sooner (immediately in the AudioInputProcessor::recognize() or AudioInputProcessor::executeRecognize() calls).
I think that should be able to solve your issue (in case you want to try it out locally). We will update this thread once we come up with a solution.
We believe this is now addressed in our 1.10 release, so I will close this out. Please create a new ticket if this, or a similar issue, is still found, so we can track it separately. Thanks.
Please let me know.

Sincerely,
@adpandit