Capture unmixed individual attendees audio streams #2775
Comments
Hello, the SDK currently doesn't support receiving unmixed audio streams, but if you're interested in capturing the unmixed streams, we recently launched support for publishing to Kinesis Video Streams using Amazon Chime SDK media pipelines. You can find the developer guide for this feature here. Please feel free to ask any questions.
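For a rough idea of what creating such a pipeline might look like with the AWS SDK for JavaScript v3, here is a minimal sketch. The meeting and pool ARNs are placeholders, and the parameter values should be checked against the developer guide linked above; this is not the official sample.

```ts
import {
  ChimeSDKMediaPipelinesClient,
  CreateMediaStreamPipelineCommand,
} from '@aws-sdk/client-chime-sdk-media-pipelines';

const client = new ChimeSDKMediaPipelinesClient({ region: 'us-east-1' });

// Placeholder ARNs -- substitute your own meeting and KVS stream pool.
const meetingArn = 'arn:aws:chime::111122223333:meeting/<meeting-id>';
const poolArn =
  'arn:aws:chime:us-east-1:111122223333:media-pipeline-kinesis-video-stream-pool/<pool-name>';

const response = await client.send(
  new CreateMediaStreamPipelineCommand({
    Sources: [{ SourceType: 'ChimeSdkMeeting', SourceArn: meetingArn }],
    Sinks: [
      {
        SinkType: 'KinesisVideoStreamPool',
        SinkArn: poolArn,
        // 'IndividualAudio' streams each attendee's audio unmixed.
        MediaStreamType: 'IndividualAudio',
        ReservedStreamCapacity: 1,
      },
    ],
  })
);

console.log(response.MediaStreamPipeline?.MediaPipelineId);
```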
I have a question to add to this: how do I know where the audio is, and how do I get it? The docs point to using the
BTW, thanks for the links to the docs, very helpful 😄
Hello @euro-bYte, we publish events to your EventBus when we start/stop streaming an attendee's audio to KVS. The "Using Event Bridge notifications" section of the documentation points to the sample events for your reference. We currently don't have an API to concatenate the chunks from your KVS stream. There's also no provision from KVS to write directly to an S3 bucket, but they do provide sample libraries that you can refer to. Here's one for Python: https://github.com/aws-samples/amazon-kinesis-video-streams-consumer-library-for-python If you're only interested in processing the audio after the meeting ends, you can just listen for the Amazon Chime Media Stream Pipeline Kinesis Video Stream End event and use the GetMediaForFragmentList API. Note that you first need to get the list of fragments using the ListFragments API. All the parameters required for doing this are present in the event payload we publish.
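To make the shape of that flow concrete, here is a rough sketch of a handler for the stream-end event, using the AWS SDK for JavaScript v3. The event detail field names (`kinesisVideoStreamArn`, `startTime`, `endTime`) follow the ones used in the snippet later in this thread and should be verified against the sample events in the docs; the pagination details are assumptions, not the official sample.

```ts
import {
  KinesisVideoClient,
  GetDataEndpointCommand,
} from '@aws-sdk/client-kinesis-video';
import {
  KinesisVideoArchivedMediaClient,
  ListFragmentsCommand,
} from '@aws-sdk/client-kinesis-video-archived-media';

// Sketch of a handler for the Kinesis Video Stream End event.
// The detail field names are assumptions based on the snippet below.
export const handler = async (event: any) => {
  const { kinesisVideoStreamArn, startTime, endTime } = event.detail;

  // Each archived-media API needs its own data endpoint.
  const kvs = new KinesisVideoClient({});
  const { DataEndpoint } = await kvs.send(
    new GetDataEndpointCommand({
      StreamARN: kinesisVideoStreamArn,
      APIName: 'LIST_FRAGMENTS',
    })
  );

  const archive = new KinesisVideoArchivedMediaClient({ endpoint: DataEndpoint });

  // Collect every fragment in the attendee's time range, paginating as needed.
  const fragments = [];
  let nextToken: string | undefined;
  do {
    const page = await archive.send(
      new ListFragmentsCommand({
        StreamARN: kinesisVideoStreamArn,
        FragmentSelector: {
          FragmentSelectorType: 'PRODUCER_TIMESTAMP',
          TimestampRange: {
            StartTimestamp: new Date(startTime),
            EndTimestamp: new Date(endTime),
          },
        },
        NextToken: nextToken,
      })
    );
    fragments.push(...(page.Fragments ?? []));
    nextToken = page.NextToken;
  } while (nextToken);

  return fragments;
};
```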
@avinashmidathada Thank you for the quick answer!
@avinashmidathada I am hoping you can help me answer one last question; this one is a bit more technical. I implemented the APIs needed to get the fragments once the meeting ends, but the result isn't what I expect. Here is the basic flow:
```js
import fs from 'fs'

// Resolve the data endpoint for each archived-media API.
const listFragmentURL = await getKVSDataEndpoint(detailedInfo.kinesisVideoStreamArn, 'LIST_FRAGMENTS')
const archiveClient = new KVSArchiveMediaService(listFragmentURL)

// List the fragments recorded between the stream's start and end times.
const fragmentResponse = await archiveClient.getListOfFragments(
  detailedInfo.kinesisVideoStreamArn,
  detailedInfo.startTime,
  detailedInfo.endTime,
  nextToken
)
// Assuming getListOfFragments returns the raw ListFragments response.
const fragmentArr = fragmentResponse.Fragments.map((f) => f.FragmentNumber)

// Fetch the media for those fragments and stream it to disk.
const getMediaURL = await getKVSDataEndpoint(detailedInfo.kinesisVideoStreamArn, 'GET_MEDIA_FOR_FRAGMENT_LIST')
const mediaClient = new KVSArchiveMediaService(getMediaURL)
const payloadResponse = await mediaClient.getMediaForFragmentList(
  detailedInfo.kinesisVideoStreamArn,
  fragmentArr
)

// Payload is a stream of MKV-formatted media.
const filePath = './downloadedMedia'
const fileStream = fs.createWriteStream(filePath)
payloadResponse.Payload.pipe(fileStream)
```
What am I doing incorrectly or missing in this flow?
Hey @euro-bYte, can you confirm the following:
Hi @avinashmidathada, will there be any support for video with KVS media stream pipelines? Is there a reason why video isn't supported now? How would you recommend collecting combined video and audio on a per-user basis for archiving? It seems like we could use media capture to get individual video, but mixed audio, while the KVS integration does audio only. Would it make sense to use the two together and then use ffmpeg to combine the audio and video? A sketch of that muxing step is below.
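For what it's worth, the final muxing step the question describes could look something like this. The file names are hypothetical, and it assumes ffmpeg is installed and that the per-attendee video and audio files cover the same time range; this is a sketch of the idea, not an officially recommended pipeline.

```ts
import { execFile } from 'child_process';
import { promisify } from 'util';

const execFileAsync = promisify(execFile);

// Hypothetical inputs: per-attendee video from media capture and
// per-attendee audio pulled from the KVS stream.
async function muxAttendeeRecording(videoPath: string, audioPath: string, outPath: string) {
  await execFileAsync('ffmpeg', [
    '-i', videoPath, // attendee's video track
    '-i', audioPath, // attendee's unmixed audio track
    '-map', '0:v:0', // take video from the first input
    '-map', '1:a:0', // take audio from the second input
    '-c:v', 'copy',  // avoid re-encoding the video
    '-c:a', 'aac',   // encode audio to AAC for an MP4 container
    outPath,
  ]);
}

await muxAttendeeRecording('attendee.mp4', 'attendee.mkv', 'attendee-combined.mp4');
```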
@avinashmidathada The problem was the way I was processing the
In my implementation I ordered it by the
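Since the comment above is truncated, here is a guess at the shape of the fix: ListFragments returns fragments in no particular order, so sorting by producer timestamp (rather than, say, by fragment number) before calling GetMediaForFragmentList keeps the media in playback order. A sketch, assuming the AWS SDK v3 types:

```ts
import { Fragment } from '@aws-sdk/client-kinesis-video-archived-media';

// ListFragments returns fragments in no particular order, so sort by
// producer timestamp before requesting the media in playback order.
function toOrderedFragmentNumbers(fragments: Fragment[]): string[] {
  return [...fragments]
    .sort(
      (a, b) =>
        (a.ProducerTimestamp?.getTime() ?? 0) -
        (b.ProducerTimestamp?.getTime() ?? 0)
    )
    .map((f) => f.FragmentNumber!)
    .filter(Boolean);
}
```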
What are you trying to do?
We are looking for a way to additionally capture the audio streams before they are mixed into the main meeting audio channel, so that each speaker's audio can be recorded in isolation from other attendees, similar to how Chime does this with live transcription. Is this officially supported by the SDK? If not, is there a place in the source code where we can hook in to grab the individual attendees' audio streams manually?
How can the documentation be improved to help your use case?
What documentation have you looked at so far?