What it does
Invite your friends, family, or strangers to a karaoke room and upload a song. A machine learning model separates the vocals from the background music, and you choose which track to sing along to: practice against only the artist's voice to check that you're in tune, or do a solo over just the instrumental.
When I go to an event I want to at least be able to see the performers, not ant-sized dots in the distance. MusicBucket lets you access an event's camera streams and watch different angles of the performance from your phone.
How I built it
It's a completely serverless app.
AWS Machine Learning Models Utilized:
Quantiphi - Source Separation
Quantiphi - Barcode/QR-code Scanner
How I called the SageMaker models
AWS Cognito credentials are used to generate presigned URLs (see the first sketch after these steps).
The audio file is uploaded to an S3 bucket using the presigned URL.
The upload triggers a function on the S3 bucket.
The trigger function invokes the SageMaker endpoint and stores the S3 URLs/metadata in RDS. It then grabs the WebSocket connection IDs from DynamoDB and returns JSON to the user over the API Gateway WebSocket connection (see the second sketch).
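The post doesn't show how the presigned URLs are generated, so here is a minimal sketch with the AWS SDK for Node.js. Whether this runs client-side with Cognito Identity credentials or inside a Lambda, the SDK call is the same; it's shown here as a Lambda handler, and the bucket name and key scheme are placeholders, not the project's actual values:

```javascript
// Sketch: return a presigned PUT URL so the client can upload audio
// directly to S3. Bucket name and key scheme are assumptions.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const key = `uploads/${event.queryStringParameters.fileName}`; // hypothetical key scheme
  const uploadUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'musicbucket-audio-uploads', // assumed bucket name
    Key: key,
    Expires: 300, // URL stays valid for 5 minutes
  });
  return { statusCode: 200, body: JSON.stringify({ uploadUrl, key }) };
};
```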
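And a sketch of the trigger function. The endpoint, table, and key names are assumptions, and the real request/response contract of the source-separation endpoint will differ; the actual code is in the repo linked below:

```javascript
// Sketch: S3-triggered Lambda that calls the SageMaker endpoint and pushes
// the result to the uploader over the API Gateway WebSocket connection.
const AWS = require('aws-sdk');
const sagemaker = new AWS.SageMakerRuntime();
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const { bucket, object } = event.Records[0].s3;
  const userId = object.key.split('/')[1]; // hypothetical key scheme: uploads/<userId>/<file>

  // 1. Invoke the source-separation endpoint with the uploaded file's location.
  const response = await sagemaker.invokeEndpoint({
    EndpointName: 'quantiphi-source-separation', // assumed endpoint name
    ContentType: 'application/json',
    Body: JSON.stringify({ s3Uri: `s3://${bucket.name}/${object.key}` }),
  }).promise();
  const tracks = JSON.parse(response.Body.toString()); // vocal + instrumental track URLs

  // 2. Storing the track URLs/metadata in RDS (PostgreSQL) is omitted here.

  // 3. Look up the uploader's WebSocket connection ID (assumed table/key names).
  const { Item } = await dynamo.get({
    TableName: 'Connections',
    Key: { userId },
  }).promise();

  // 4. Return the result as JSON over the API Gateway WebSocket connection.
  const api = new AWS.ApiGatewayManagementApi({
    endpoint: process.env.WEBSOCKET_ENDPOINT, // <api-id>.execute-api.<region>.amazonaws.com/<stage>
  });
  await api.postToConnection({
    ConnectionId: Item.connectionId,
    Data: JSON.stringify(tracks),
  }).promise();
};
```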
Music Room
Built with Twilio Video (WebRTC), integrated with a Lambda function that generates access tokens. To invite friends, a user makes a request through API Gateway (REST API) to a Lambda function that generates the tokens. Those tokens are sent to the invited users as JSON over their respective API Gateway (WebSockets) connections, which raises a notification they can click to join the music room.
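The token-minting Lambda is small; here is a minimal sketch using the twilio Node.js helper library (the credential environment variables and request shape are assumptions):

```javascript
// Sketch: Lambda behind API Gateway (REST) that mints a Twilio Video access token.
const twilio = require('twilio');
const { AccessToken } = twilio.jwt;
const { VideoGrant } = AccessToken;

exports.handler = async (event) => {
  const { identity, room } = JSON.parse(event.body); // assumed request shape

  const token = new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY_SID,
    process.env.TWILIO_API_KEY_SECRET,
    { identity }
  );
  token.addGrant(new VideoGrant({ room })); // restrict the token to this room

  return { statusCode: 200, body: JSON.stringify({ token: token.toJwt() }) };
};
```

The returned JWT is what gets pushed to each invitee over their WebSocket connection.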
The chat is built with API Gateway WebSockets, with connections managed in DynamoDB.
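Connection management in that pattern usually comes down to two routes; a minimal sketch, assuming a Connections table keyed by connection ID:

```javascript
// Sketch: $connect / $disconnect handler that keeps the chat's live
// WebSocket connection IDs in DynamoDB. Table name is an assumption.
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();
const TABLE = 'Connections';

exports.handler = async (event) => {
  const { routeKey, connectionId } = event.requestContext;

  if (routeKey === '$connect') {
    await dynamo.put({ TableName: TABLE, Item: { connectionId } }).promise();
  } else if (routeKey === '$disconnect') {
    await dynamo.delete({ TableName: TABLE, Key: { connectionId } }).promise();
  }
  return { statusCode: 200, body: 'OK' };
};
```

Chat messages then fan out with the same postToConnection call shown in the trigger-function sketch above.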
Lyrics are created by feeding the vocal track returned by the source separation model into AWS Transcribe.
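Kicking that off is a single API call; a minimal sketch, assuming placeholder bucket and job names and a WAV vocal track:

```javascript
// Sketch: start an AWS Transcribe job on the separated vocal track.
const AWS = require('aws-sdk');
const transcribe = new AWS.TranscribeService();

async function transcribeVocals(vocalsS3Uri, songId) {
  await transcribe.startTranscriptionJob({
    TranscriptionJobName: `lyrics-${songId}`, // must be unique per job
    LanguageCode: 'en-US',
    MediaFormat: 'wav',                       // format of the vocal track (assumed)
    Media: { MediaFileUri: vocalsS3Uri },     // e.g. s3://<bucket>/vocals/<song>.wav
    OutputBucketName: 'musicbucket-lyrics',   // assumed output bucket
  }).promise();
}
```

The transcript JSON includes per-word start and end times, which is what makes timed, karaoke-style lyrics possible.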
Inspiration
I hate going to events and not being able to see anything.
What's next for MusicBucket
There are a couple of things I want to do:
- Build video rooms that support more than 4 participants.
- Add the option to create streams so users outside the room can watch and chat.
https://vimeo.com/408100417
https://github.com/evans-github/MusicBucket
- AWS services: API Gateway (WebSockets), Lambda, SageMaker, Transcribe, S3, Cognito, DynamoDB, RDS
- Java, Node.js, PostgreSQL