-
Did you find a solution? I am also looking for the same thing.
-
We have a solution for this; I will try to post something tomorrow.
-
So we do not currently have an audio-recording-specific component, but we do have a webcam component: https://github.com/tgberkeley/reflex-webcam. There are instructions on how to use it in the README, and for your use case I would recommend starting from it. This definitely isn't the optimal solution, though, and you could instead wrap your own audio recording component such as https://www.npmjs.com/package/react-audio-voice-recorder, which might make things easier. You can follow the docs on how to wrap a component here: https://reflex.dev/docs/custom-components/overview/

Let us know if you have any questions about either of these approaches.
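For reference, the basic wrapping pattern from those docs looks roughly like this. This is only a minimal sketch: the library and tag names come from the npm package linked above, and everything else (props, events) would still need to be filled in.

```python
import reflex as rx


class AudioRecorder(rx.Component):
    """Minimal wrapper around the react-audio-voice-recorder npm package."""

    # npm package that Reflex should install and import from.
    library = "react-audio-voice-recorder"

    # Name of the React component exported by that package.
    tag = "AudioRecorder"


# Convenience constructor, so pages can call audio_recorder(...) directly.
audio_recorder = AudioRecorder.create
```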
-
NOTE: This is not yet an answer, but a compilation of relevant sources of information that seem related to getting to an answer. Hopefully, a full answer can be posted as a reply shortly after! I also have basically zero JavaScript experience, so I may be overlooking something very obvious!

Progress:

Wrapping the React component:

Related Reflex documentation:

- Example of a React webcam component wrapped in Reflex, which has some similarities to audio recording: https://github.com/tgberkeley/reflex-webcam
The example usage of react-audio-voice-recorder (from its README) is:

```jsx
import React from "react";
import ReactDOM from "react-dom/client";
import { AudioRecorder } from 'react-audio-voice-recorder';

// Append a playable <audio> element to the page once recording finishes.
const addAudioElement = (blob) => {
  const url = URL.createObjectURL(blob);
  const audio = document.createElement("audio");
  audio.src = url;
  audio.controls = true;
  document.body.appendChild(audio);
};

ReactDOM.createRoot(document.getElementById("root")).render(
  <React.StrictMode>
    <AudioRecorder
      onRecordingComplete={addAudioElement}
      audioTrackConstraints={{
        noiseSuppression: true,
        echoCancellation: true,
      }}
      downloadOnSavePress={true}
      downloadFileExtension="webm"
    />
  </React.StrictMode>
);
```

Code so far:

```python
from typing import Dict, Any

import reflex as rx
from reflex.vars import Var


class AudioRecorder(rx.Component):
    """Wrapper for the react-audio-voice-recorder component."""

    # The React library to wrap.
    library = "react-audio-voice-recorder"

    # The React component tag.
    tag = "AudioRecorder"

    # If the tag is the default export from the module, you can set is_default = True.
    # This is normally used when components don't have curly braces around them when importing.
    is_default = False

    # The props of the React component.
    # Note: when Reflex compiles the component to JavaScript,
    # `snake_case` property names are automatically formatted as `camelCase`.
    # The prop names may be defined in `camelCase` as well.

    # Show waveform while recording.
    show_visualizer: Var[bool] = True

    # Download audio client-side on save press.
    download_on_save_press: Var[bool] = False

    def get_event_triggers(self) -> Dict[str, Any]:
        return {
            **super().get_event_triggers(),
            "on_recording_complete": lambda e0: [e0],
        }


audio_recorder = AudioRecorder.create
```

And then to use it:

```python
class AudioState(rx.State):
    audio_recorded: str = ""

    def update_recorded_audio(self, audio_data: str):
        self.audio_recorded = str(audio_data)


def audio_component() -> rx.Component:
    return rx.fragment(
        audio_recorder.audio_recorder(
            id="audio_recorder",
            on_recording_complete=AudioState.update_recorded_audio,
            download_on_save_press=True,
        ),
        rx.cond(
            AudioState.audio_recorded,
            rx.text(AudioState.audio_recorded),
            rx.text("Click to record audio."),
        ),
    )
```

Problem:

This almost works... The component shows up, you can click it to start recording, and on saving, the file is downloaded client-side (so audio was recorded).

Thoughts:

There are a few things I am unsure about.
I think the problem is my lack of understanding of how JavaScript works, so I don't understand which parts are significant when trying to wrap it with Reflex. I feel like the answer should be pretty simple in the end, and that all of the parts are there; I'm just not sure how to piece them together.

Audio processing:

Once I can get the audio to the server (see the decoding sketch at the end of this comment), I think the rest is reasonably simple... Something like extending the usage to:

```python
class AudioState(rx.State):
    audio_recorded: str = ""
    transcribed_audio: str = ""

    def update_recorded_audio(self, audio_data: str):
        self.audio_recorded = str(audio_data)
        # Chain into the transcription handler once the recording has arrived.
        return AudioState.transcribe_audio

    async def transcribe_audio(self):
        transcription = await external_transcription_service(self.audio_recorded)
        self.transcribed_audio = transcription

    async def stream_transcribe_audio(self, audio_chunk):
        transcription = await external_transcription_service(audio_chunk)
        self.transcribed_audio += transcription


def audio_component() -> rx.Component:
    return rx.fragment(
        audio_recorder.audio_recorder(
            id="audio_recorder",
            on_recording_complete=AudioState.update_recorded_audio,
            on_audio_chunk=AudioState.stream_transcribe_audio,  # << Ideally would actually use something like this
        ),
        rx.cond(
            AudioState.audio_recorded,
            rx.text(f'Transcription: {AudioState.transcribed_audio}'),
            rx.text("Click to record audio."),
        ),
    )
```

As noted, ideally I would like to be able to stream the audio and transcribe it as it arrives. But I'm not sure whether it is easily achievable to add a new event like `on_audio_chunk`.

Closing thoughts:

I hope this provides a good starting point for discussion of what is missing, what is wrong, what needs to be changed, and what the logic behind the changes is.
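A hedged addendum on the "get the audio to the server" step: as far as I understand, Reflex sends event arguments to the Python backend as JSON over a websocket, so a raw Blob object is unlikely to survive the trip; the usual workaround is to have the JavaScript side convert the recording into a base64 data URL string before firing the event. Assuming such a string is what reaches the handler, decoding it server-side is straightforward. The helper below is purely illustrative and not part of any package:

```python
import base64


def data_url_to_bytes(data_url: str) -> bytes:
    """Decode a 'data:audio/webm;base64,...' data URL into raw audio bytes.

    Hypothetical helper: assumes the wrapped recorder delivers the recording
    as a base64 data URL string rather than a Blob.
    """
    _header, _, encoded = data_url.partition(",")
    return base64.b64decode(encoded)
```

With something like this in place, `update_recorded_audio` could store (or write to disk) the raw bytes instead of a stringified Blob, and `external_transcription_service` would receive data it can actually work with.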
-
I was curious about your code… I hope this helps.
-
Another very nice version, implemented by @masenf, is here: https://pypi.org/project/reflex-audio-capture/
This implementation can also stream audio, select which microphone to use, etc.
-
Does anyone know how to record audio and then call OpenAI's Whisper recognition? I haven't found any example.
Thanks.
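Not an official example, but here is a rough sketch of the Whisper half, assuming the recorded audio has already reached the backend as raw bytes (e.g. via one of the approaches above). It uses the OpenAI Python client (v1+) with the `whisper-1` model; the helper name and the file path are illustrative.

```python
from pathlib import Path

from openai import OpenAI  # assumes `openai` v1+ is installed and OPENAI_API_KEY is set

client = OpenAI()


def transcribe_recording(audio_bytes: bytes) -> str:
    """Write the recorded bytes to a .webm file and send it to Whisper for transcription."""
    path = Path("recording.webm")  # illustrative; a proper temporary file would be better
    path.write_bytes(audio_bytes)
    with path.open("rb") as audio_file:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    return result.text
```

This could then be called from a Reflex event handler like `update_recorded_audio` in the earlier comment, once the event payload has been decoded from a base64 data URL into bytes.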