-
Hello, I noticed the command line tool is roughly half the speed of running the equivalent Python code in the following comparison. Command line: Python: model = whisper.load_model("large"). If the command line is just slower, then that's good to know. But if the two ways of running it are actually doing something different, can someone explain what the difference is? Thanks!
Replies: 4 comments 6 replies
-
Did they generate the same output? Is the Python run perhaps using a smaller model? My CPU and GPU get maxed out via the command line. I don't think there should be such a speed difference.
-
The command line has different default settings on some hyperparameters: the command line tool defaults to beam search, while model.transcribe() will do greedy decoding by default. It'll become equivalent if you do:

model.transcribe("file.mp3", beam_size=5, best_of=5)

Greedy decoding is faster but is more likely to get stuck in repetition loops, and is in general slightly less accurate.
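To make the defaults mismatch concrete, here is a toy sketch (the function and flag names mirror the thread, but the internals are stand-ins, not Whisper's actual implementation) of how a CLI front end can inject beam-search defaults that a direct library call would not use:

```python
import argparse

def transcribe(audio, beam_size=None, best_of=None):
    """Stand-in for model.transcribe(): beam_size=None means greedy decoding."""
    mode = "beam search" if beam_size else "greedy"
    return f"{audio}: {mode} (beam_size={beam_size}, best_of={best_of})"

# The CLI layer fills in its own defaults before calling the library:
parser = argparse.ArgumentParser()
parser.add_argument("--beam_size", type=int, default=5)
parser.add_argument("--best_of", type=int, default=5)
args = parser.parse_args([])  # simulating `whisper file.mp3` with no extra flags

print(transcribe("file.mp3"))  # direct library call: greedy decoding
print(transcribe("file.mp3", beam_size=args.beam_size, best_of=args.best_of))  # CLI path
```

The point is that calling the library directly and calling it through the CLI wrapper can silently run different decoding strategies, which is enough to explain a large speed gap.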
-
So, my command line executions are just slower. Keeping temperature at 0.0 and beam_size at 1, for a 17-second .wav clip I get an execution time of 18 seconds on the command line and 8 seconds within the Spyder Python IDE. I've tried every other option parameter and that's what I get: a factor-of-2 difference in time, with the command line being slower. I don't understand; it should be the reverse.
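One possible source of the gap, not confirmed anywhere in this thread: each command-line invocation has to load the model checkpoint from scratch, while in a Spyder session the model stays resident between calls, so only the transcription itself gets timed. Timing the two phases separately makes that visible. The sketch below uses cheap stand-ins for the real whisper.load_model() and model.transcribe() calls:

```python
import time

def time_phase(fn, *args):
    """Return (result, seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Stand-ins for whisper.load_model("large") and model.transcribe("clip.wav"):
fake_load = lambda: sum(range(1_000_000))      # expensive one-time setup
fake_transcribe = lambda: "transcribed text"   # the per-clip work

_, load_s = time_phase(fake_load)
text, run_s = time_phase(fake_transcribe)

# A CLI-style run pays load_s on every invocation; an IDE session pays it once.
print(f"load: {load_s:.3f}s  transcribe: {run_s:.3f}s  total: {load_s + run_s:.3f}s")
```

If the 18-second CLI time includes model loading and the 8-second Spyder time does not, that alone could account for most of the difference.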
-
Does it get progressively slower the more runs you do, or is it just
slow every time, even after you haven't been using it for a while?
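One way to answer that question empirically (a generic sketch, not something from the thread): time the same call repeatedly and compare the first run against the later ones. A slow first run with fast repeats points to one-time setup cost; uniformly slow runs point to per-call overhead.

```python
import time

def timed_runs(fn, n=5):
    """Run fn() n times and return a list of wall-clock durations in seconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        durations.append(time.perf_counter() - start)
    return durations

# Example with a cheap stand-in workload; substitute the real transcription call.
times = timed_runs(lambda: sum(i * i for i in range(100_000)))
print("first run: %.4fs, later runs (mean): %.4fs"
      % (times[0], sum(times[1:]) / len(times[1:])))
```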
--
Jeffrey Duncan
|