talk-llama : reject runs without required arguments (#2153)
* Extended talk-llama example to reject runs without required arguments.

Print warning and exit if models are not specified on the command line.

* Update examples/talk-llama/talk-llama.cpp

* Update examples/talk-llama/talk-llama.cpp

---------

Co-authored-by: Georgi Gerganov <[email protected]>
petterreinholdtsen and ggerganov authored May 14, 2024
1 parent f56b830 commit 9d5771a
Showing 1 changed file with 8 additions and 0 deletions: examples/talk-llama/talk-llama.cpp
@@ -288,6 +288,10 @@ int main(int argc, char ** argv) {
     cparams.use_gpu = params.use_gpu;

     struct whisper_context * ctx_wsp = whisper_init_from_file_with_params(params.model_wsp.c_str(), cparams);
+    if (!ctx_wsp) {
+        fprintf(stderr, "No whisper.cpp model specified. Please provide using -mw <modelfile>\n");
+        return 1;
+    }

     // llama init

@@ -301,6 +305,10 @@ int main(int argc, char ** argv) {
     }

     struct llama_model * model_llama = llama_load_model_from_file(params.model_llama.c_str(), lmparams);
+    if (!model_llama) {
+        fprintf(stderr, "No llama.cpp model specified. Please provide using -ml <modelfile>\n");
+        return 1;
+    }

     llama_context_params lcparams = llama_context_default_params();
