
Merge branch 'master' into fix-config-import-from-optimum
eaidova authored Jul 31, 2024
2 parents ed81447 + 3f55103 commit d0744d3
Showing 15 changed files with 469 additions and 41 deletions.
3 changes: 1 addition & 2 deletions llm_bench/python/requirements.txt
@@ -7,11 +7,10 @@ openvino_genai
auto-gptq>=0.5.1 # for gptq
pillow
torch
torchvision<0.19.0
transformers>=4.40.0
diffusers>=0.22.0
#optimum is in dependency list of optimum-intel
git+https://github.com/huggingface/optimum-intel.git@a863f4dd946545bfa5caec43e470bd6ffccf589e#egg=optimum-intel
git+https://github.com/eaidova/optimum-intel.git@ea/remove_bf16_rotary_emb_patching#egg=optimum-intel
git+https://github.com/openvinotoolkit/nncf.git@develop#egg=nncf
packaging
psutil
8 changes: 8 additions & 0 deletions samples/cpp/chat_sample/README.md
@@ -34,3 +34,11 @@ UnicodeEncodeError: 'charmap' codec can't encode character '\u25aa' in position
If you encounter the error described in the example while the sample is printing output to the Windows console, it is likely because the default Windows encoding does not support certain Unicode characters. To resolve this:
1. Enable Unicode characters for Windows cmd - open `Region` settings from `Control panel`. `Administrative`->`Change system locale`->`Beta: Use Unicode UTF-8 for worldwide language support`->`OK`. Reboot.
2. Enable UTF-8 mode by setting the environment variable `PYTHONIOENCODING="utf8"`.

#### Missing chat template

If you encounter an exception indicating a missing "chat template" when launching the `ov::genai::LLMPipeline` in chat mode, it likely means the model was not tuned for chat functionality. To work around this, manually add the chat template to the `tokenizer_config.json` file of your model.
The following template can be used as a default, but it may not work properly with every model:
```
"chat_template": "{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|im_start|>user\n' + message['content'] + '<|im_end|>\n<|im_start|>assistant\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|im_end|>\n'}}{% endif %}{% endfor %}",
```
10 changes: 10 additions & 0 deletions samples/python/chat_sample/README.md
@@ -22,3 +22,13 @@ To enable Unicode characters for Windows cmd open `Region` settings from `Control
Discrete GPUs (dGPUs) usually provide better performance compared to CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.
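For illustration, a minimal sketch of that change (the model directory and prompt below are placeholders, not part of the sample):
```
import openvino_genai

# Placeholder model directory; use the directory of your exported model.
model_dir = "TinyLlama-1.1B-Chat-v1.0"

# Select a GPU device instead of the default "CPU" used in the sample.
pipe = openvino_genai.LLMPipeline(model_dir, "GPU")
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```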

See https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md#supported-models for the list of supported models.


## Troubleshooting
### Missing chat template

If you encounter an exception indicating a missing "chat template" when launching the `ov::genai::LLMPipeline` in chat mode, it likely means the model was not tuned for chat functionality. To work around this, manually add the chat template to the `tokenizer_config.json` file of your model.
The following template can be used as a default, but it may not work properly with every model:
```
"chat_template": "{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|im_start|>user\n' + message['content'] + '<|im_end|>\n<|im_start|>assistant\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|im_end|>\n'}}{% endif %}{% endfor %}",
```
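As a minimal sketch, the default template above can be added to `tokenizer_config.json` with a few lines of Python (the model directory below is a placeholder):
```
import json
from pathlib import Path

# Placeholder model directory; point this at your converted model.
config_path = Path("TinyLlama-1.1B-Chat-v1.0") / "tokenizer_config.json"

config = json.loads(config_path.read_text(encoding="utf-8"))
# Add the default template only if the model does not already define one.
config.setdefault(
    "chat_template",
    "{% for message in messages %}{% if (message['role'] == 'user') %}"
    "{{'<|im_start|>user\n' + message['content'] + '<|im_end|>\n<|im_start|>assistant\n'}}"
    "{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|im_end|>\n'}}"
    "{% endif %}{% endfor %}",
)
config_path.write_text(json.dumps(config, indent=2), encoding="utf-8")
```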
8 changes: 8 additions & 0 deletions src/cpp/include/openvino/genai/scheduler_config.hpp
@@ -30,5 +30,13 @@ struct SchedulerConfig {

// max number of scheduled sequences (you can think of it as "max batch size")
std::size_t max_num_seqs = 256;

// Enable caching of KV-blocks.
// When turned on, all previously computed KV caches are kept in memory for future use.
// KV caches can be overwritten if the KV-cache limit is reached, but blocks are not released.
// This results in higher RAM usage; the maximum RAM usage is determined by the cache_size or num_kv_blocks parameters.
// When turned off, only the KV cache required for the current batch computation is kept in memory,
// and when a sequence has finished generation its cache is released.
bool enable_prefix_caching = false;
};
}
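Assuming the Python bindings expose `SchedulerConfig` and `ContinuousBatchingPipeline` with fields mirroring this struct (not shown in this diff), a usage sketch might look like:
```
import openvino_genai

# Sketch only: class and field names are assumed to mirror the C++ struct above.
scheduler_config = openvino_genai.SchedulerConfig()
scheduler_config.max_num_seqs = 256
scheduler_config.enable_prefix_caching = True  # keep computed KV blocks for reuse across requests

# Placeholder model directory and device.
pipe = openvino_genai.ContinuousBatchingPipeline("TinyLlama-1.1B-Chat-v1.0", scheduler_config, "CPU")
```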
