Commit

Apply comments
olpipi committed Jul 29, 2024
1 parent ce57885 commit bf4f9f3
Showing 5 changed files with 23 additions and 19 deletions.
8 changes: 8 additions & 0 deletions samples/cpp/chat_sample/README.md
@@ -34,3 +34,11 @@ UnicodeEncodeError: 'charmap' codec can't encode character '\u25aa' in position
If you encounter the error shown in the example above while the sample is printing output to the Windows console, it is likely because the default Windows encoding does not support certain Unicode characters. To resolve this:
1. Enable Unicode characters for the Windows cmd console: open `Region` settings from `Control panel`, then `Administrative`->`Change system locale`->`Beta: Use Unicode UTF-8 for worldwide language support`->`OK`, and reboot.
2. Enable UTF-8 mode by setting the environment variable `PYTHONIOENCODING="utf8"`.

#### Missing chat template

If you encounter an exception indicating a missing "chat template" when launching `ov::genai::LLMPipeline` in chat mode, it likely means the model was not tuned for chat functionality. To resolve this, manually add a chat template to the `tokenizer_config.json` file of your model.
The following template can be used as a default, but it may not work properly with every model:
```
"chat_template": "{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|im_start|>user\n' + message['content'] + '<|im_end|>\n<|im_start|>assistant\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|im_end|>\n'}}{% endif %}{% endfor %}",
```
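
For illustration, here is a minimal sketch of the chat-mode loop that exercises the chat template. It assumes the converted model directory is passed as the first command-line argument; the actual `chat_sample.cpp` in this folder is the authoritative version and may differ in details:

```
#include <functional>
#include <iostream>
#include <string>

#include "openvino/genai/llm_pipeline.hpp"

int main(int argc, char* argv[]) {
    // Assumption: the model directory (containing tokenizer_config.json) is argv[1].
    ov::genai::LLMPipeline pipe(argv[1], "CPU");

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 100;

    // Print tokens as they are generated; returning false keeps generation going.
    std::function<bool(std::string)> streamer = [](std::string word) {
        std::cout << word << std::flush;
        return false;
    };

    pipe.start_chat();
    std::string prompt;
    std::cout << "question:\n";
    while (std::getline(std::cin, prompt) && prompt != "Stop!") {
        // In chat mode the pipeline applies the chat template to the accumulated
        // history, which is where a missing template is reported.
        pipe.generate(prompt, config, streamer);
        std::cout << "\n----------\nquestion:\n";
    }
    pipe.finish_chat();
    return 0;
}
```

If the exception no longer appears after adding the `chat_template` entry shown above, the template was picked up correctly.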
10 changes: 10 additions & 0 deletions samples/python/chat_sample/README.md
@@ -22,3 +22,13 @@ To enable Unicode characters for Windows cmd open `Region` settings from `Contro
Discrete GPUs (dGPUs) usually provide better performance compared to CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. For example, the model meta-llama/Llama-2-13b-chat-hf can benefit from being run on a dGPU. Modify the source code to change the device for inference to the GPU.

See https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md#supported-models for the list of supported models.


## Troubleshooting
### Missing chat template

If you encounter an exception indicating a missing "chat template" when launching `ov::genai::LLMPipeline` in chat mode, it likely means the model was not tuned for chat functionality. To resolve this, manually add a chat template to the `tokenizer_config.json` file of your model.
The following template can be used as a default, but it may not work properly with every model:
```
"chat_template": "{% for message in messages %}{% if (message['role'] == 'user') %}{{'<|im_start|>user\n' + message['content'] + '<|im_end|>\n<|im_start|>assistant\n'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|im_end|>\n'}}{% endif %}{% endfor %}",
```
6 changes: 0 additions & 6 deletions src/cpp/include/openvino/genai/tokenizer.hpp
@@ -83,12 +83,6 @@ class OPENVINO_GENAI_EXPORTS Tokenizer {
bool add_generation_prompt,
const std::string& chat_template="") const;

/**
* @brief Returns true if chat template exists and is ready to be applied; false otherwise
* @return is chat template ready
*/
bool is_chat_template_ready() const;

// information about <bos>, <eos> tokens should be public,
// they are used at least in StreamerBase descendants
int64_t get_bos_token_id() const;
4 changes: 0 additions & 4 deletions src/cpp/src/llm_pipeline.cpp
@@ -266,10 +266,6 @@ class StatefulLLMPipeline final : public LLMPipelineImplBase {
m_history = {};
m_templated_chat_history = "";
}
OPENVINO_ASSERT(m_tokenizer.is_chat_template_ready(),
"There is no existing chat template for actual model. LLMPipeline cannot work in chat mode."
" Please add chat template to tokenizer_config.json or use another model.");

if (system_message.empty())
return;

14 changes: 5 additions & 9 deletions src/cpp/src/tokenizer.cpp
@@ -368,6 +368,11 @@ class Tokenizer::TokenizerImpl {
bool add_generation_prompt,
const std::string& chat_template) const {
auto chat_tpl = chat_template.empty() ? m_chat_template : chat_template;
OPENVINO_ASSERT(!chat_tpl.empty(),
"Chat template wasn't found. This may indicate that the model wasn't trained for chat scenario."
" Please add 'chat_template' to tokenizer_config.json to use the model in chat scenario."
" For more information see the section Troubleshooting in README.md");

// Jinja2Cpp does not support slicing, e.g. [1:].
// In templates, slicing is typically used in the header to find the system prompt.
// If the header contains that typical expression we update the template and
@@ -433,10 +438,6 @@
"For exmaple: <start_of_turn>user{user_prompt}<end_of_turn><start_of_turn>model");
}
}

bool is_chat_template_ready() {
return !m_chat_template.empty();
}
};

Tokenizer::Tokenizer(const std::string& tokenizer_path, const ov::AnyMap& plugin_config) {
@@ -502,11 +503,6 @@ std::string Tokenizer::apply_chat_template(ChatHistory history,
return m_pimpl->apply_chat_template(history, add_generation_prompt, chat_template);
}

bool Tokenizer::is_chat_template_ready() const {
return m_pimpl->is_chat_template_ready();
};


Tokenizer::~Tokenizer() = default;
} // namespace genai
} // namespace ov
