
improve wording for ns
Signed-off-by: zhichao-aws <[email protected]>
zhichao-aws committed Jul 16, 2024
1 parent 8db6f92 commit 215341a
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions _search-plugins/neural-sparse-search.md
@@ -15,8 +15,8 @@ Introduced 2.11
 
 When selecting a model, choose one of the following options:
 
-- Use a sparse encoding model at both ingestion time and search time (high performance, relatively high latency).
-- Use a sparse encoding model at ingestion time and a tokenizer at search time for relatively low performance and low latency. The tokenizer doesn't conduct model inference, so you can deploy and invoke a tokenizer using the ML Commons Model API for a more consistent experience.
+- Use a sparse encoding model at both ingestion time and search time for high search relevance with relatively high latency.
+- Use a sparse encoding model at ingestion time and a tokenizer at search time for low search latency with relatively low search relevance. The tokenizer doesn't conduct model inference, so you can deploy and invoke a tokenizer using the ML Commons Model API for a more consistent experience.
 
 **PREREQUISITE**<br>
 Before using neural sparse search, make sure to set up a [pretrained sparse embedding model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/pretrained-models/#sparse-encoding-models) or your own sparse embedding model. For more information, see [Choosing a model]({{site.url}}{{site.baseurl}}/ml-commons-plugin/integrating-ml-models/#choosing-a-model).
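For context, both options in the changed page are exercised through the same `neural_sparse` query; only the `model_id` differs (a sparse encoding model for the high-relevance option, a deployed tokenizer for the low-latency option). A minimal sketch of such a search request follows; the index name, field name, and model ID are hypothetical placeholders, not values from this commit:

```json
GET /my-sparse-index/_search
{
  "query": {
    "neural_sparse": {
      "passage_embedding": {
        "query_text": "what is neural sparse search",
        "model_id": "<deployed sparse encoding model or tokenizer ID>"
      }
    }
  }
}
```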
