From 6f779cef0c78efd3dc0f45f9dd30eee3339a65b4 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Thu, 29 Feb 2024 17:00:19 -0600 Subject: [PATCH 01/23] Add Contributing workloads page (#6430) * Add Contributing workloads page Signed-off-by: Archer * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update _benchmark/user-guide/contributing-workloads.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update contributing-workloads.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Update contributing-workloads.md Signed-off-by: Melissa Vagi Signed-off-by: Melissa Vagi * Reconcile doc review and additional technical feedback. 
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Heather Halter Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Heather Halter Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update contributing-workloads.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Add additonal technical feedback. 
Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update contributing-workloads.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Archer Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Melissa Vagi Co-authored-by: Melissa Vagi Co-authored-by: Heather Halter Co-authored-by: Nathan Bower --- .../user-guide/contributing-workloads.md | 57 +++++++++++++++++++ _benchmark/user-guide/distributed-load.md | 2 +- _benchmark/user-guide/telemetry.md | 2 +- 3 files changed, 59 insertions(+), 2 deletions(-) create mode 100644 _benchmark/user-guide/contributing-workloads.md diff --git a/_benchmark/user-guide/contributing-workloads.md b/_benchmark/user-guide/contributing-workloads.md new file mode 100644 index 0000000000..e60f60eaed --- /dev/null +++ b/_benchmark/user-guide/contributing-workloads.md @@ -0,0 +1,57 @@ +--- +layout: default +title: Sharing custom workloads +nav_order: 11 +parent: User guide +--- + +# Sharing custom workloads + +You can share a custom workload with other OpenSearch users by uploading it to the [workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/) on GitHub. + +Make sure that any data included in the workload's dataset does not contain proprietary data or personally identifiable information (PII). + +To share a custom workload, follow these steps. + +## Create a README.md + +Provide a detailed `README.MD` file that includes the following: + +- The purpose of the workload. 
When creating a description for the workload, consider its specific use and how that use case differs from others in the [workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/). +- An example document from the dataset that helps users understand the data's structure. +- The workload parameters that can be used to customize the workload. +- A list of default test procedures included in the workload as well as other test procedures that the workload can run. +- An output sample produced by the workload after a test is run. +- A copy of the open-source license that gives the user and OpenSearch Benchmark permission to use the dataset. + +For an example workload README file, go to the `http_logs` [README](https://github.com/opensearch-project/opensearch-benchmark-workloads/blob/main/http_logs/README.md). + +## Verify the workload's structure + +The workload must include the following files: + +- `workload.json` +- `index.json` +- `files.txt` +- `test_procedures/default.json` +- `operations/default.json` + +Both `default.json` file names can be customized to have a descriptive name. The workload can include an optional `workload.py` file to add more dynamic functionality. For more information about a file's contents, go to [Anatomy of a workload]({{site.url}}{{site.baseurl}}/benchmark/user-guide/understanding-workloads/anatomy-of-a-workload/). + +## Testing the workload + +All workloads contributed to OpenSearch Benchmark must fulfill the following testing requirements: + +- All tests run to explore and produce an example from the workload must target an OpenSearch cluster. +- The workload must pass all integration tests. Follow these steps to ensure that the workload passes the integration tests: + 1. Add the workload to your forked copy of the [workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/).
Make sure that you've forked both the `opensearch-benchmark-workloads` repository and the [OpenSearch Benchmark](https://github.com/opensearch-project/opensearch-benchmark) repository. + 2. In your forked OpenSearch Benchmark repository, update the `benchmark-os-it.ini` and `benchmark-in-memory.ini` files in the `/osbenchmark/it/resources` directory to point to the forked workloads repository containing your workload. + 3. After you've modified the `.ini` files, commit your changes to a branch for testing. + 4. Run your integration tests using GitHub Actions by selecting the branch to which you committed your changes. Verify that the tests have run as expected. + 5. If your integration tests run as expected, go to your forked workloads repository and merge your workload changes into branches `1` and `2`. This allows your workload to appear in both major versions of OpenSearch Benchmark. + +## Create a PR + +After testing the workload, create a pull request (PR) from your fork to the `opensearch-project` [workloads repository](https://github.com/opensearch-project/opensearch-benchmark-workloads/). Add a sample output and summary result to the PR description. The OpenSearch Benchmark maintainers will review the PR. + +Once the PR is approved, you must share the data corpora of your dataset. The OpenSearch Benchmark team can then add the dataset to a shared S3 bucket. If your data corpora are stored in an S3 bucket, you can use [AWS DataSync](https://docs.aws.amazon.com/datasync/latest/userguide/create-s3-location.html) to share the data corpora. Otherwise, you must inform the maintainers of where the data corpora reside.
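The required file layout described under "Verify the workload's structure" can be encoded as a quick pre-submission check. The following sketch is hypothetical (the helper function is ours and is not part of OpenSearch Benchmark), but it captures the checklist from the documentation:

```python
# Hypothetical pre-submission check; not part of OpenSearch Benchmark itself.
REQUIRED_FILES = {
    "workload.json",
    "index.json",
    "files.txt",
    "test_procedures/default.json",  # may be renamed to something descriptive
    "operations/default.json",       # may be renamed to something descriptive
}

def missing_workload_files(present):
    """Return the required files absent from a candidate workload's file list."""
    return sorted(REQUIRED_FILES - set(present))

candidate = ["workload.json", "index.json", "files.txt",
             "operations/default.json"]
print(missing_workload_files(candidate))
# ['test_procedures/default.json']
```

If either `default.json` file has been renamed, the set of required paths would need to be adjusted to match.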
diff --git a/_benchmark/user-guide/distributed-load.md b/_benchmark/user-guide/distributed-load.md index ec46091974..60fc98500f 100644 --- a/_benchmark/user-guide/distributed-load.md +++ b/_benchmark/user-guide/distributed-load.md @@ -1,7 +1,7 @@ --- layout: default title: Running distributed loads -nav_order: 10 +nav_order: 15 parent: User guide --- diff --git a/_benchmark/user-guide/telemetry.md b/_benchmark/user-guide/telemetry.md index 7cc7f6b730..d4c40c790a 100644 --- a/_benchmark/user-guide/telemetry.md +++ b/_benchmark/user-guide/telemetry.md @@ -1,7 +1,7 @@ --- layout: default title: Enabling telemetry devices -nav_order: 15 +nav_order: 30 parent: User guide --- From ee2b67f3546069b9a9b349efcd9c7c983a2f7d66 Mon Sep 17 00:00:00 2001 From: zhichao-aws Date: Fri, 1 Mar 2024 22:28:21 +0800 Subject: [PATCH 02/23] update register sparse model (#6555) Signed-off-by: zhichao-aws --- _ml-commons-plugin/api/model-apis/register-model.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/_ml-commons-plugin/api/model-apis/register-model.md b/_ml-commons-plugin/api/model-apis/register-model.md index c10292dba8..880cbd68e5 100644 --- a/_ml-commons-plugin/api/model-apis/register-model.md +++ b/_ml-commons-plugin/api/model-apis/register-model.md @@ -98,13 +98,9 @@ Field | Data type | Required/Optional | Description POST /_plugins/_ml/models/_register { "name": "amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v1", - "version": "1.0.0", + "version": "1.0.1", "model_group_id": "Z1eQf4oB5Vm0Tdw8EIP2", - "description": "This is a neural sparse encoding model: It transfers text into sparse vector, and then extract nonzero index and value to entry and weights. 
It serves only in ingestion and customer should use tokenizer model in query.", - "model_format": "TORCH_SCRIPT", - "function_name": "SPARSE_ENCODING", - "model_content_hash_value": "9a41adb6c13cf49a7e3eff91aef62ed5035487a6eca99c996156d25be2800a9a", - "url": "https://artifacts.opensearch.org/models/ml-models/amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v1/1.0.0/torch_script/opensearch-neural-sparse-encoding-doc-v1-1.0.0-torch_script.zip" + "model_format": "TORCH_SCRIPT" } ``` {% include copy-curl.html %} From 5f486abf838e297e0dd896dc565d0d57df6c5617 Mon Sep 17 00:00:00 2001 From: zhichao-aws Date: Fri, 1 Mar 2024 22:30:03 +0800 Subject: [PATCH 03/23] Deprecate max_token_score in neural sparse search (#6554) * deprecated max_token_score Signed-off-by: zhichao-aws * Update _query-dsl/specialized/neural-sparse.md Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: zhichao-aws Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _query-dsl/specialized/neural-sparse.md | 8 +++----- _search-plugins/neural-sparse-search.md | 3 +-- 2 files changed, 4 insertions(+), 7 deletions(-) diff --git a/_query-dsl/specialized/neural-sparse.md b/_query-dsl/specialized/neural-sparse.md index c91c491dcf..70fcfd892c 100644 --- a/_query-dsl/specialized/neural-sparse.md +++ b/_query-dsl/specialized/neural-sparse.md @@ -20,8 +20,7 @@ Include the following request fields in the `neural_sparse` query: "neural_sparse": { "": { "query_text": "", - "model_id": "", - "max_token_score": "" + "model_id": "" } } ``` @@ -32,7 +31,7 @@ Field | Data type | Required/Optional | Description :--- | :--- | :--- `query_text` | String | Required | The query text from which to generate vector embeddings. 
`model_id` | String | Required | The ID of the sparse encoding model or tokenizer model that will be used to generate vector embeddings from the query text. The model must be deployed in OpenSearch before it can be used in sparse neural search. For more information, see [Using custom models within OpenSearch]({{site.url}}{{site.baseurl}}/ml-commons-plugin/using-ml-models/) and [Neural sparse search]({{site.url}}{{site.baseurl}}/search-plugins/neural-sparse-search/). -`max_token_score` | Float | Optional | The theoretical upper bound of the score for all tokens in the vocabulary (required for performance optimization). For OpenSearch-provided [pretrained sparse embedding models]({{site.url}}{{site.baseurl}}/ml-commons-plugin/pretrained-models/#sparse-encoding-models), we recommend setting `max_token_score` to 2 for `amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v1` and to 3.5 for `amazon/neural-sparse/opensearch-neural-sparse-encoding-v1`. +`max_token_score` | Float | Optional | (Deprecated) The theoretical upper bound of the score for all tokens in the vocabulary (required for performance optimization). For OpenSearch-provided [pretrained sparse embedding models]({{site.url}}{{site.baseurl}}/ml-commons-plugin/pretrained-models/#sparse-encoding-models), we recommend setting `max_token_score` to 2 for `amazon/neural-sparse/opensearch-neural-sparse-encoding-doc-v1` and to 3.5 for `amazon/neural-sparse/opensearch-neural-sparse-encoding-v1`. This field has been deprecated as of OpenSearch 2.12. 
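With `max_token_score` deprecated, a client assembling a `neural_sparse` query needs only the target field, the query text, and the model ID. A minimal sketch of building the request body follows; the helper function is illustrative and not part of any OpenSearch client library, and the model ID is the placeholder value used in the documentation's example request:

```python
import json

def neural_sparse_query(field, query_text, model_id):
    # Only the required parameters are included; the deprecated
    # max_token_score field is intentionally omitted (OpenSearch 2.12+).
    return {
        "query": {
            "neural_sparse": {
                field: {
                    "query_text": query_text,
                    "model_id": model_id,
                }
            }
        }
    }

body = neural_sparse_query("passage_embedding", "Hi world",
                           "aP2Q8ooBpBj3wT4HVS8a")
print(json.dumps(body, indent=2))
```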
#### Example request @@ -43,8 +42,7 @@ GET my-nlp-index/_search "neural_sparse": { "passage_embedding": { "query_text": "Hi world", - "model_id": "aP2Q8ooBpBj3wT4HVS8a", - "max_token_score": 2 + "model_id": "aP2Q8ooBpBj3wT4HVS8a" } } } diff --git a/_search-plugins/neural-sparse-search.md b/_search-plugins/neural-sparse-search.md index c46da172a7..31ae43991e 100644 --- a/_search-plugins/neural-sparse-search.md +++ b/_search-plugins/neural-sparse-search.md @@ -154,8 +154,7 @@ GET my-nlp-index/_search "neural_sparse": { "passage_embedding": { "query_text": "Hi world", - "model_id": "aP2Q8ooBpBj3wT4HVS8a", - "max_token_score": 2 + "model_id": "aP2Q8ooBpBj3wT4HVS8a" } } } From e476ee8db964b03decf3091729bd9b4d62beb61a Mon Sep 17 00:00:00 2001 From: Darshit Chanpura <35282393+DarshitChanpura@users.noreply.github.com> Date: Fri, 1 Mar 2024 11:44:29 -0500 Subject: [PATCH 04/23] Updates SAML demo setup documentation (#6532) * Updates SAML demo setup documentation Signed-off-by: Darshit Chanpura * Updates some language around steps Signed-off-by: Darshit Chanpura * Deleted old saml zip Signed-off-by: Darshit Chanpura * Fixes vale errors Signed-off-by: Darshit Chanpura * Fixes style check Signed-off-by: Darshit Chanpura * Address PR feedback Signed-off-by: Darshit Chanpura * Addresses more comments Signed-off-by: Darshit Chanpura * Adds onboarding as part of vocab Signed-off-by: Darshit Chanpura * Changes the sentence phrase Signed-off-by: Darshit Chanpura * Addresses more feedback Signed-off-by: Darshit Chanpura --------- Signed-off-by: Darshit Chanpura --- .../styles/Vocab/OpenSearch/Words/accept.txt | 1 + _security/authentication-backends/saml.md | 60 +++++++++--------- assets/examples/saml-example-custom.zip | Bin 5337 -> 0 bytes 3 files changed, 30 insertions(+), 31 deletions(-) delete mode 100644 assets/examples/saml-example-custom.zip diff --git a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt index 
d86d176979..0a14e17e7d 100644 --- a/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt +++ b/.github/vale/styles/Vocab/OpenSearch/Words/accept.txt @@ -77,6 +77,7 @@ Levenshtein [Mm]ultiword [Nn]amespace [Oo]versamples? +[Oo]nboarding pebibyte [Pp]erformant [Pp]luggable diff --git a/_security/authentication-backends/saml.md b/_security/authentication-backends/saml.md index e4f94b4383..ee6e2184dd 100755 --- a/_security/authentication-backends/saml.md +++ b/_security/authentication-backends/saml.md @@ -19,37 +19,35 @@ This profile is meant for use with web browsers. It is not a general-purpose way We provide a fully functional example that can help you understand how to use SAML with OpenSearch Dashboards. -1. Download [the example zip file]({{site.url}}{{site.baseurl}}/assets/examples/saml-example-custom.zip) to a preferred location in your directory and unzip it. -1. At the command line, specify the location of the files in your directory and run `docker-compose up`. -1. Review the files: - - * `customize-docker-compose.yml`: Defines two OpenSearch nodes, an OpenSearch Dashboards server, and a SAML server. - * `customize-opensearch_dashboards.yml`: Includes SAML settings for the default `opensearch_dashboards.yml` file. - * `customize-config.yml`: Configures SAML for authentication. - - You can remove "customize" from the file names if you plan to modify and keep these files for production. - {: .tip } - -1. In the `docker-compose.yml` file, specify your OpenSearch version number in the `image` field for nodes 1 and 2 and the OpenSearch Dashboards server. 
For example, if you are running OpenSearch version {{site.opensearch_major_minor_version}}, the `image` fields will resemble the following examples: - - ```yml - opensearch-saml-node1: - image: opensearchproject/opensearch:{{site.opensearch_major_minor_version}} - ``` - ```yml - opensearch-saml-node2: - image: opensearchproject/opensearch:{{site.opensearch_major_minor_version}} - ``` - ```yml - opensearch-saml-dashboards: - image: opensearchproject/opensearch-dashboards:{{site.opensearch_major_minor_version}} - ``` - -1. Access OpenSearch Dashboards at [http://localhost:5601](http://localhost:5601){:target='\_blank'}. Note that OpenSearch Dashboards immediately redirects you to the SAML login page. - -1. Log in to OpenSearch Dashboards. The default username is `admin` and the default password is set in your `customize-docker-compose.yml` file in the `OPENSEARCH_INITIAL_ADMIN_PASSWORD=` setting. - -1. After logging in, note that your user in the upper-right is `SAMLAdmin`, as defined in `/var/www/simplesamlphp/config/authsources.php` of the SAML server. +1. Visit the [saml-demo branch](https://github.com/opensearch-project/demos/tree/saml-demo) of the demos repository and download it to a folder of your choice. If you're not familiar with how to use GitHub, see the [OpenSearch onboarding guide](https://github.com/opensearch-project/demos/blob/main/ONBOARDING.md) for instructions. + +1. Navigate to the `demo` folder: + ```zsh + $ cd /demo + ``` + +1. Review the following files, as needed: + + * `.env`: + * Defines the OpenSearch and OpenSearch Dashboards version to use. The default is the latest version ({{site.opensearch_major_minor_version}}). + * Defines the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` variable required by versions 2.12 and later. + * `./custom-config/opensearch_dashboards.yml`: Includes the SAML settings for the default `opensearch_dashboards.yml` file. + * `./custom-config/config.yml`: Configures SAML for authentication. 
+ * `docker-compose.yml`: Defines an OpenSearch server node, an OpenSearch Dashboards server node, and a SAML server node. + * `./saml/config/authsources.php`: Contains the list of users that can be authenticated by this SAML domain. + +1. From the command line, run: + ```zsh + $ docker-compose up + ``` + +1. Access OpenSearch Dashboards at [http://localhost:5601](http://localhost:5601){:target='\_blank'}. + +1. Select `Log in with single sign-on`. This redirects you to the SAML login page. + +1. Log in to OpenSearch Dashboards with a user defined in `./saml/config/authsources.php` (such as `user1` with password `user1pass`). + +1. After logging in, note that the user ID shown in the upper-right corner of the screen is the same as the `NameID` attribute for the user defined in `./saml/config/authsources.php` of the SAML server (that is, `saml-test` for `user1`). 1. If you want to examine the SAML server, run `docker ps` to find its container ID and then `docker exec -it <container-id> /bin/bash`. diff --git a/assets/examples/saml-example-custom.zip b/assets/examples/saml-example-custom.zip deleted file mode 100644 index acb733ffd51858aec59d3f1a603da3b522cf5232..0000000000000000000000000000000000000000 GIT binary patch (binary content of the deleted zip file omitted) Date: Fri, 1 Mar 2024 16:59:27 +0000 Subject: [PATCH 05/23] Update docker.md (#6519) * Update docker.md Adding OPENSEARCH_INITIAL_ADMIN_PASSWORD variable to OpenSearch node for version 2.12.0. Signed-off-by: Pawel Wlodarczyk * Added note about 2.12 Signed-off-by: Heather Halter * Update _install-and-configure/install-opensearch/docker.md Signed-off-by: Heather Halter * Update _install-and-configure/install-opensearch/docker.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Heather Halter * Update docker.md Signed-off-by: Heather Halter * Update _install-and-configure/install-opensearch/docker.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Heather Halter --------- Signed-off-by: Pawel Wlodarczyk Signed-off-by: Heather Halter Co-authored-by: Heather Halter Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _install-and-configure/install-opensearch/docker.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/_install-and-configure/install-opensearch/docker.md b/_install-and-configure/install-opensearch/docker.md index 394039bfc2..00d9cf20e9 100644 --- a/_install-and-configure/install-opensearch/docker.md +++ b/_install-and-configure/install-opensearch/docker.md @@ -155,6 +155,13 @@ docker-compose -f /path/to/your-file.yml up If this is
your first time launching an OpenSearch cluster using Docker Compose, use the following example `docker-compose.yml` file. Save it in the home directory of your host and name it `docker-compose.yml`. This file will create a cluster that contains three containers: two containers running the OpenSearch service and a single container running OpenSearch Dashboards. These containers will communicate over a bridge network called `opensearch-net` and use two volumes, one for each OpenSearch node. Because this file does not explicitly disable the demo security configuration, self-signed TLS certificates are installed and internal users with default names and passwords are created. +### Setting a custom admin password + +Starting with OpenSearch 2.12, a custom admin password is required to set up a demo security configuration. For a Docker cluster set up using a `docker-compose.yml` file, do either of the following: + +1. Export `OPENSEARCH_INITIAL_ADMIN_PASSWORD` with a value in the same terminal session before running `docker-compose up`. +2. Create an `.env` file in the same folder as your `docker-compose.yml` file with the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` and strong password values. 
+ ### Sample docker-compose.yml ```yml @@ -170,6 +177,7 @@ services: - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager - bootstrap.memory_lock=true # Disable JVM heap memory swapping - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM + - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD} # Sets the demo admin user password when using demo configuration, required for OpenSearch 2.12 and later ulimits: memlock: soft: -1 # Set memlock to unlimited (no soft or hard limit) @@ -194,6 +202,7 @@ services: - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 - bootstrap.memory_lock=true - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" + - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD} ulimits: memlock: soft: -1 From 53ed63ce55b95a7b791723fa31f2b0f4986cafc9 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Fri, 1 Mar 2024 11:56:08 -0700 Subject: [PATCH 06/23] Add link to saved object (#6548) Signed-off-by: Melissa Vagi --- _dashboards/management/management-index.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/_dashboards/management/management-index.md b/_dashboards/management/management-index.md index c1757893ea..7edc4d06c2 100644 --- a/_dashboards/management/management-index.md +++ b/_dashboards/management/management-index.md @@ -9,7 +9,7 @@ has_children: true Introduced 2.10 {: .label .label-purple } -Dashboards Management serves as the command center for customizing OpenSearch Dashboards to your needs. A view of the interface is shown in the following image. +**Dashboards Management** serves as the command center for customizing OpenSearch Dashboards to your needs. A view of the interface is shown in the following image. 
Dashboards Management interface @@ -18,9 +18,9 @@ Dashboards Management serves as the command center for customizing OpenSearch Da ## Applications -The following applications are available in Dashboards Management: +The following applications are available in **Dashboards Management**: - **[Index Patterns]({{site.url}}{{site.baseurl}}/dashboards/management/index-patterns/):** To access OpenSearch data, you need to create an index pattern so that you can select the data you want to use and define the properties of the fields. The Index Pattern tool gives you the ability to create an index pattern from within the UI. Index patterns point to one or more indexes, data streams, or index aliases. - **[Data Sources]({{site.url}}{{site.baseurl}}/dashboards/management/multi-data-sources/):** The Data Sources tool is used to configure and manage the data sources that OpenSearch uses to collect and analyze data. You can use the tool to specify the source configuration in your copy of the [OpenSearch Dashboards configuration file]({{site.url}}{{site.baseurl}}https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/config/opensearch_dashboards.yml). -- **Saved Objects:** The Saved Objects tool helps you organize and manage your saved objects. Saved objects are files that store data, such as dashboards, visualizations, and maps, for later use. +- **[Saved Objects](https://opensearch.org/blog/enhancement-multiple-data-source-import-saved-object/):** The Saved Objects tool helps you organize and manage your saved objects. Saved objects are files that store data, such as dashboards, visualizations, and maps, for later use. - **[Advanced Settings]({{site.url}}{{site.baseurl}}/dashboards/management/advanced-settings/):** The Advanced Settings tool gives you the flexibility to personalize the behavior of OpenSearch Dashboards. 
The tool is divided into settings sections, such as General, Accessibility, and Notifications, and you can use it to customize and optimize many of your Dashboards settings. From bd73c3637f1dc657154ca8e4389d48ad0cc33a84 Mon Sep 17 00:00:00 2001 From: Heather Halter Date: Fri, 1 Mar 2024 11:18:04 -0800 Subject: [PATCH 07/23] Adds info about windows to Docker install (#6520) * adds info about windows Signed-off-by: Heather Halter * Update _install-and-configure/install-opensearch/docker.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: Heather Halter * Revised the intro sentence Signed-off-by: Heather Halter * text edits Signed-off-by: Heather Halter * updated based on editorial Signed-off-by: Heather Halter * Fixed instructions for setting admin password Signed-off-by: Heather Halter * Added copy button Signed-off-by: Heather Halter --------- Signed-off-by: Heather Halter Signed-off-by: Heather Halter Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- .../install-opensearch/docker.md | 27 ++++++++++++++----- 1 file changed, 21 insertions(+), 6 deletions(-) diff --git a/_install-and-configure/install-opensearch/docker.md b/_install-and-configure/install-opensearch/docker.md index 00d9cf20e9..189d647ce5 100644 --- a/_install-and-configure/install-opensearch/docker.md +++ b/_install-and-configure/install-opensearch/docker.md @@ -29,9 +29,11 @@ Docker Compose is a utility that allows users to launch multiple containers with If you need to install Docker Compose manually and your host supports Python, you can use [pip](https://pypi.org/project/pip/) to install the [Docker Compose package](https://pypi.org/project/docker-compose/) automatically. {: .tip} -## Important host settings +## Configure important host settings +Before installing OpenSearch using Docker, configure the following settings. 
These are the most important settings that can affect the performance of your services, but for additional information, see [important system settings]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/index/#important-settings){:target='\_blank'}. -Before launching OpenSearch you should review some [important system settings]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/index/#important-settings){:target='\_blank'} that can impact the performance of your services. +### Linux settings +For a Linux environment, run the following commands: 1. Disable memory paging and swapping performance on the host to improve performance. ```bash @@ -54,6 +56,14 @@ Before launching OpenSearch you should review some [important system settings]({ cat /proc/sys/vm/max_map_count ``` +### Windows settings +For Windows workloads using WSL through Docker Desktop, run the following commands in a terminal to set the `vm.max_map_count`: + +```bash +wsl -d docker-desktop +sysctl -w vm.max_map_count=262144 +``` + ## Run OpenSearch in a Docker container Official OpenSearch images are hosted on [Docker Hub](https://hub.docker.com/u/opensearchproject/) and [Amazon ECR](https://gallery.ecr.aws/opensearchproject/). If you want to inspect the images you can pull them individually using `docker pull`, such as in the following examples. @@ -153,14 +163,19 @@ You can specify a custom file location and name when invoking `docker-compose` w docker-compose -f /path/to/your-file.yml up ``` -If this is your first time launching an OpenSearch cluster using Docker Compose, use the following example `docker-compose.yml` file. Save it in the home directory of your host and name it `docker-compose.yml`. This file will create a cluster that contains three containers: two containers running the OpenSearch service and a single container running OpenSearch Dashboards. 
These containers will communicate over a bridge network called `opensearch-net` and use two volumes, one for each OpenSearch node. Because this file does not explicitly disable the demo security configuration, self-signed TLS certificates are installed and internal users with default names and passwords are created. +If this is your first time launching an OpenSearch cluster using Docker Compose, use the following example `docker-compose.yml` file. Save it in the home directory of your host and name it `docker-compose.yml`. This file creates a cluster that contains three containers: two containers running the OpenSearch service and a single container running OpenSearch Dashboards. These containers communicate over a bridge network called `opensearch-net` and use two volumes, one for each OpenSearch node. Because this file does not explicitly disable the demo security configuration, self-signed TLS certificates are installed and internal users with default names and passwords are created. ### Setting a custom admin password -Starting with OpenSearch 2.12, a custom admin password is required to set up a demo security configuration. For a Docker cluster set up using a `docker-compose.yml` file, do either of the following: +Starting with OpenSearch 2.12, a custom admin password is required to set up a demo security configuration. Do one of the following: -1. Export `OPENSEARCH_INITIAL_ADMIN_PASSWORD` with a value in the same terminal session before running `docker-compose up`. -2. Create an `.env` file in the same folder as your `docker-compose.yml` file with the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` and strong password values. +- Before running `docker-compose up`, set a new custom admin password using the following command: + ``` + export OPENSEARCH_INITIAL_ADMIN_PASSWORD= + ``` + {% include copy.html %} + +- Create an `.env` file in the same folder as your `docker-compose.yml` file with the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` variable set to a strong password value.
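For the `.env` option, the file could look like the following minimal sketch (the value shown is only a placeholder; choose your own strong password):

```bash
# .env file placed next to docker-compose.yml
# Placeholder value; replace with your own strong password
OPENSEARCH_INITIAL_ADMIN_PASSWORD=myStrongP@ssw0rd!
```

Docker Compose automatically reads a `.env` file in the project directory when substituting variables in `docker-compose.yml`.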
### Sample docker-compose.yml From 90d3bbc5dc246feef2ff3b2d6dbc2dff6206e7d4 Mon Sep 17 00:00:00 2001 From: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Date: Fri, 1 Mar 2024 13:55:52 -0600 Subject: [PATCH 08/23] Add copy buttons to demo configuration page (#6561) Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _security/configuration/demo-configuration.md | 23 +++++++++++++------ 1 file changed, 16 insertions(+), 7 deletions(-) diff --git a/_security/configuration/demo-configuration.md b/_security/configuration/demo-configuration.md index feb89f47ad..0f8cd4138e 100644 --- a/_security/configuration/demo-configuration.md +++ b/_security/configuration/demo-configuration.md @@ -31,8 +31,9 @@ Use the following steps to set up the Security plugin using Docker: 3. Run the following command: ```bash -$ docker-compose up +docker-compose up ``` +{% include copy.html %} ### Setting up a custom admin password **Note**: For OpenSearch versions 2.12 and later, you must set the initial admin password before installation. To customize the admin password, you can take the following steps: @@ -47,14 +48,16 @@ $ docker-compose up For TAR distributions on Linux, download the Linux setup files from the OpenSearch [Download & Get Started](https://opensearch.org/downloads.html) page. 
Then use the following command to run the demo configuration: ```bash -$ ./opensearch-tar-install.sh +./opensearch-tar-install.sh ``` +{% include copy.html %} For OpenSearch 2.12 or later, set a new custom admin password before installation by using the following command: ```bash -$ export OPENSEARCH_INITIAL_ADMIN_PASSWORD= +export OPENSEARCH_INITIAL_ADMIN_PASSWORD= ``` +{% include copy.html %} ### Windows @@ -63,12 +66,14 @@ For ZIP distributions on Windows, after downloading and extracting the setup fil ```powershell > .\opensearch-windows-install.bat ``` +{% include copy.html %} For OpenSearch 2.12 or later, set a new custom admin password before installation by running the following command: ```powershell > set OPENSEARCH_INITIAL_ADMIN_PASSWORD= ``` +{% include copy.html %} ### Helm @@ -85,28 +90,32 @@ extraEnvs: For RPM packages, install OpenSearch and set up the demo configuration by running the following command: ```bash -$ sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm +sudo yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm ``` +{% include copy.html %} For OpenSearch 2.12 or later, set a new custom admin password before installation by using the following command: ```bash -$ sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD= yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm +sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD= yum install opensearch-{{site.opensearch_version}}-linux-x64.rpm ``` +{% include copy.html %} ### DEB For DEB packages, install OpenSearch and set up the demo configuration by running the following command: ```bash -$ sudo dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb +sudo dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb ``` +{% include copy.html %} For OpenSearch 2.12 or later, set a new custom admin password before installation by using the following command: ```bash -$ sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD= dpkg -i 
opensearch-{{site.opensearch_version}}-linux-arm64.deb +sudo env OPENSEARCH_INITIAL_ADMIN_PASSWORD= dpkg -i opensearch-{{site.opensearch_version}}-linux-arm64.deb ``` +{% include copy.html %} ## Local distribution From b3be56704c0a833834a2fb42358e835719218dd1 Mon Sep 17 00:00:00 2001 From: Heather Halter Date: Fri, 1 Mar 2024 12:47:14 -0800 Subject: [PATCH 09/23] Updated the setting in 'certificate validation' section (#6553) Signed-off-by: Heather Halter --- _security/authentication-backends/ldap.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_security/authentication-backends/ldap.md b/_security/authentication-backends/ldap.md index 52ebb99b9b..c6caec4524 100755 --- a/_security/authentication-backends/ldap.md +++ b/_security/authentication-backends/ldap.md @@ -155,7 +155,7 @@ By default, the Security plugin validates the TLS certificate of the LDAP server ``` plugins.security.ssl.transport.pemtrustedcas_filepath: ... -plugins.security.ssl.http.truststore_filepath: ... +plugins.security.ssl.transport.truststore_filepath: ... ``` If your server uses a certificate signed by a different CA, import this CA into your truststore or add it to your trusted CA file on each node. 
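To illustrate that last step, the following sketch appends an additional CA to the PEM bundle that `plugins.security.ssl.transport.pemtrustedcas_filepath` points to. All paths and file contents here are placeholders, not values from an actual cluster:

```bash
# Sketch: add another trusted CA to the nodes' PEM CA bundle.
# Paths and certificate contents are placeholders for illustration.
CONFIG_DIR=$(mktemp -d)
printf '%s\n' "EXISTING ROOT CA (placeholder)" > "$CONFIG_DIR/trusted-cas.pem"
printf '%s\n' "LDAP SERVER CA (placeholder)" > "$CONFIG_DIR/ldap-ca.pem"

# Appending preserves the CAs that are already trusted.
cat "$CONFIG_DIR/ldap-ca.pem" >> "$CONFIG_DIR/trusted-cas.pem"

wc -l < "$CONFIG_DIR/trusted-cas.pem"   # the bundle now holds both entries
```

If you use a Java truststore instead of a PEM bundle, the JDK's `keytool -importcert` command performs the equivalent import.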
From 93d07a0dcba48d2b2c433347ec6dedcb6fd5e762 Mon Sep 17 00:00:00 2001 From: John Heraghty <148883955+john-eliatra@users.noreply.github.com> Date: Fri, 1 Mar 2024 22:58:47 +0000 Subject: [PATCH 10/23] Add 'DLS and multiple roles' section to DLS topic (#6408) * explaination of setting plugins.security.dfm_empty_overrides_all: true Signed-off-by: leanne.laceybyrne@eliatra.com * datadog grammer corrected in documentation Signed-off-by: leanne.laceybyrne@eliatra.com * Update _security/access-control/document-level-security.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _security/access-control/document-level-security.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _security/access-control/document-level-security.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _security/access-control/document-level-security.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * Update _security/access-control/document-level-security.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * adding more examples of setting for dsl to make it clearer Signed-off-by: leanne.laceybyrne@eliatra.com * small edits to fix spacing in previous commit Signed-off-by: leanne.laceybyrne@eliatra.com * Update _security/access-control/document-level-security.md Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> * reviewdog fixes 
Signed-off-by: leanne.laceybyrne@eliatra.com * Formatting edits. Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update document-level-security.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: leanne.laceybyrne@eliatra.com Signed-off-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: leanne.laceybyrne@eliatra.com Co-authored-by: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Nathan Bower --- .../access-control/document-level-security.md | 94 +++++++++++++++++++ 1 file changed, 94 insertions(+) diff --git a/_security/access-control/document-level-security.md b/_security/access-control/document-level-security.md index d1c275119b..3f2049a1e2 100644 --- a/_security/access-control/document-level-security.md +++ b/_security/access-control/document-level-security.md @@ -185,3 +185,97 @@ plugins.security.dls.mode: filter-level Lucene-level DLS | `lucene-level` | This setting makes all DLS queries apply to the Lucene level. | Lucene-level DLS modifies Lucene queries and data structures directly. This is the most efficient mode but does not allow certain advanced constructs in DLS queries, including TLQs. Filter-level DLS | `filter-level` | This setting makes all DLS queries apply to the filter level. | In this mode, OpenSearch applies DLS by modifying queries that OpenSearch receives. 
This allows the use of term-level lookup queries (TLQs) in DLS queries, but you can only use the `get`, `search`, `mget`, and `msearch` operations to retrieve data from the protected index. Additionally, cross-cluster searches are limited with this mode. Adaptive | `adaptive-level` | The default setting that allows OpenSearch to automatically choose the mode. | DLS queries without TLQs are executed in Lucene-level mode, while DLS queries that contain TLQs are executed in filter-level mode. + +## DLS and multiple roles + +OpenSearch combines all DLS queries with the logical `OR` operator. However, when a role that uses DLS is combined with another security role that doesn't use DLS, the query results are filtered to display only documents matching the DLS from the first role. This filter rule also applies to roles that do not grant permission to read documents. + +### When to enable `plugins.security.dfm_empty_overrides_all` + +When to enable the `plugins.security.dfm_empty_overrides_all` setting depends on whether you want to restrict user access to documents without DLS. + + +To ensure access is not restricted, you can set the following configuration in `opensearch.yml`: + +``` +plugins.security.dfm_empty_overrides_all: true +``` +{% include copy.html %} + + +The following examples show the level of access granted to roles with and without DLS enabled, depending on how the roles interact. These examples can help you decide when to enable the `plugins.security.dfm_empty_overrides_all` setting. + +#### Example: Document access + +This example demonstrates that enabling `plugins.security.dfm_empty_overrides_all` is beneficial in scenarios where you need specific users to have unrestricted access to documents despite being part of a broader group with restricted access.
+ +**Role A with DLS**: This role is granted to a broad group of users and includes DLS to restrict access to specific documents, as shown in the following permission set: + +``` +{ + "index_permissions": [ + { + "index_patterns": ["example-index"], + "dls": "[.. some DLS here ..]", + "allowed_actions": ["indices:data/read/search"] + } + ] +} +``` + +**Role B without DLS:** This role is specifically granted to certain users, such as administrators, and does not include DLS, as shown in the following permission set: + +``` +{ + "index_permissions" : [ + { + "index_patterns" : ["*"], + "allowed_actions" : ["indices:data/read/search"] + } + ] +} +``` +{% include copy.html %} + +Setting `plugins.security.dfm_empty_overrides_all` to `true` ensures that administrators assigned Role B can override any DLS restrictions imposed by Role A. This allows specific Role B users to access all documents, regardless of Role A's DLS restrictions. + +#### Example: Search template access + +In this example, two roles are defined, one with DLS and another without DLS, granting access to search templates: + +**Role A with DLS:** + +``` +{ + "index_permissions": [ + { + "index_patterns": [ + "example-index" + ], + "dls": "[.. some DLS here ..]", + "allowed_actions": [ + "indices:data/read/search" + ] + } + ] +} +``` +{% include copy.html %} + +**Role B, without DLS**, which only grants access to search templates: + +``` +{ + "index_permissions" : [ + { + "index_patterns" : [ "*" ], + "allowed_actions" : [ "indices:data/read/search/template" ] + } + ] +} +``` +{% include copy.html %} + +When a user has both Role A and Role B permissions, the query results are filtered based on Role A's DLS, even though Role B doesn't use DLS. The DLS settings are retained, and the returned access is appropriately restricted.
+ +When a user is assigned both Role A and Role B and the `plugins.security.dfm_empty_overrides_all` setting is enabled, Role B's permissions will override Role A's restrictions, allowing that user to access all documents. This ensures that the role without DLS takes precedence in the search query response. From 9156e32cfba02eddf32742cd88d5fd5b5050c9c1 Mon Sep 17 00:00:00 2001 From: Jonny Coddington Date: Mon, 4 Mar 2024 17:09:33 +0000 Subject: [PATCH 11/23] Updated incorrect `DeleteRequest` to `DeleteIndexRequest` (#6568) Signed-off-by: Jonny Coddington --- _clients/java.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_clients/java.md b/_clients/java.md index e47070bdc9..4c1e06a44b 100644 --- a/_clients/java.md +++ b/_clients/java.md @@ -344,7 +344,7 @@ client.delete(b -> b.index(index).id("1")); The following sample code deletes an index: ```java -DeleteIndexRequest deleteIndexRequest = new DeleteRequest.Builder().index(index).build(); +DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest.Builder().index(index).build(); DeleteIndexResponse deleteIndexResponse = client.indices().delete(deleteIndexRequest); ``` {% include copy.html %} From 174078f90e01cb804cf84f4a853efeeff251a47c Mon Sep 17 00:00:00 2001 From: Craig Perkins Date: Mon, 4 Mar 2024 17:15:43 -0500 Subject: [PATCH 12/23] Update keystore documentation to remove placeholder section (#6566) * Update keystore documentation to remove placeholder section Signed-off-by: Craig Perkins * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Craig Perkins Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _security/configuration/opensearch-keystore.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/_security/configuration/opensearch-keystore.md
b/_security/configuration/opensearch-keystore.md index 6574669a9d..8a6f3357df 100644 --- a/_security/configuration/opensearch-keystore.md +++ b/_security/configuration/opensearch-keystore.md @@ -107,10 +107,6 @@ After this command, you will be prompted to enter the secret key securely. No response exists for this command. To confirm that the setting was deleted, use `opensearch-keystore list`. -## Referring to keystore entries +## KeyStore entries as OpenSearch settings -After a setting has been added to a keystore, you can refer back to that setting in your OpenSearch configuration. To refer back to the setting, add the keystore setting name as a placeholder in the `opensearch.yml` configuration file, as shown in the following example: - -```bash -plugins.security.ssl.http.pemkey_password_secure: ${plugins.security.ssl.http.pemkey_password_secure} -``` +After a setting has been added to a keystore, it is implicitly added to the OpenSearch configuration as if it were another entry in `opensearch.yml`. To modify a keystore entry, use `./bin/opensearch-keystore upgrade `. To remove an entry, use `./bin/opensearch-keystore remove `.
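Putting the commands above together, a typical keystore session might look like the following sketch (run from the OpenSearch installation directory; the secure setting name is just an example, and these commands require an actual OpenSearch installation):

```bash
# Create the keystore, add a secure setting (you are prompted for its value),
# list the entries to confirm it was stored, then remove it again.
./bin/opensearch-keystore create
./bin/opensearch-keystore add plugins.security.ssl.http.pemkey_password_secure
./bin/opensearch-keystore list
./bin/opensearch-keystore remove plugins.security.ssl.http.pemkey_password_secure
```

After the `add` step, OpenSearch picks up the setting at startup as if it appeared in `opensearch.yml`.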
From 9a647c3e68e101f8ed04019536342414195c3702 Mon Sep 17 00:00:00 2001 From: Guido Lena Cota Date: Tue, 5 Mar 2024 19:30:41 +0100 Subject: [PATCH 13/23] Fix minor typos (#6556) * Fix minor typos Signed-off-by: Guido Lena Cota * Apply suggestions from code review Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --------- Signed-off-by: Guido Lena Cota Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _search-plugins/knn/knn-index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_search-plugins/knn/knn-index.md b/_search-plugins/knn/knn-index.md index 15fab5e4e1..4a527f3bcb 100644 --- a/_search-plugins/knn/knn-index.md +++ b/_search-plugins/knn/knn-index.md @@ -208,8 +208,8 @@ The following example uses the `hnsw` method without specifying an encoder (by d Parameter Name | Required | Default | Updatable | Description :--- | :--- | :--- | :--- | :--- -`m` | false | 1 | false | Determine how many many sub-vectors to break the vector into. sub-vectors are encoded independently of each other. This dimension of the vector must be divisible by `m`. Max value is 1024. -`code_size` | false | 8 | false | Determines the number of bits to encode a sub-vector into. Max value is 8. **Note** --- for IVF, this value must be less than or equal to 8. For HNSW, this value can only be 8. +`m` | false | 1 | false | Determines the number of subvectors into which to break the vector. Subvectors are encoded independently of each other. This dimension of the vector must be divisible by `m`. Maximum value is 1,024. +`code_size` | false | 8 | false | Determines the number of bits into which to encode a subvector. Maximum value is 8. For IVF, this value must be less than or equal to 8. For HNSW, this value can only be 8.
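To make the `m` and `code_size` parameters concrete, the following is a sketch of a mapping that pairs the `hnsw` method with the `pq` encoder. The index and field names are made up; note that the dimension (8) is divisible by `m` (4), and `code_size` is 8, as required for HNSW:

```json
PUT /example-pq-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "knn_vector",
        "dimension": 8,
        "method": {
          "name": "hnsw",
          "engine": "faiss",
          "space_type": "l2",
          "parameters": {
            "encoder": {
              "name": "pq",
              "parameters": {
                "m": 4,
                "code_size": 8
              }
            }
          }
        }
      }
    }
  }
}
```
{% include copy-curl.html %}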
### Choosing the right method From 1fbdd03a71ae539500f8a6cbddcbcbeeee6e22a8 Mon Sep 17 00:00:00 2001 From: Daniel Widdis Date: Tue, 5 Mar 2024 10:32:24 -0800 Subject: [PATCH 14/23] Add documentation for Search Workflow State API (#6546) * Add documentation for Search Workflow State API Signed-off-by: Daniel Widdis * Improve wording per review suggestions Signed-off-by: Daniel Widdis * Match indendation on example queries Signed-off-by: Daniel Widdis --------- Signed-off-by: Daniel Widdis --- .../api/get-workflow-status.md | 9 ++- _automating-configurations/api/index.md | 1 + .../api/search-workflow-state.md | 63 +++++++++++++++++++ .../api/search-workflow.md | 4 +- 4 files changed, 72 insertions(+), 5 deletions(-) create mode 100644 _automating-configurations/api/search-workflow-state.md diff --git a/_automating-configurations/api/get-workflow-status.md b/_automating-configurations/api/get-workflow-status.md index 0b0e6e9437..03870af174 100644 --- a/_automating-configurations/api/get-workflow-status.md +++ b/_automating-configurations/api/get-workflow-status.md @@ -83,7 +83,8 @@ While provisioning is in progress, OpenSearch returns a partial resource list: { "workflow_step_name": "create_connector", "workflow_step_id": "create_connector_1", - "connector_id": "NdjCQYwBLmvn802B0IwE" + "resource_type": "connector_id", + "resource_id": "NdjCQYwBLmvn802B0IwE" } ] } @@ -99,12 +100,14 @@ Upon provisioning completion, OpenSearch returns the full resource list: { "workflow_step_name": "create_connector", "workflow_step_id": "create_connector_1", - "connector_id": "NdjCQYwBLmvn802B0IwE" + "resource_type": "connector_id", + "resource_id": "NdjCQYwBLmvn802B0IwE" }, { "workflow_step_name": "register_remote_model", "workflow_step_id": "register_model_2", - "model_id": "N9jCQYwBLmvn802B0oyh" + "resource_type": "model_id", + "resource_id": "N9jCQYwBLmvn802B0oyh" } ] } diff --git a/_automating-configurations/api/index.md b/_automating-configurations/api/index.md index 
f67c5cb664..5fb050539b 100644 --- a/_automating-configurations/api/index.md +++ b/_automating-configurations/api/index.md @@ -19,5 +19,6 @@ OpenSearch supports the following workflow APIs: * [Get workflow status]({{site.url}}{{site.baseurl}}/automating-configurations/api/get-workflow-status/) * [Get workflow steps]({{site.url}}{{site.baseurl}}/automating-configurations/api/get-workflow-steps/) * [Search workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/search-workflow/) +* [Search workflow state]({{site.url}}{{site.baseurl}}/automating-configurations/api/search-workflow-state/) * [Deprovision workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/deprovision-workflow/) * [Delete workflow]({{site.url}}{{site.baseurl}}/automating-configurations/api/delete-workflow/) \ No newline at end of file diff --git a/_automating-configurations/api/search-workflow-state.md b/_automating-configurations/api/search-workflow-state.md new file mode 100644 index 0000000000..9e21f14392 --- /dev/null +++ b/_automating-configurations/api/search-workflow-state.md @@ -0,0 +1,63 @@ +--- +layout: default +title: Search for a workflow state +parent: Workflow APIs +nav_order: 65 +--- + +# Search for a workflow state + +This is an experimental feature and is not recommended for use in a production environment. For updates on the progress of the feature or if you want to leave feedback, see the associated [GitHub issue](https://github.com/opensearch-project/flow-framework/issues/475). +{: .warning} + +You can search for resources created by workflows by matching a query to a field. The fields you can search correspond to those returned by the [Get Workflow Status API]({{site.url}}{{site.baseurl}}/automating-configurations/api/get-workflow-status/).
+ +## Path and HTTP methods + +```json +GET /_plugins/_flow_framework/workflow/state/_search +POST /_plugins/_flow_framework/workflow/state/_search +``` + +#### Example request: All workflows with a state of `NOT_STARTED` + +```json +GET /_plugins/_flow_framework/workflow/state/_search +{ + "query": { + "match": { + "state": "NOT_STARTED" + } + } +} +``` +{% include copy-curl.html %} + +#### Example request: All workflows that have a `resources_created` field with a `workflow_step_id` of `register_model_2` + +```json +GET /_plugins/_flow_framework/workflow/state/_search +{ + "query": { + "nested": { + "path": "resources_created", + "query": { + "bool": { + "must": [ + { + "match": { + "resources_created.workflow_step_id": "register_model_2" + } + } + ] + } + } + } + } +} +``` +{% include copy-curl.html %} + +#### Example response + +The response contains documents matching the search parameters. \ No newline at end of file diff --git a/_automating-configurations/api/search-workflow.md b/_automating-configurations/api/search-workflow.md index 8227bbd50b..7eb8890f7e 100644 --- a/_automating-configurations/api/search-workflow.md +++ b/_automating-configurations/api/search-workflow.md @@ -24,7 +24,7 @@ POST /_plugins/_flow_framework/workflow/_search ```json GET /_plugins/_flow_framework/workflow/_search { - "query": { + "query": { "match_all": {} } } @@ -36,7 +36,7 @@ GET /_plugins/_flow_framework/workflow/_search ```json GET /_plugins/_flow_framework/workflow/_search { - "query": { + "query": { "match": { "use_case": "REMOTE_MODEL_DEPLOYMENT" } From 544ff2431eeb55ccbb1110a49ba97fb2350ed3d6 Mon Sep 17 00:00:00 2001 From: eugene7421 <158471256+eugene7421@users.noreply.github.com> Date: Tue, 5 Mar 2024 20:01:07 +0000 Subject: [PATCH 15/23] Updated index permissions as per customer request #20230726 (#6404) * I updated index permissions as per customer request Signed-off-by: eugene7421 * Update permissions.md Signed-off-by: Naarcha-AWS 
<97990722+Naarcha-AWS@users.noreply.github.com> * fixing datadog issues Signed-off-by: leanne.laceybyrne@eliatra.com * fix more links Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update URL strcuture. Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * reviewdog issues ammeded Signed-off-by: leanne.laceybyrne@eliatra.com * Update permissions.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update permissions.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Update permissions.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: eugene7421 Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Signed-off-by: leanne.laceybyrne@eliatra.com Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: leanne.laceybyrne@eliatra.com --- _security/access-control/permissions.md | 152 ++++++++++++------------ 1 file changed, 78 insertions(+), 74 deletions(-) diff --git a/_security/access-control/permissions.md b/_security/access-control/permissions.md index 60939612fd..226eb259c7 100644 --- a/_security/access-control/permissions.md +++ b/_security/access-control/permissions.md @@ -380,80 +380,84 @@ See [Index templates]({{site.url}}{{site.baseurl}}/im-plugin/index-templates/). These permissions apply to an index or index pattern. You might want a user to have read access to all indexes (that is, `*`), but write access to only a few (for example, `web-logs` and `product-catalog`). 
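For example, a role granting that combination of access could be sketched as follows (the index names come from the example above, and the action list is illustrative rather than exhaustive):

```json
{
  "index_permissions": [
    {
      "index_patterns": ["*"],
      "allowed_actions": ["indices:data/read/search", "indices:data/read/get"]
    },
    {
      "index_patterns": ["web-logs", "product-catalog"],
      "allowed_actions": ["indices:data/write/index", "indices:data/write/bulk"]
    }
  ]
}
```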
-- indices:admin/aliases -- indices:admin/aliases/get -- indices:admin/analyze -- indices:admin/cache/clear -- indices:admin/close -- indices:admin/close* -- indices:admin/create (create indexes) -- indices:admin/data_stream/create -- indices:admin/data_stream/delete -- indices:admin/data_stream/get -- indices:admin/delete (delete indexes) -- indices:admin/exists -- indices:admin/flush -- indices:admin/flush* -- indices:admin/forcemerge -- indices:admin/get (retrieve index and mapping) -- indices:admin/mapping/put -- indices:admin/mappings/fields/get -- indices:admin/mappings/fields/get* -- indices:admin/mappings/get -- indices:admin/open -- indices:admin/plugins/replication/index/setup/validate -- indices:admin/plugins/replication/index/start -- indices:admin/plugins/replication/index/pause -- indices:admin/plugins/replication/index/resume -- indices:admin/plugins/replication/index/stop -- indices:admin/plugins/replication/index/update -- indices:admin/plugins/replication/index/status_check -- indices:admin/refresh -- indices:admin/refresh* -- indices:admin/resolve/index -- indices:admin/rollover -- indices:admin/seq_no/global_checkpoint_sync -- indices:admin/settings/update -- indices:admin/shards/search_shards -- indices:admin/template/delete -- indices:admin/template/get -- indices:admin/template/put -- indices:admin/upgrade -- indices:admin/validate/query -- indices:data/read/explain -- indices:data/read/field_caps -- indices:data/read/field_caps* -- indices:data/read/get -- indices:data/read/mget -- indices:data/read/mget* -- indices:data/read/msearch -- indices:data/read/msearch/template -- indices:data/read/mtv (multi-term vectors) -- indices:data/read/mtv* -- indices:data/read/plugins/replication/file_chunk -- indices:data/read/plugins/replication/changes -- indices:data/read/scroll -- indices:data/read/scroll/clear -- indices:data/read/search -- indices:data/read/search* -- indices:data/read/search/template -- indices:data/read/tv (term vectors) -- 
indices:data/write/bulk -- indices:data/write/bulk* -- indices:data/write/delete (delete documents) -- indices:data/write/delete/byquery -- indices:data/write/plugins/replication/changes -- indices:data/write/index (add documents to existing indexes) -- indices:data/write/reindex -- indices:data/write/update -- indices:data/write/update/byquery -- indices:monitor/data_stream/stats -- indices:monitor/recovery -- indices:monitor/segments -- indices:monitor/settings/get -- indices:monitor/shard_stores -- indices:monitor/stats -- indices:monitor/upgrade + +| Permission | Description | +| --- | --- | +| `indices:admin/aliases` | Permissions for [index aliases]({{site.url}}{{site.baseurl}}/im-plugin/index-alias/). | +| `indices:admin/aliases/get` | Permission to get [index aliases]({{site.url}}{{site.baseurl}}/im-plugin/index-alias/). | +| `indices:admin/analyze` | Permission to use the [Analyze API]({{site.url}}{{site.baseurl}}/api-reference/analyze-apis/). | +| `indices:admin/cache/clear` | Permission to [clear cache]({{site.url}}{{site.baseurl}}/api-reference/index-apis/clear-index-cache/). | +| `indices:admin/close` | Permission to [close an index]({{site.url}}{{site.baseurl}}/api-reference/index-apis/close-index/). | +| `indices:admin/close*` | Permission to [close an index]({{site.url}}{{site.baseurl}}/api-reference/index-apis/close-index/). | +| `indices:admin/create` | Permission to [create indexes]({{site.url}}{{site.baseurl}}/api-reference/index-apis/create-index/). | +| `indices:admin/data_stream/create` | Permission to create [data streams]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/datastream/#creating-a-data-stream). | +| `indices:admin/data_stream/delete` | Permission to [delete data streams]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/datastream/#deleting-a-data-stream). | +| `indices:admin/data_stream/get` | Permission to [get data streams]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/datastream/#viewing-a-data-stream). 
| +| `indices:admin/delete` | Permission to [delete indexes]({{site.url}}{{site.baseurl}}/api-reference/index-apis/delete-index/). | +| `indices:admin/exists` | Permission to use [exists query]({{site.url}}{{site.baseurl}}/query-dsl/term/exists/). | +| `indices:admin/flush` | Permission to [flush an index]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/index-management/#flushing-an-index). | +| `indices:admin/flush*` | Permission to [flush an index]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/index-management/#flushing-an-index). | +| `indices:admin/forcemerge` | Permission to force merge indexes and data streams. | +| `indices:admin/get` | Permission to get index and mapping. | +| `indices:admin/mapping/put` | Permission to add new mappings and fields to an index. | +| `indices:admin/mappings/fields/get` | Permission to get mappings fields. | +| `indices:admin/mappings/fields/get*` | Permission to get mappings fields. | +| `indices:admin/mappings/get` | Permission to [get mappings]({{site.url}}{{site.baseurl}}/security-analytics/api-tools/mappings-api/#get-mappings). | +| `indices:admin/open` | Permission to [open an index]({{site.url}}{{site.baseurl}}/api-reference/index-apis/open-index/). | +| `indices:admin/plugins/replication/index/setup/validate` | Permission to validate a connection to a [remote cluster]({{site.url}}{{site.baseurl}}/tuning-your-cluster/replication-plugin/getting-started/#set-up-a-cross-cluster-connection). | +| `indices:admin/plugins/replication/index/start` | Permission to [start cross-cluster replication]({{site.url}}{{site.baseurl}}/tuning-your-cluster/replication-plugin/getting-started/#start-replication). | +| `indices:admin/plugins/replication/index/pause` | Permission to pause cross-cluster replication. | +| `indices:admin/plugins/replication/index/resume` | Permission to resume cross-cluster replication. | +| `indices:admin/plugins/replication/index/stop` | Permission to stop cross-cluster replication. 
| +| `indices:admin/plugins/replication/index/update` | Permission to update cross-cluster replication settings. | +| `indices:admin/plugins/replication/index/status_check` | Permission to check the status of cross-cluster replication. | +| `indices:admin/refresh` | Permission to use the [index refresh API]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/index-management/#refreshing-an-index). | +| `indices:admin/refresh*` | Permission to use the index refresh API. | +| `indices:admin/resolve/index` | Permission to resolve index names, index aliases and data streams. | +| `indices:admin/rollover` | Permission to perform [index rollover]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/rollover/). | +| `indices:admin/seq_no/global_checkpoint_sync` | Permission to perform a global checkpoint sync. | +| `indices:admin/settings/update` | Permission to [update index settings]({{site.url}}{{site.baseurl}}/api-reference/index-apis/update-settings/). | +| `indices:admin/shards/search_shards` | Permission to perform [cross cluster search]({{site.url}}{{site.baseurl}}/security/access-control/cross-cluster-search/). | +| `indices:admin/template/delete` | Permission to [delete index templates]({{site.url}}{{site.baseurl}}/im-plugin/index-templates/#delete-a-template). | +| `indices:admin/template/get` | Permission to [get index templates]({{site.url}}{{site.baseurl}}/im-plugin/index-templates/#retrieve-a-template). | +| `indices:admin/template/put` | Permission to [create index templates]({{site.url}}{{site.baseurl}}/im-plugin/index-templates/#create-a-template). | +| `indices:admin/upgrade` | Permission for administrators to perform upgrades. | +| `indices:admin/validate/query` | Permission to validate a specific query. | +| `indices:data/read/explain` | Permission to run the [Explain API]({{site.url}}{{site.baseurl}}/api-reference/explain/). 
| +| `indices:data/read/field_caps` | Permission to run the [Field Capabilities API]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/alias/#using-aliases-in-field-capabilities-api-operations). | +| `indices:data/read/field_caps*` | Permission to run the Field Capabilities API. | +| `indices:data/read/get` | Permission to read index data. | +| `indices:data/read/mget` | Permission to run [multiple GET operations]({{site.url}}{{site.baseurl}}/api-reference/document-apis/multi-get/) in one request. | +| `indices:data/read/mget*` | Permission to run multiple GET operations in one request. | +| `indices:data/read/msearch` | Permission to run [multiple search]({{site.url}}{{site.baseurl}}/api-reference/multi-search/) requests into a single request. | +| `indices:data/read/msearch/template` | Permission to bundle [multiple search templates]({{site.url}}{{site.baseurl}}/api-reference/search-template/#multiple-search-templates) and send them to your OpenSearch cluster in a single request. | +| `indices:data/read/mtv` | Permission to retrieve multiple term vectors with a single request. | +| `indices:data/read/mtv*` | Permission to retrieve multiple term vectors with a single request. | +| `indices:data/read/plugins/replication/file_chunk` | Permission to check files during segment replication. | +| `indices:data/read/plugins/replication/changes` | Permission to make changes to segment replication settings. | +| `indices:data/read/scroll` | Permission to scroll data. | +| `indices:data/read/scroll/clear` | Permission to clear read scroll data. | +| `indices:data/read/search` | Permission to [search]({{site.url}}{{site.baseurl}}/api-reference/search/) data.| +| `indices:data/read/search*` | Permission to search data. | +| `indices:data/read/search/template` | Permission to read a search template. | +| `indices:data/read/tv` | Permission to retrieve information and statistics for terms in the fields of a particular document. 
| +| `indices:data/write/bulk` | Permission to run a [bulk]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) request. | +| `indices:data/write/bulk*` | Permission to run a bulk request. | +| `indices:data/write/delete` | Permission to [delete documents]({{site.url}}{{site.baseurl}}/api-reference/document-apis/delete-document/). | +| `indices:data/write/delete/byquery` | Permission to delete all documents that [match a query]({{site.url}}{{site.baseurl}}/api-reference/document-apis/delete-by-query/). | +| `indices:data/write/plugins/replication/changes` | | +| `indices:data/write/index` | Permission to add documents to existing indexes. See also [Index document]( {{site.url}}{{site.baseurl}}/api-reference/document-apis/index-document/ ) | +| `indices:data/write/reindex` | Permission to run a [reindex]({{site.url}}{{site.baseurl}}/im-plugin/reindex-data/). | +| `indices:data/write/update` | Permission to update an index. | +| `indices:data/write/update/byquery` | Permission to run the script to update all of the documents that [match the query]({{site.url}}{{site.baseurl}}/api-reference/document-apis/update-by-query/). | +| `indices:monitor/data_stream/stats` | Permission to get data stream stats. | +| `indices:monitor/recovery` | Permission to access recovery stats. | +| `indices:monitor/segments` | Permission to access segment stats. | +| `indices:monitor/settings/get` | Permission to get monitor settings. | +| `indices:monitor/shard_stores` | Permission to access shard store stats. | +| `indices:monitor/stats` | Permission to access monitoring stats. | +| `indices:monitor/upgrade` | Permission to access upgrade stats. 
| + ## Security REST permissions From aa16bc0eab8faac6c6a234e2ed8b3f45983c9640 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Tue, 5 Mar 2024 14:31:14 -0700 Subject: [PATCH 16/23] Copy edit title (#6583) Signed-off-by: Melissa Vagi --- _data-prepper/common-use-cases/log-enrichment.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_data-prepper/common-use-cases/log-enrichment.md b/_data-prepper/common-use-cases/log-enrichment.md index 49861f3d79..c3cb7e5f03 100644 --- a/_data-prepper/common-use-cases/log-enrichment.md +++ b/_data-prepper/common-use-cases/log-enrichment.md @@ -1,11 +1,11 @@ --- layout: default -title: Log enrichment with Data Prepper +title: Log enrichment parent: Common use cases nav_order: 50 --- -# Log enrichment with Data Prepper +# Log enrichment You can perform different types of log enrichment with Data Prepper, including: From d460767d25c2ea61628694c4871cebb23d3e257e Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 5 Mar 2024 16:49:09 -0500 Subject: [PATCH 17/23] Add 1.3.15 to version history (#6582) * Add 1.3.15 to version history Signed-off-by: Fanit Kolchina * Typo fix Signed-off-by: Fanit Kolchina --------- Signed-off-by: Fanit Kolchina --- _about/version-history.md | 1 + 1 file changed, 1 insertion(+) diff --git a/_about/version-history.md b/_about/version-history.md index 39b4a2a03d..25e345568f 100644 --- a/_about/version-history.md +++ b/_about/version-history.md @@ -27,6 +27,7 @@ OpenSearch version | Release highlights | Release date [2.0.1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.1.md) | Includes bug fixes and maintenance updates for Alerting and Anomaly Detection. 
| 16 June 2022 [2.0.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.0.md) | Includes document-level monitors for alerting, OpenSearch Notifications plugins, and Geo Map Tiles in OpenSearch Dashboards. Also adds support for Lucene 9 and bug fixes for all OpenSearch plugins. For a full list of release highlights, see the Release Notes. | 26 May 2022 [2.0.0-rc1](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.0.0-rc1.md) | The Release Candidate for 2.0.0. This version allows you to preview the upcoming 2.0.0 release before the GA release. The preview release adds document-level alerting, support for Lucene 9, and the ability to use term lookup queries in document level security. | 03 May 2022 +[1.3.15](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.15.md) | Includes bug fixes and maintenance updates for cross-cluster replication, SQL, OpenSearch Dashboards reporting, and alerting. | 05 March 2024 [1.3.14](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.14.md) | Includes bug fixes and maintenance updates for OpenSearch security and OpenSearch Dashboards security. | 12 December 2023 [1.3.13](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.13.md) | Includes bug fixes for Anomaly Detection, adds maintenance updates and infrastructure enhancements. | 21 September 2023 [1.3.12](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-1.3.12.md) | Adds maintenance updates for OpenSearch security and OpenSearch Dashboards observability. Includes bug fixes for observability, OpenSearch Dashboards visualizations, and OpenSearch security. 
| 10 August 2023 From 254e099a2f1e3c75f28977bf3c44b9ec1771b893 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Tue, 5 Mar 2024 14:54:05 -0700 Subject: [PATCH 18/23] Add copy processor to table (#6585) * Add copy processor to table Signed-off-by: Melissa Vagi * Update _ingest-pipelines/processors/index-processors.md Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi Co-authored-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _ingest-pipelines/processors/index-processors.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/_ingest-pipelines/processors/index-processors.md b/_ingest-pipelines/processors/index-processors.md index 781780e47e..4a962937ec 100644 --- a/_ingest-pipelines/processors/index-processors.md +++ b/_ingest-pipelines/processors/index-processors.md @@ -30,7 +30,8 @@ Processor type | Description :--- | :--- `append` | Adds one or more values to a field in a document. `bytes` | Converts a human-readable byte value to its value in bytes. -`convert` | Changes the data type of a field in a document. +`convert` | Changes the data type of a field in a document. +`copy` | Copies an entire object in an existing field to another field. `csv` | Extracts CSVs and stores them as individual fields in a document. `date` | Parses dates from fields and then uses the date or timestamp as the timestamp for a document. `date_index_name` | Indexes documents into time-based indexes based on a date or timestamp field in a document. 
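The `copy` processor row added by the patch above can be illustrated with a small pipeline definition. This is a sketch only — the `source_field` and `target_field` parameter names are assumptions based on the naming conventions of other OpenSearch ingest processors, and the pipeline name is hypothetical:

```json
PUT _ingest/pipeline/copy_example
{
  "description": "Copies the object in the `message` field to `message_copy`",
  "processors": [
    {
      "copy": {
        "source_field": "message",
        "target_field": "message_copy"
      }
    }
  ]
}
```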
From bfed90f699675e84bb04c86b21a0bada47a5de0c Mon Sep 17 00:00:00 2001 From: leanneeliatra <131779422+leanneeliatra@users.noreply.github.com> Date: Wed, 6 Mar 2024 14:46:03 +0000 Subject: [PATCH 19/23] fixing rendering issues on table updates (#6603) Signed-off-by: leanne.laceybyrne@eliatra.com --- _security/access-control/permissions.md | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/_security/access-control/permissions.md b/_security/access-control/permissions.md index 226eb259c7..4f8df5e042 100644 --- a/_security/access-control/permissions.md +++ b/_security/access-control/permissions.md @@ -381,8 +381,9 @@ See [Index templates]({{site.url}}{{site.baseurl}}/im-plugin/index-templates/). These permissions apply to an index or index pattern. You might want a user to have read access to all indexes (that is, `*`), but write access to only a few (for example, `web-logs` and `product-catalog`). -| Permission | Description | -| --- | --- | + +| **Permission** | **Description** | +| :--- | :--- | | `indices:admin/aliases` | Permissions for [index aliases]({{site.url}}{{site.baseurl}}/im-plugin/index-alias/). | | `indices:admin/aliases/get` | Permission to get [index aliases]({{site.url}}{{site.baseurl}}/im-plugin/index-alias/). | | `indices:admin/analyze` | Permission to use the [Analyze API]({{site.url}}{{site.baseurl}}/api-reference/analyze-apis/). | @@ -415,7 +416,7 @@ These permissions apply to an index or index pattern. You might want a user to h | `indices:admin/refresh*` | Permission to use the index refresh API. | | `indices:admin/resolve/index` | Permission to resolve index names, index aliases and data streams. | | `indices:admin/rollover` | Permission to perform [index rollover]({{site.url}}{{site.baseurl}}/dashboards/im-dashboards/rollover/). | -| `indices:admin/seq_no/global_checkpoint_sync` | Permission to perform a global checkpoint sync. 
| +| `indices:admin/seq_no/global_checkpoint_sync` | Permission to perform a global checkpoint sync. | | `indices:admin/settings/update` | Permission to [update index settings]({{site.url}}{{site.baseurl}}/api-reference/index-apis/update-settings/). | | `indices:admin/shards/search_shards` | Permission to perform [cross cluster search]({{site.url}}{{site.baseurl}}/security/access-control/cross-cluster-search/). | | `indices:admin/template/delete` | Permission to [delete index templates]({{site.url}}{{site.baseurl}}/im-plugin/index-templates/#delete-a-template). | @@ -426,18 +427,18 @@ These permissions apply to an index or index pattern. You might want a user to h | `indices:data/read/explain` | Permission to run the [Explain API]({{site.url}}{{site.baseurl}}/api-reference/explain/). | | `indices:data/read/field_caps` | Permission to run the [Field Capabilities API]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/alias/#using-aliases-in-field-capabilities-api-operations). | | `indices:data/read/field_caps*` | Permission to run the Field Capabilities API. | -| `indices:data/read/get` | Permission to read index data. | +| `indices:data/read/get` | Permission to read index data. | | `indices:data/read/mget` | Permission to run [multiple GET operations]({{site.url}}{{site.baseurl}}/api-reference/document-apis/multi-get/) in one request. | -| `indices:data/read/mget*` | Permission to run multiple GET operations in one request. | +| `indices:data/read/mget*` | Permission to run multiple GET operations in one request. | | `indices:data/read/msearch` | Permission to run [multiple search]({{site.url}}{{site.baseurl}}/api-reference/multi-search/) requests into a single request. | | `indices:data/read/msearch/template` | Permission to bundle [multiple search templates]({{site.url}}{{site.baseurl}}/api-reference/search-template/#multiple-search-templates) and send them to your OpenSearch cluster in a single request. 
| | `indices:data/read/mtv` | Permission to retrieve multiple term vectors with a single request. | | `indices:data/read/mtv*` | Permission to retrieve multiple term vectors with a single request. | | `indices:data/read/plugins/replication/file_chunk` | Permission to check files during segment replication. | -| `indices:data/read/plugins/replication/changes` | Permission to make changes to segment replication settings. | +| `indices:data/read/plugins/replication/changes` | Permission to make changes to segment replication settings. | | `indices:data/read/scroll` | Permission to scroll data. | | `indices:data/read/scroll/clear` | Permission to clear read scroll data. | -| `indices:data/read/search` | Permission to [search]({{site.url}}{{site.baseurl}}/api-reference/search/) data.| +| `indices:data/read/search` | Permission to [search]({{site.url}}{{site.baseurl}}/api-reference/search/) data. | | `indices:data/read/search*` | Permission to search data. | | `indices:data/read/search/template` | Permission to read a search template. | | `indices:data/read/tv` | Permission to retrieve information and statistics for terms in the fields of a particular document. | @@ -445,8 +446,8 @@ These permissions apply to an index or index pattern. You might want a user to h | `indices:data/write/bulk*` | Permission to run a bulk request. | | `indices:data/write/delete` | Permission to [delete documents]({{site.url}}{{site.baseurl}}/api-reference/document-apis/delete-document/). | | `indices:data/write/delete/byquery` | Permission to delete all documents that [match a query]({{site.url}}{{site.baseurl}}/api-reference/document-apis/delete-by-query/). | -| `indices:data/write/plugins/replication/changes` | | -| `indices:data/write/index` | Permission to add documents to existing indexes. 
See also [Index document]( {{site.url}}{{site.baseurl}}/api-reference/document-apis/index-document/ ) | +| `indices:data/write/plugins/replication/changes` | Permission to make changes to data replication configurations and settings within indices. | +| `indices:data/write/index` | Permission to add documents to existing indexes. See also [Index document]( {{site.url}}{{site.baseurl}}/api-reference/document-apis/index-document/ ). | | `indices:data/write/reindex` | Permission to run a [reindex]({{site.url}}{{site.baseurl}}/im-plugin/reindex-data/). | | `indices:data/write/update` | Permission to update an index. | | `indices:data/write/update/byquery` | Permission to run the script to update all of the documents that [match the query]({{site.url}}{{site.baseurl}}/api-reference/document-apis/update-by-query/). | @@ -457,6 +458,7 @@ These permissions apply to an index or index pattern. You might want a user to h | `indices:monitor/shard_stores` | Permission to access shard store stats. | | `indices:monitor/stats` | Permission to access monitoring stats. | | `indices:monitor/upgrade` | Permission to access upgrade stats. | + From 4f6a11f3c7431bf37082ab620cba3b645527bc35 Mon Sep 17 00:00:00 2001 From: Pawel Wlodarczyk Date: Wed, 6 Mar 2024 16:39:10 +0000 Subject: [PATCH 20/23] Audit logging initial state description. 
(#6570) * Update index.md Signed-off-by: Pawel Wlodarczyk * Update index.md Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> * Apply suggestions from code review Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --------- Signed-off-by: Pawel Wlodarczyk Signed-off-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _security/audit-logs/index.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/_security/audit-logs/index.md b/_security/audit-logs/index.md index 79c0d674a1..becb001ec0 100644 --- a/_security/audit-logs/index.md +++ b/_security/audit-logs/index.md @@ -26,7 +26,7 @@ redirect_from: Audit logs let you track access to your OpenSearch cluster and are useful for compliance purposes or in the aftermath of a security breach. You can configure the categories to be logged, the detail level of the logged messages, and where to store the logs. -To enable audit logging: +Audit logging is disabled by default. To enable audit logging: 1. Add the following line to `opensearch.yml` on each node: @@ -220,3 +220,7 @@ The default setting is `10`. Setting this value to `0` disables the thread pool, plugins.security.audit.config.threadpool.max_queue_len: 100000 ``` +## Disabling audit logs + +To disable audit logs after they've been enabled, remove the `plugins.security.audit.type: internal_opensearch` setting from `opensearch.yml`, or switch off the **Enable audit logging** check box in OpenSearch Dashboards. 
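Taken together, the audit logging lifecycle described in the patch above comes down to settings that the changed page itself quotes. A minimal `opensearch.yml` sketch, assuming the defaults discussed on that page:

```yml
# Enable audit logging; remove this line (or clear the
# "Enable audit logging" check box in OpenSearch Dashboards)
# to disable it again.
plugins.security.audit.type: internal_opensearch

# Optional: cap the audit thread pool queue length.
# Setting this to 0 disables the thread pool.
plugins.security.audit.config.threadpool.max_queue_len: 100000
```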
+ From 7b5cc3e38fc24216e21f1277c2ed6f9cc63c9db2 Mon Sep 17 00:00:00 2001 From: Stavros Macrakis <134456002+smacrakis@users.noreply.github.com> Date: Wed, 6 Mar 2024 13:59:21 -0500 Subject: [PATCH 21/23] "Linux and MacOS" => "Linux or MacOS" (#6576) checked against the detail page on installation Signed-off-by: Stavros Macrakis <134456002+smacrakis@users.noreply.github.com> Co-authored-by: Naarcha-AWS <97990722+Naarcha-AWS@users.noreply.github.com> --- _benchmark/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_benchmark/index.md b/_benchmark/index.md index 5b0d3fb463..25b3738e7d 100644 --- a/_benchmark/index.md +++ b/_benchmark/index.md @@ -18,7 +18,7 @@ OpenSearch Benchmark is a macrobenchmark utility provided by the [OpenSearch Pro - Informing decisions about when to upgrade your cluster to a new version. - Determining how changes to your workflow---such as modifying mappings or queries---might impact your cluster. -OpenSearch Benchmark can be installed directly on a compatible host running Linux and macOS. You can also run OpenSearch Benchmark in a Docker container. See [Installing OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/installing-benchmark/) for more information. +OpenSearch Benchmark can be installed directly on a compatible host running Linux or macOS. You can also run OpenSearch Benchmark in a Docker container. See [Installing OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/installing-benchmark/) for more information. 
The following diagram visualizes how OpenSearch Benchmark works when run against a local host: From c08cd989524698043dfa2f587ae49a4ad2b860d4 Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Wed, 6 Mar 2024 14:07:03 -0500 Subject: [PATCH 22/23] Fix YAML error in time filter file (#6622) Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _dashboards/discover/time-filter.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_dashboards/discover/time-filter.md b/_dashboards/discover/time-filter.md index 8fad57cfba..288138d079 100644 --- a/_dashboards/discover/time-filter.md +++ b/_dashboards/discover/time-filter.md @@ -5,7 +5,7 @@ parent: Analyzing data nav_order: 20 redirect_from: - /dashboards/get-started/time-filter/ - -/dashboards/discover/time-filter/ + - /dashboards/discover/time-filter/ --- # Time filter From a742d47b013c112f8b6739a432763a51518a3681 Mon Sep 17 00:00:00 2001 From: Melissa Vagi Date: Wed, 6 Mar 2024 12:19:41 -0700 Subject: [PATCH 23/23] Alphabetize by title (#6608) * Alphabetize by title Signed-off-by: Melissa Vagi * Alphabetize by title Signed-off-by: Melissa Vagi --------- Signed-off-by: Melissa Vagi --- _data-prepper/common-use-cases/anomaly-detection.md | 2 +- .../common-use-cases/codec-processor-combinations.md | 2 +- _data-prepper/common-use-cases/event-aggregation.md | 2 +- _data-prepper/common-use-cases/log-analytics.md | 2 +- _data-prepper/common-use-cases/log-enrichment.md | 2 +- _data-prepper/common-use-cases/metrics-traces.md | 2 +- _data-prepper/common-use-cases/s3-logs.md | 2 +- _data-prepper/common-use-cases/text-processing.md | 2 +- _data-prepper/common-use-cases/trace-analytics.md | 6 +++--- 9 files changed, 11 insertions(+), 11 deletions(-) diff --git a/_data-prepper/common-use-cases/anomaly-detection.md b/_data-prepper/common-use-cases/anomaly-detection.md index 029ff360d0..e7003558f1 100644 --- 
a/_data-prepper/common-use-cases/anomaly-detection.md +++ b/_data-prepper/common-use-cases/anomaly-detection.md @@ -2,7 +2,7 @@ layout: default title: Anomaly detection parent: Common use cases -nav_order: 30 +nav_order: 5 --- # Anomaly detection diff --git a/_data-prepper/common-use-cases/codec-processor-combinations.md b/_data-prepper/common-use-cases/codec-processor-combinations.md index ae1209e973..57185f2ce9 100644 --- a/_data-prepper/common-use-cases/codec-processor-combinations.md +++ b/_data-prepper/common-use-cases/codec-processor-combinations.md @@ -2,7 +2,7 @@ layout: default title: Codec processor combinations parent: Common use cases -nav_order: 25 +nav_order: 10 --- # Codec processor combinations diff --git a/_data-prepper/common-use-cases/event-aggregation.md b/_data-prepper/common-use-cases/event-aggregation.md index b0ee13c935..f6e2757d9a 100644 --- a/_data-prepper/common-use-cases/event-aggregation.md +++ b/_data-prepper/common-use-cases/event-aggregation.md @@ -2,7 +2,7 @@ layout: default title: Event aggregation parent: Common use cases -nav_order: 40 +nav_order: 25 --- # Event aggregation diff --git a/_data-prepper/common-use-cases/log-analytics.md b/_data-prepper/common-use-cases/log-analytics.md index e8db781714..30a021b101 100644 --- a/_data-prepper/common-use-cases/log-analytics.md +++ b/_data-prepper/common-use-cases/log-analytics.md @@ -2,7 +2,7 @@ layout: default title: Log analytics parent: Common use cases -nav_order: 10 +nav_order: 30 --- # Log analytics diff --git a/_data-prepper/common-use-cases/log-enrichment.md b/_data-prepper/common-use-cases/log-enrichment.md index c3cb7e5f03..b4004251c6 100644 --- a/_data-prepper/common-use-cases/log-enrichment.md +++ b/_data-prepper/common-use-cases/log-enrichment.md @@ -2,7 +2,7 @@ layout: default title: Log enrichment parent: Common use cases -nav_order: 50 +nav_order: 35 --- # Log enrichment diff --git a/_data-prepper/common-use-cases/metrics-traces.md 
b/_data-prepper/common-use-cases/metrics-traces.md index 14971c6f03..c15eaa099b 100644 --- a/_data-prepper/common-use-cases/metrics-traces.md +++ b/_data-prepper/common-use-cases/metrics-traces.md @@ -2,7 +2,7 @@ layout: default title: Deriving metrics from traces parent: Common use cases -nav_order: 60 +nav_order: 20 --- # Deriving metrics from traces diff --git a/_data-prepper/common-use-cases/s3-logs.md b/_data-prepper/common-use-cases/s3-logs.md index 2987c9a677..7986a7eef8 100644 --- a/_data-prepper/common-use-cases/s3-logs.md +++ b/_data-prepper/common-use-cases/s3-logs.md @@ -2,7 +2,7 @@ layout: default title: S3 logs parent: Common use cases -nav_order: 20 +nav_order: 40 --- # S3 logs diff --git a/_data-prepper/common-use-cases/text-processing.md b/_data-prepper/common-use-cases/text-processing.md index 54b436644e..041ca63ab2 100644 --- a/_data-prepper/common-use-cases/text-processing.md +++ b/_data-prepper/common-use-cases/text-processing.md @@ -2,7 +2,7 @@ layout: default title: Text processing parent: Common use cases -nav_order: 35 +nav_order: 55 --- # Text processing diff --git a/_data-prepper/common-use-cases/trace-analytics.md b/_data-prepper/common-use-cases/trace-analytics.md index 9067ce49b7..1f6c3b7cc4 100644 --- a/_data-prepper/common-use-cases/trace-analytics.md +++ b/_data-prepper/common-use-cases/trace-analytics.md @@ -2,7 +2,7 @@ layout: default title: Trace analytics parent: Common use cases -nav_order: 5 +nav_order: 60 --- # Trace analytics @@ -15,7 +15,7 @@ When using Data Prepper as a server-side component to collect trace data, you ca The following flowchart illustrates the trace analytics workflow, from running OpenTelemetry Collector to using OpenSearch Dashboards for visualization. 
-Trace analyticis component overview{: .img-fluid} +Trace analytics component overview{: .img-fluid} To monitor trace analytics, you need to set up the following components in your service environment: - Add **instrumentation** to your application so it can generate telemetry data and send it to an OpenTelemetry collector. @@ -322,7 +322,7 @@ For other configurations available for OpenSearch sinks, see [Data Prepper OpenS ## OpenTelemetry Collector -You need to run OpenTelemetry Collector in your service environment. Follow [Getting Started](https://opentelemetry.io/docs/collector/getting-started/#getting-started) to install an OpenTelemetry collector. Ensure that you configure the collector with an exporter configured for your Data Prepper instance. The following example `otel-collector-config.yaml` file receives data from various instrumentations and exports it to Data Prepper. +You need to run OpenTelemetry Collector in your service environment. Follow [Getting Started](https://opentelemetry.io/docs/collector/getting-started/#getting-started) to install an OpenTelemetry collector. Ensure that you configure the collector with an exporter configured for your Data Prepper instance. The following example `otel-collector-config.yaml` file receives data from various instrumentations and exports it to Data Prepper. ### Example otel-collector-config.yaml file
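For orientation, a minimal collector configuration of the kind this section describes might look as follows. This is a hedged sketch rather than the page's own example: the `localhost:21890` endpoint assumes Data Prepper's default `otel_trace_source` port, the `otlp/data-prepper` exporter name is illustrative, and the insecure TLS setting is suitable for local testing only:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp/data-prepper:
    endpoint: localhost:21890
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
```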