From cb5ed15b29831a1746cc73e2fb791328f3e70f54 Mon Sep 17 00:00:00 2001
From: Sruti Parthiban
Date: Mon, 24 Aug 2020 16:47:30 -0700
Subject: [PATCH] Merge from master to 1.10 release (#181)

* Fix NullPointerException when PA starts collecting metrics before master node is fully up
* Fix unit tests on Mac. Fix NPE during MasterServiceEventMetrics collection.
* Reorder imports, refactor unit tests
* add RFC for RCA
* Create README.md
* Split Elasticsearch version independent code (#75)
This commit has 3 major changes -
1) Performance Analyzer code that is Elasticsearch version independent.
3) Performance optimization in the Elasticsearch plugin to emit events to a single event log file. This brings down CPU utilization by an order of magnitude on large clusters.
* Update Performance Analyzer to support Elasticsearch version 7.3.2
* This commit merges some of the fixes and features that should have been in the split version of PA.
Features and fixes introduced in this PR:
Allow Performance Analyzer to be en(dis)abled through a cluster setting across the cluster.
Allow logging to be controlled through the cluster setting.
Capture a node's role along with the host address and the node name.
Checkstyle compliance.
Some issues that are still not addressed in this PR:
Update build scripts to start the agent from the reader location instead of the plugin location.
Remove pa_config, pa_bin and other folders that are already present in the reader.
* Adding the dependency on the renamed jar performanceanalyzer-rca from performanceanalyzer
* Delete unused test class
Remove the NewFormatProcessorTest class, which is not used.
* Adding shardsPerCollection REST API to update the shards per collection in the node stats collector (#83)
* Update gradle wrapper
* Add isMasterNode to NodeDetailsStatus (#84)
* make the unit test backward compatible with the isMasterNode in NodeDetailsStatus
* Create gradle.yml (#87)
* Create gradle.yml
* Update gradle.yml
* Update gradle.yml
* Added the bouncy castle jars
* Added the licenses file
* Add cd.yml and enable CD pipeline to upload artifact to S3 (#90)
* add cd.yml
* upgrade ospackage version to 8.2.0
* change s3
* update cd.yml
* removing the -i flag
* fix a bug in StatsTests.java (#97)
* Update CONTRIBUTORS.md
* Update CONTRIBUTORS.md
* We must handle all exceptions while intercepting ES requests (#99)
* Making sure that we don't throw exceptions while intercepting ES requests
PerformanceAnalyzer intercepts various ES request paths to get detailed metrics. But today, if we throw an exception, it will bubble all the way up to ES and fail the request.
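The pattern behind this fix is straightforward: any Performance Analyzer bookkeeping that runs on the request path gets wrapped so that a failure inside PA can never fail the intercepted request. The sketch below only illustrates that pattern; the names (SafeInterceptor, messageReceived, metricsRecorder) are hypothetical and this is not the plugin's actual transport interceptor code.

    import java.util.function.Consumer;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Illustrative sketch only: PA-style interception that never lets its own
    // failures propagate to the request it wraps.
    final class SafeInterceptor<R> {
        private static final Logger LOG = Logger.getLogger("SafeInterceptor");

        private final Consumer<R> actualHandler;   // the real ES request handler
        private final Consumer<R> metricsRecorder; // PA bookkeeping, which may throw

        SafeInterceptor(Consumer<R> actualHandler, Consumer<R> metricsRecorder) {
            this.actualHandler = actualHandler;
            this.metricsRecorder = metricsRecorder;
        }

        void messageReceived(R request) {
            try {
                // Record metrics first, but swallow every failure: a broken
                // collector must not turn into a failed ES request.
                metricsRecorder.accept(request);
            } catch (Throwable t) {
                LOG.log(Level.WARNING, "Dropping performance-analyzer metrics for this request", t);
            }
            // Always delegate to the real handler, whether or not metrics collection succeeded.
            actualHandler.accept(request);
        }
    }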
* Addressing the PR comments
* Updating the .gitignore
* style changes
* Adding Shard Size Metric as a part of Node Stats (#101)
* Adding Shard Size Metric as a part of Node Stats
* removing the -i flag
* fix a bug in StatsTests.java (#97)
* Update CONTRIBUTORS.md
* Update CONTRIBUTORS.md
* Addressing Typos
Co-authored-by: Aditya Jindal
Co-authored-by: Joydeep Sinha <49728262+yojs@users.noreply.github.com>
Co-authored-by: Ruizhen Guo <55893852+rguo-aws@users.noreply.github.com>
Co-authored-by: Balaji
* collect queue latency metric in PerformanceAnalyzer (#111)
Authored-By: rguo-aws
* Remove unnecessary string formatting (#112)
* Odfe it framework release (#107)
* ODFE IT Framework POC
* Testing to see if Dockerstuff is set up
* Modified workflow to set DOCKER_COMPOSE_LOCATION
* Modify workflow to include stacktrace and no symbolic linkage
* Set DOCKER_COMPOSE_LOCATION using set-env
* Try set-env in a different location
* Attempt to fix docker-compose set-env
* Make workflow set vm.max_map_count
* Use sudo when setting vm.max_map_count
* Make performance-analyzer execute integration tests on checkin
* Clean up PerformanceAnalyzerIT and build.gradle script
* Add newline to end of gradle.properties file
* Modify gradle.yml and checkMetrics
* Fix ObjectMapper allocation and move TestUtils definition
Co-authored-by: Sid Narayan
* Add github badges (#114)
* Add github badges
* Add github badges
* Run integ tests as part of git workflow instead of build (#115)
This commit makes it so that you can build PA without running integration tests. This is useful for many reasons, including being able to build RCA without depending on PA's integration tests and dramatically reducing build times. Integration testing has been added as part of the GitHub Actions workflow.
* Fixup ITs and binding issues (#119)
This commit makes IT execution much more robust and only executes ITs if the user passes the -Dtests.enableIT flag to the Gradle environment. This commit also ensures that we bind to all interfaces when we spin up a local Docker cluster for testing.
* collect queue capacity on writer (#118)
* Pa build fix (#122)
* Remove * junit import
* Fix logger usage
* Ignore JsonKeyTests
* Remove sed operation from build.gradle
The sed logic is now baked into the Dockerfile in performance-analyzer-rca, so it's no longer necessary here.
* Restore JsonKeyTests
* PA will no longer crash when SecurityManager says no (#113)
* PA will no longer crash when SecurityManager says no
PA attempts to set the default SSL Socket Factory (which defines rules for the SSL Sockets it creates) as well as default hostname verification rules when it is initialized by the Elasticsearch plugin loader. However, this behavior would result in an AccessControlException when run alongside the opendistro-security plugin. This commit is a simple fix which allows these two plugins to work together.
* Update logging to WARN level
* Calculate rejection increase and emit the delta increase of rejection as metric (#124)
* Enable spotbugs, address spotbug warnings (#126)
* Fix cluster state when pa is enabled from controller (#125)
* Fix cluster state when pa is Enabled from controller
* Add license info
* Move PA files to subdir owned by elasticsearch user (#146)
* IT improvements (#143)
* Use true/false instead of null/present for integTest props
integTest is a gradle task which runs our integration tests.
It uses system properties like -Dtests.useDockerCluster to decide whether or not to perform certain actions, like spinning up a docker cluster for testing. The task would previously perform the property's action if the property was present. This commit makes the integTest task only execute a system property's action if that property is set to "true".
* Make IT port number configurable
The PerformanceAnalyzerIT class previously assumed that the Performance Analyzer webservice would always be listening on port 9600 for any deployment of PerformanceAnalyzer. Since this isn't always the case, this commit makes the port number configurable through a gradle property.
* Allow logging to be enabled for ensurePaAndRcaEnabled
* cache max size metric collector (#145)
* Adding changes to collect Cache Max Size metric
* Updating the Cache Max Size Dimension to use toString (#153)
* Fixing checkstyle build failure (#158)
* Add an IT which verifies that the RCA REST endpoint can be queried (#157)
* Add an IT which verifies that the RCA REST endpoint can be queried
* Add try-catch to handle 404 exceptions
* Add initial support for dynamic config overriding (#148)
* Add initial support for dynamic config overriding
* Use helper to serialize/deserialize instead of the wrapper
* Add licence header to new files
* Update licence year to 2020
* Node collector split (#162)
The node collector is split based on the metrics that are required for all the shards on the node versus those that can be collected for only a limited number of shards per iteration. Built the jar from this patch and applied it on the AES cluster. The cache-related metrics, which should be collected for all shards irrespective of the shardsPerCollection value, are getting collected. Tested with a zero value of this parameter (shardsPerCollection).
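A minimal sketch of that split, using hypothetical names (ShardMetricsScheduler, collectLightweight, collectHeavy) rather than the actual NodeStatsAllShardsMetricsCollector/NodeStatsFixedShardsMetricsCollector classes: lightweight metrics are gathered for every shard on each run, while heavier stats are limited to shardsPerCollection shards per iteration, resuming where the previous run stopped; a value of zero skips the heavier pass entirely.

    import java.util.List;

    // Illustrative sketch only: the "all shards" vs. "fixed shards per iteration" split.
    final class ShardMetricsScheduler {
        private final int shardsPerCollection; // 0 means: collect heavy stats for no shards
        private int cursor = 0;                // where the previous iteration stopped

        ShardMetricsScheduler(int shardsPerCollection) {
            this.shardsPerCollection = shardsPerCollection;
        }

        void collect(List<String> shardIds) {
            if (shardIds.isEmpty()) {
                return;
            }
            // Cheap metrics (e.g. cache hit/miss/size) for every shard, on every run.
            for (String shardId : shardIds) {
                collectLightweight(shardId);
            }
            // Heavier metrics for at most shardsPerCollection shards per run,
            // rotating through the shard list across runs.
            cursor = cursor % shardIds.size();
            for (int i = 0; i < Math.min(shardsPerCollection, shardIds.size()); i++) {
                collectHeavy(shardIds.get(cursor));
                cursor = (cursor + 1) % shardIds.size();
            }
        }

        private void collectLightweight(String shardId) { /* read per-shard cache counters */ }

        private void collectHeavy(String shardId) { /* read full node-stats for this shard */ }
    }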
* Use the correct ctor for NodeDetailsCollector (#166) * Use the correct ctor for NodeDetailsCollector * Check for null ConfigOverrides wrapper while appending timestamps * Add unit test for null cluster setting (#167) * Use the correct ctor for NodeDetailsCollector * Check for null ConfigOverrides wrapper while appending timestamps * Add unit test for null cluster setting for config overrides * Split capacity/latency collecting logic into separate try/catch block (#168) * Update PULL_REQUEST_TEMPLATE.md * Fix invalid cluster state (#172) * Fix invalid cluster state * Address PR comments * Skip RCA tests when building PA in Github workflows (#177) * Build against elasticsearch 7.9 and resolve dependency conflicts * Add licenses for dependencies * Add licenses for dependencies * Change minor version * Add release notes and contributors * Changed licenses * Modify github workflows * Fix merge conflicts * Fix jarHell around log4j Co-authored-by: Karthik Kumarguru <52506191+ktkrg@users.noreply.github.com> Co-authored-by: Karthik Kumarguru Co-authored-by: Partha Kanuparthy Co-authored-by: Partha Kanuparthy <40440819+aesgithub@users.noreply.github.com> Co-authored-by: Adithya Chandra Co-authored-by: Venkata Jyothsna Donapati Co-authored-by: Palash Hedau Co-authored-by: Joydeep Sinha Co-authored-by: Balaji Co-authored-by: khushbr <59671881+khushbr@users.noreply.github.com> Co-authored-by: Chandra Co-authored-by: Pardeep Singh <56094865+spardeepsingh@users.noreply.github.com> Co-authored-by: Ruizhen Co-authored-by: Joydeep Sinha <49728262+yojs@users.noreply.github.com> Co-authored-by: Joydeep Sinha Co-authored-by: Ruizhen Guo <55893852+rguo-aws@users.noreply.github.com> Co-authored-by: Aditya Jindal Co-authored-by: Aditya Jindal Co-authored-by: Ricardo L. Stephen <43506361+ricardolstephen@users.noreply.github.com> Co-authored-by: Sid Narayan Co-authored-by: Sid Narayan Co-authored-by: Peter Zhu --- .github/workflows/gradle.yml | 2 +- build.gradle | 4 +- licenses/commons-lang3-NOTICE.txt | 34 +++ licenses/log4j-api-2.13.0.jar.sha1 | 1 - licenses/log4j-api-LICENSE.txt | 202 -------------- licenses/log4j-api-NOTICE.txt | 0 licenses/log4j-core-2.13.0.jar.sha1 | 1 - licenses/log4j-core-LICENSE.txt | 202 -------------- licenses/log4j-core-NOTICE.txt | 0 .../performanceanalyzer-rca-1.10.jar.sha1 | 2 +- pa_config/supervisord.conf | 8 +- .../PerformanceAnalyzerPlugin.java | 67 +++-- .../CacheConfigMetricsCollector.java | 131 +++++++++ .../collectors/NodeDetailsCollector.java | 31 ++- .../NodeStatsAllShardsMetricsCollector.java | 250 ++++++++++++++++++ ...NodeStatsFixedShardsMetricsCollector.java} | 229 ++++------------ .../ThreadPoolMetricsCollector.java | 39 +-- .../setting/ClusterSettingsManager.java | 100 +++++-- .../PerformanceAnalyzerClusterSettings.java | 13 +- .../ConfigOverridesClusterSettingHandler.java | 204 ++++++++++++++ ...formanceAnalyzerClusterSettingHandler.java | 17 +- ...erformanceAnalyzerClusterConfigAction.java | 186 +++++++------ .../PerformanceAnalyzerConfigAction.java | 217 ++++++++------- ...eAnalyzerOverridesClusterConfigAction.java | 208 +++++++++++++++ .../PerformanceAnalyzerResourceProvider.java | 71 ++--- .../http_action/whoami/WhoAmIAction.java | 2 +- .../performanceanalyzer/util/Utils.java | 88 +++++- .../PerformanceAnalyzerIT.java | 29 +- .../collectors/JsonKeyTests.java | 33 ++- ...eStatsAllShardsMetricsCollectorTests.java} | 18 +- ...StatsFixedShardsMetricsCollectorTests.java | 78 ++++++ .../config/ConfigOverridesTestHelper.java | 57 ++++ 
...anceAnalyzerClusterSettingHandlerTest.java | 13 +- ...igOverridesClusterSettingHandlerTests.java | 171 ++++++++++++ .../hwnet/CollectMetricsTests.java | 87 +++--- .../reader/AbstractReaderTests.java | 66 ++--- 36 files changed, 1879 insertions(+), 982 deletions(-) delete mode 100644 licenses/log4j-api-2.13.0.jar.sha1 delete mode 100644 licenses/log4j-api-LICENSE.txt delete mode 100644 licenses/log4j-api-NOTICE.txt delete mode 100644 licenses/log4j-core-2.13.0.jar.sha1 delete mode 100644 licenses/log4j-core-LICENSE.txt delete mode 100644 licenses/log4j-core-NOTICE.txt create mode 100644 src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/CacheConfigMetricsCollector.java create mode 100644 src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollector.java rename src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/{NodeStatsMetricsCollector.java => NodeStatsFixedShardsMetricsCollector.java} (61%) create mode 100644 src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandler.java create mode 100644 src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerOverridesClusterConfigAction.java rename src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/{NodeStatsMetricsCollectorTests.java => NodeStatsAllShardsMetricsCollectorTests.java} (74%) create mode 100644 src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollectorTests.java create mode 100644 src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/ConfigOverridesTestHelper.java create mode 100644 src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandlerTests.java diff --git a/.github/workflows/gradle.yml b/.github/workflows/gradle.yml index bc9f0640..9224e62a 100644 --- a/.github/workflows/gradle.yml +++ b/.github/workflows/gradle.yml @@ -34,7 +34,7 @@ jobs: run: ./gradlew publishToMavenLocal - name: Build PA gradle using the new RCA jar working-directory: ./tmp/pa - run: rm licenses/performanceanalyzer-rca-1.3.jar.sha1 + run: rm licenses/performanceanalyzer-rca-1.10.jar.sha1 - name: Update SHA working-directory: ./tmp/pa run: ./gradlew updateShas diff --git a/build.gradle b/build.gradle index fae53ec9..419175ce 100644 --- a/build.gradle +++ b/build.gradle @@ -145,10 +145,10 @@ dependencies { compile 'com.amazon.opendistro.elasticsearch:performanceanalyzer-rca:1.10' compile 'com.fasterxml.jackson.core:jackson-annotations:2.10.4' compile 'com.fasterxml.jackson.core:jackson-databind:2.10.4' - compile(group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.13.0') { + compile(group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.11.1') { force = 'true' } - compile(group: 'org.apache.logging.log4j', name: 'log4j-core', version: '2.13.0') { + compile(group: 'org.apache.logging.log4j', name: 'log4j-core', version: '2.11.1') { force = 'true' } diff --git a/licenses/commons-lang3-NOTICE.txt b/licenses/commons-lang3-NOTICE.txt index e69de29b..544c52ba 100644 --- a/licenses/commons-lang3-NOTICE.txt +++ b/licenses/commons-lang3-NOTICE.txt @@ -0,0 +1,34 @@ +Licensed to the Apache Software Foundation (ASF) under one or more +contributor license agreements. 
See the NOTICE file distributed with +this work for additional information regarding copyright ownership. +The ASF licenses this file to You under the Apache License, Version 2.0 +(the "License"); you may not use this file except in compliance with +the License. You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. + +============================================================================= + + Commons Lang Package + Version 3.0 + Release Notes + + +INTRODUCTION: + +This document contains the release notes for the 3.0 version of Apache Commons Lang. +Commons Lang is a set of utility functions and reusable components that should be of use in any +Java environment. + +Lang 3.0 now targets Java 5.0, making use of features that arrived with Java 5.0 such as generics, +variable arguments, autoboxing, concurrency and formatted output. + +For the latest advice on upgrading, see the following page: + + https://commons.apache.org/lang/article3_0.html \ No newline at end of file diff --git a/licenses/log4j-api-2.13.0.jar.sha1 b/licenses/log4j-api-2.13.0.jar.sha1 deleted file mode 100644 index 1948b83a..00000000 --- a/licenses/log4j-api-2.13.0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -0adaee84c60e0705cf39df8b9fedfab100d3faf2 \ No newline at end of file diff --git a/licenses/log4j-api-LICENSE.txt b/licenses/log4j-api-LICENSE.txt deleted file mode 100644 index 98a324cf..00000000 --- a/licenses/log4j-api-LICENSE.txt +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). 
- - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 1999-2005 The Apache Software Foundation - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. \ No newline at end of file diff --git a/licenses/log4j-api-NOTICE.txt b/licenses/log4j-api-NOTICE.txt deleted file mode 100644 index e69de29b..00000000 diff --git a/licenses/log4j-core-2.13.0.jar.sha1 b/licenses/log4j-core-2.13.0.jar.sha1 deleted file mode 100644 index faca7482..00000000 --- a/licenses/log4j-core-2.13.0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -57b8b57dac4c87696acb4b8457fd8cbf4273d40d \ No newline at end of file diff --git a/licenses/log4j-core-LICENSE.txt b/licenses/log4j-core-LICENSE.txt deleted file mode 100644 index 98a324cf..00000000 --- a/licenses/log4j-core-LICENSE.txt +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. 
- - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright 1999-2005 The Apache Software Foundation - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
\ No newline at end of file diff --git a/licenses/log4j-core-NOTICE.txt b/licenses/log4j-core-NOTICE.txt deleted file mode 100644 index e69de29b..00000000 diff --git a/licenses/performanceanalyzer-rca-1.10.jar.sha1 b/licenses/performanceanalyzer-rca-1.10.jar.sha1 index e469e97b..e35ad80d 100644 --- a/licenses/performanceanalyzer-rca-1.10.jar.sha1 +++ b/licenses/performanceanalyzer-rca-1.10.jar.sha1 @@ -1 +1 @@ -e23368cc75483fa07bf2d727c639dbd3ef448dbf \ No newline at end of file +9c4d17774058b44a6f2ee6b9752c4f4254414784 diff --git a/pa_config/supervisord.conf b/pa_config/supervisord.conf index 65792c10..7920ace4 100644 --- a/pa_config/supervisord.conf +++ b/pa_config/supervisord.conf @@ -1,13 +1,13 @@ ; supervisor config file [unix_http_server] -file=/usr/share/supervisor/supervisord.sock +file=/usr/share/supervisor/performance_analyzer/supervisord.sock chmod=0770 [supervisord] -logfile=/usr/share/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log) -pidfile=/usr/share/supervisor/supervisord.pid ; (supervisord pidfile;default supervisord.pid) -childlogdir=/usr/share/supervisor ; ('AUTO' child log dir, default $TEMP) +logfile=/usr/share/supervisor/performance_analyzer/supervisord.log ; (main log file;default $CWD/supervisord.log) +pidfile=/usr/share/supervisor/performance_analyzer/supervisord.pid ; (supervisord pidfile;default supervisord.pid) +childlogdir=/usr/share/supervisor/performance_analyzer ; ('AUTO' child log dir, default $TEMP) ; the below section must remain in the config file for RPC ; (supervisorctl/web interface) to work, additional interfaces may be diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerPlugin.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerPlugin.java index ca871d30..5db5ed80 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerPlugin.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerPlugin.java @@ -15,7 +15,11 @@ package com.amazon.opendistro.elasticsearch.performanceanalyzer; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesWrapper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler.ConfigOverridesClusterSettingHandler; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.CacheConfigMetricsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler.NodeStatsSettingHandler; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.http_action.config.PerformanceAnalyzerOverridesClusterConfigAction; import com.amazon.opendistro.elasticsearch.performanceanalyzer.http_action.config.PerformanceAnalyzerResourceProvider; import java.io.File; import java.security.AccessController; @@ -78,7 +82,8 @@ import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.ScheduledMetricCollectorsExecutor; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.StatsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.ThreadPoolMetricsCollector; -import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsMetricsCollector; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsAllShardsMetricsCollector; +import 
com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsFixedShardsMetricsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.ClusterSettingsManager; import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler.PerformanceAnalyzerClusterSettingHandler; import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.PerformanceAnalyzerClusterSettings; @@ -95,6 +100,7 @@ import com.amazon.opendistro.elasticsearch.performanceanalyzer.transport.PerformanceAnalyzerTransportInterceptor; import com.amazon.opendistro.elasticsearch.performanceanalyzer.util.Utils; import com.amazon.opendistro.elasticsearch.performanceanalyzer.writer.EventLogQueueProcessor; + import static java.util.Collections.singletonList; public final class PerformanceAnalyzerPlugin extends Plugin implements ActionPlugin, NetworkPlugin, SearchPlugin { @@ -104,6 +110,8 @@ public final class PerformanceAnalyzerPlugin extends Plugin implements ActionPlu private static SecurityManager sm = null; private final PerformanceAnalyzerClusterSettingHandler perfAnalyzerClusterSettingHandler; private final NodeStatsSettingHandler nodeStatsSettingHandler; + private final ConfigOverridesClusterSettingHandler configOverridesClusterSettingHandler; + private final ConfigOverridesWrapper configOverridesWrapper; private final PerformanceAnalyzerController performanceAnalyzerController; private final ClusterSettingsManager clusterSettingsManager; @@ -152,18 +160,40 @@ public PerformanceAnalyzerPlugin(final Settings settings, final java.nio.file.Pa //initialize plugin settings. Accessing plugin settings before this //point will break, as the plugin location will not be initialized. PluginSettings.instance(); - scheduledMetricCollectorsExecutor = new ScheduledMetricCollectorsExecutor(); this.performanceAnalyzerController = new PerformanceAnalyzerController(scheduledMetricCollectorsExecutor); + + configOverridesWrapper = new ConfigOverridesWrapper(); + clusterSettingsManager = new ClusterSettingsManager(Arrays.asList(PerformanceAnalyzerClusterSettings.COMPOSITE_PA_SETTING, + PerformanceAnalyzerClusterSettings.PA_NODE_STATS_SETTING), + Collections.singletonList(PerformanceAnalyzerClusterSettings.CONFIG_OVERRIDES_SETTING)); + configOverridesClusterSettingHandler = new ConfigOverridesClusterSettingHandler(configOverridesWrapper, clusterSettingsManager, + PerformanceAnalyzerClusterSettings.CONFIG_OVERRIDES_SETTING); + clusterSettingsManager.addSubscriberForStringSetting(PerformanceAnalyzerClusterSettings.CONFIG_OVERRIDES_SETTING, + configOverridesClusterSettingHandler); + perfAnalyzerClusterSettingHandler = new PerformanceAnalyzerClusterSettingHandler(performanceAnalyzerController, + clusterSettingsManager); + clusterSettingsManager.addSubscriberForIntSetting(PerformanceAnalyzerClusterSettings.COMPOSITE_PA_SETTING, + perfAnalyzerClusterSettingHandler); + + nodeStatsSettingHandler = new NodeStatsSettingHandler(performanceAnalyzerController, + clusterSettingsManager); + clusterSettingsManager.addSubscriberForIntSetting(PerformanceAnalyzerClusterSettings.PA_NODE_STATS_SETTING, + nodeStatsSettingHandler); + scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new ThreadPoolMetricsCollector()); + scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new CacheConfigMetricsCollector()); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new CircuitBreakerCollector()); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new 
OSMetricsCollector()); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new HeapMetricsCollector()); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new MetricsPurgeActivity()); - scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new NodeDetailsCollector()); - scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new NodeStatsMetricsCollector(performanceAnalyzerController)); + scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new NodeDetailsCollector(configOverridesWrapper)); + scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new + NodeStatsAllShardsMetricsCollector(performanceAnalyzerController)); + scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new + NodeStatsFixedShardsMetricsCollector(performanceAnalyzerController)); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new MasterServiceMetrics()); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new MasterServiceEventMetrics()); scheduledMetricCollectorsExecutor.addScheduledMetricCollector(new DisksCollector()); @@ -171,22 +201,6 @@ public PerformanceAnalyzerPlugin(final Settings settings, final java.nio.file.Pa scheduledMetricCollectorsExecutor.addScheduledMetricCollector(StatsCollector.instance()); scheduledMetricCollectorsExecutor.start(); - clusterSettingsManager = new ClusterSettingsManager( - Arrays.asList(PerformanceAnalyzerClusterSettings.COMPOSITE_PA_SETTING, - PerformanceAnalyzerClusterSettings.PA_NODE_STATS_SETTING)); - - perfAnalyzerClusterSettingHandler = new PerformanceAnalyzerClusterSettingHandler( - performanceAnalyzerController, - clusterSettingsManager); - clusterSettingsManager.addSubscriberForSetting(PerformanceAnalyzerClusterSettings.COMPOSITE_PA_SETTING, - perfAnalyzerClusterSettingHandler); - - nodeStatsSettingHandler = new NodeStatsSettingHandler( - performanceAnalyzerController, - clusterSettingsManager); - clusterSettingsManager.addSubscriberForSetting(PerformanceAnalyzerClusterSettings.PA_NODE_STATS_SETTING, - nodeStatsSettingHandler); - EventLog eventLog = new EventLog(); EventLogFileHandler eventLogFileHandler = new EventLogFileHandler(eventLog, PluginSettings.instance().getMetricsLocation()); new EventLogQueueProcessor(eventLogFileHandler, @@ -230,13 +244,16 @@ public List getRestHandlers(final Settings s final SettingsFilter settingsFilter, final IndexNameExpressionResolver indexNameExpressionResolver, final Supplier nodesInCluster) { - PerformanceAnalyzerConfigAction performanceanalyzerConfigAction = new PerformanceAnalyzerConfigAction(restController, - performanceAnalyzerController); + PerformanceAnalyzerConfigAction performanceanalyzerConfigAction = new PerformanceAnalyzerConfigAction( + restController, performanceAnalyzerController); PerformanceAnalyzerConfigAction.setInstance(performanceanalyzerConfigAction); PerformanceAnalyzerResourceProvider performanceAnalyzerRp = new PerformanceAnalyzerResourceProvider(settings, restController); PerformanceAnalyzerClusterConfigAction paClusterConfigAction = new PerformanceAnalyzerClusterConfigAction(settings, restController, perfAnalyzerClusterSettingHandler, nodeStatsSettingHandler); - return Arrays.asList(performanceanalyzerConfigAction, paClusterConfigAction, performanceAnalyzerRp); + PerformanceAnalyzerOverridesClusterConfigAction paOverridesConfigClusterAction = + new PerformanceAnalyzerOverridesClusterConfigAction(settings, restController, + configOverridesClusterSettingHandler, configOverridesWrapper); + return 
Arrays.asList(performanceanalyzerConfigAction, paClusterConfigAction, performanceAnalyzerRp, paOverridesConfigClusterAction); } @Override @@ -276,9 +293,9 @@ public Map> getTransports(Settings settings, ThreadP @Override public List> getSettings() { return Arrays.asList(PerformanceAnalyzerClusterSettings.COMPOSITE_PA_SETTING, - PerformanceAnalyzerClusterSettings.PA_NODE_STATS_SETTING); + PerformanceAnalyzerClusterSettings.PA_NODE_STATS_SETTING, + PerformanceAnalyzerClusterSettings.CONFIG_OVERRIDES_SETTING); } } - diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/CacheConfigMetricsCollector.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/CacheConfigMetricsCollector.java new file mode 100644 index 00000000..502cbe00 --- /dev/null +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/CacheConfigMetricsCollector.java @@ -0,0 +1,131 @@ +/* + * Copyright <2019> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. + */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors; + +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.decisionmaker.DecisionMakerConsts.CACHE_MAX_WEIGHT; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CacheType.FIELD_DATA_CACHE; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CacheType.SHARD_REQUEST_CACHE; + +import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CacheConfigDimension; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CacheConfigValue; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsConfiguration; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsProcessor; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics; +import com.fasterxml.jackson.annotation.JsonInclude; +import com.fasterxml.jackson.annotation.JsonInclude.Include; +import com.fasterxml.jackson.annotation.JsonProperty; +import java.security.AccessController; +import java.security.PrivilegedAction; +import org.apache.commons.lang3.reflect.FieldUtils; +import org.elasticsearch.common.cache.Cache; +import org.elasticsearch.indices.IndicesService; + +/* + * Unlike Cache Hit, Miss, Eviction Count and Size, which is tracked on a per shard basis, + * the Cache Max size is a node-level static setting and thus, we need a custom collector + * (other than NodeStatsMetricsCollector which collects the per shard metrics) for this + * metric. + * + * CacheConfigMetricsCollector collects the max size for the Field Data and Shard Request + * Cache currently and can be extended for remaining cache types and any other node level + * cache metric. 
+ * + */ +public class CacheConfigMetricsCollector extends PerformanceAnalyzerMetricsCollector implements MetricsProcessor { + public static final int SAMPLING_TIME_INTERVAL = MetricsConfiguration.CONFIG_MAP.get( + CacheConfigMetricsCollector.class).samplingInterval; + private static final int KEYS_PATH_LENGTH = 0; + private StringBuilder value; + + public CacheConfigMetricsCollector() { + super(SAMPLING_TIME_INTERVAL, "CacheConfigMetrics"); + value = new StringBuilder(); + } + + @Override + public void collectMetrics(long startTime) { + IndicesService indicesService = ESResources.INSTANCE.getIndicesService(); + if (indicesService == null) { + return; + } + + value.setLength(0); + value.append(PerformanceAnalyzerMetrics.getJsonCurrentMilliSeconds()); + // This is for backward compatibility. Core ES may or may not emit maxWeight metric. + // (depending on whether the patch has been applied or not). Thus, we need to use + // reflection to check whether getMaxWeight() method exist in Cache.java + // + // Currently, we are collecting maxWeight metrics only for FieldData and Shard Request Cache. + CacheMaxSizeStatus fieldDataCacheMaxSizeStatus = AccessController.doPrivileged( + (PrivilegedAction) () -> { + try { + Cache fieldDataCache = indicesService.getIndicesFieldDataCache().getCache(); + long fieldDataMaxSize = (Long) FieldUtils.readField(fieldDataCache, CACHE_MAX_WEIGHT, true); + return new CacheMaxSizeStatus(FIELD_DATA_CACHE.toString(), fieldDataMaxSize); + } catch (Exception e) { + return new CacheMaxSizeStatus(FIELD_DATA_CACHE.toString(), null); + } + }); + value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor).append(fieldDataCacheMaxSizeStatus.serialize()); + CacheMaxSizeStatus shardRequestCacheMaxSizeStatus = AccessController.doPrivileged( + (PrivilegedAction) () -> { + try { + Object reqCache = FieldUtils.readField(indicesService, "indicesRequestCache", true); + Cache requestCache = (Cache) FieldUtils.readField(reqCache, "cache", true); + Long requestCacheMaxSize = (Long) FieldUtils.readField(requestCache, CACHE_MAX_WEIGHT, true); + return new CacheMaxSizeStatus(SHARD_REQUEST_CACHE.toString(), requestCacheMaxSize); + } catch (Exception e) { + return new CacheMaxSizeStatus(SHARD_REQUEST_CACHE.toString(), null); + } + }); + value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor).append(shardRequestCacheMaxSizeStatus.serialize()); + saveMetricValues(value.toString(), startTime); + } + + @Override + public String getMetricsPath(long startTime, String... 
keysPath) { + // throw exception if keys.length is not equal to 0 + if (keysPath.length != KEYS_PATH_LENGTH) { + throw new RuntimeException("keys length should be " + KEYS_PATH_LENGTH); + } + + return PerformanceAnalyzerMetrics.generatePath(startTime, PerformanceAnalyzerMetrics.sCacheConfigPath); + } + + static class CacheMaxSizeStatus extends MetricStatus { + + private final String cacheType; + + @JsonInclude(Include.NON_NULL) + private final long cacheMaxSize; + + CacheMaxSizeStatus(String cacheType, Long cacheMaxSize) { + this.cacheType = cacheType; + this.cacheMaxSize = cacheMaxSize; + } + + @JsonProperty(CacheConfigDimension.Constants.TYPE_VALUE) + public String getCacheType() { + return cacheType; + } + + @JsonProperty(CacheConfigValue.Constants.CACHE_MAX_SIZE_VALUE) + public long getCacheMaxSize() { + return cacheMaxSize; + } + } +} diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeDetailsCollector.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeDetailsCollector.java index 378ca3d2..b1b2971f 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeDetailsCollector.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeDetailsCollector.java @@ -16,6 +16,8 @@ package com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors; import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesHelper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesWrapper; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.NodeDetailColumns; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.NodeRole; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsConfiguration; @@ -27,15 +29,18 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; +import java.io.IOException; import java.util.Iterator; public class NodeDetailsCollector extends PerformanceAnalyzerMetricsCollector implements MetricsProcessor { public static final int SAMPLING_TIME_INTERVAL = MetricsConfiguration.CONFIG_MAP.get(NodeDetailsCollector.class).samplingInterval; private static final Logger LOG = LogManager.getLogger(NodeDetailsCollector.class); private static final int KEYS_PATH_LENGTH = 0; + private final ConfigOverridesWrapper configOverridesWrapper; - public NodeDetailsCollector() { + public NodeDetailsCollector(final ConfigOverridesWrapper configOverridesWrapper) { super(SAMPLING_TIME_INTERVAL, "NodeDetails"); + this.configOverridesWrapper = configOverridesWrapper; } @Override @@ -54,6 +59,30 @@ public void collectMetrics(long startTime) { .append( PerformanceAnalyzerMetrics.sMetricNewLineDelimitor); + // We add the config overrides in line#2 because we don't know how many lines + // follow that belong to actual node details, and the reader also has no way to + // know this information in advance unless we add the number of nodes as + // additional metadata in the file. + try { + if (configOverridesWrapper != null) { + String rcaOverrides = ConfigOverridesHelper.serialize(configOverridesWrapper.getCurrentClusterConfigOverrides()); + value.append(rcaOverrides); + } else { + LOG.warn("Overrides wrapper is null. 
Check NodeDetailsCollector instantiation."); + } + } catch (IOException ioe) { + LOG.error("Unable to serialize rca config overrides.", ioe); + } + value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor); + + // line#3 denotes the timestamp when the config override was last updated. + if (configOverridesWrapper != null) { + value.append(configOverridesWrapper.getLastUpdatedTimestamp()); + } else { + value.append(0L); + } + value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor); + DiscoveryNodes discoveryNodes = ESResources.INSTANCE.getClusterService().state().nodes(); DiscoveryNode masterNode = discoveryNodes.getMasterNode(); diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollector.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollector.java new file mode 100644 index 00000000..acfc68f7 --- /dev/null +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollector.java @@ -0,0 +1,250 @@ +/* + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. + */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors; + +import java.lang.reflect.Field; +import java.util.HashMap; +import java.util.Map; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.PerformanceAnalyzerController; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.util.Utils; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; +import org.elasticsearch.action.admin.indices.stats.IndexShardStats; +import org.elasticsearch.action.admin.indices.stats.ShardStats; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.indices.NodeIndicesStats; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.ShardStatsValue; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsConfiguration; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsProcessor; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics; +import com.fasterxml.jackson.annotation.JsonIgnore; +import com.fasterxml.jackson.annotation.JsonProperty; + +/** + * This collector collects metrics for all shards on a node in a single run. + * These are lightweight metrics that have minimal impact + * on the performance of the node. 
+ */ + +@SuppressWarnings("unchecked") +public class NodeStatsAllShardsMetricsCollector extends PerformanceAnalyzerMetricsCollector implements MetricsProcessor { + public static final int SAMPLING_TIME_INTERVAL = MetricsConfiguration.CONFIG_MAP.get( + NodeStatsAllShardsMetricsCollector.class).samplingInterval; + private static final int KEYS_PATH_LENGTH = 2; + private static final Logger LOG = LogManager.getLogger(NodeStatsAllShardsMetricsCollector.class); + private HashMap currentShards; + private final PerformanceAnalyzerController controller; + + + public NodeStatsAllShardsMetricsCollector(final PerformanceAnalyzerController controller) { + super(SAMPLING_TIME_INTERVAL, "NodeStatsMetrics"); + currentShards = new HashMap<>(); + this.controller = controller; + } + + private void populateCurrentShards() { + currentShards.clear(); + currentShards = Utils.getShards(); + } + + private Map valueCalculators = new HashMap() { { + put(ShardStatsValue.INDEXING_THROTTLE_TIME.toString(), + (shardStats) -> shardStats.getStats().getIndexing().getTotal().getThrottleTime().millis()); + + put(ShardStatsValue.CACHE_QUERY_HIT.toString(), (shardStats) -> shardStats.getStats().getQueryCache().getHitCount()); + put(ShardStatsValue.CACHE_QUERY_MISS.toString(), (shardStats) -> shardStats.getStats().getQueryCache().getMissCount()); + put(ShardStatsValue.CACHE_QUERY_SIZE.toString(), (shardStats) -> shardStats.getStats().getQueryCache().getMemorySizeInBytes()); + + put(ShardStatsValue.CACHE_FIELDDATA_EVICTION.toString(), (shardStats) -> shardStats.getStats().getFieldData().getEvictions()); + put(ShardStatsValue.CACHE_FIELDDATA_SIZE.toString(), (shardStats) -> shardStats.getStats().getFieldData().getMemorySizeInBytes()); + + put(ShardStatsValue.CACHE_REQUEST_HIT.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getHitCount()); + put(ShardStatsValue.CACHE_REQUEST_MISS.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getMissCount()); + put(ShardStatsValue.CACHE_REQUEST_EVICTION.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getEvictions()); + put(ShardStatsValue.CACHE_REQUEST_SIZE.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getMemorySizeInBytes()); + + } }; + + @Override + public String getMetricsPath(long startTime, String... keysPath) { + // throw exception if keysPath.length is not equal to 2 (Keys should be Index Name, and ShardId) + if (keysPath.length != KEYS_PATH_LENGTH) { + throw new RuntimeException("keys length should be " + KEYS_PATH_LENGTH); + } + return PerformanceAnalyzerMetrics.generatePath(startTime, PerformanceAnalyzerMetrics.sIndicesPath, keysPath[0], keysPath[1]); + } + + @Override + public void collectMetrics(long startTime) { + IndicesService indicesService = ESResources.INSTANCE.getIndicesService(); + + if (indicesService == null) { + return; + } + + try { + populateCurrentShards(); + // Metrics populated for all shards in every collection. 
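+                // Only cache-related stats (the QueryCache, FieldData and RequestCache CommonStatsFlags below)
+                // are requested, which is what keeps this per-shard pass cheap. For every shard, one timestamp
+                // line and one serialized status line are written, keyed by index name and shard id.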
+ for (HashMap.Entry currentShard : currentShards.entrySet() ){ + IndexShard currentIndexShard = (IndexShard)currentShard.getValue(); + IndexShardStats currentIndexShardStats = Utils.indexShardStats(indicesService, + currentIndexShard, new CommonStatsFlags(CommonStatsFlags.Flag.QueryCache, + CommonStatsFlags.Flag.FieldData, + CommonStatsFlags.Flag.RequestCache)); + for (ShardStats shardStats : currentIndexShardStats.getShards()) { + StringBuilder value = new StringBuilder(); + + value.append(PerformanceAnalyzerMetrics.getJsonCurrentMilliSeconds()); + // Populate the result with cache specific metrics only. + value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor) + .append(new NodeStatsMetricsAllShardsPerCollectionStatus(shardStats).serialize()); + saveMetricValues(value.toString(), startTime, currentIndexShardStats.getShardId().getIndexName(), + String.valueOf(currentIndexShardStats.getShardId().id())); + } + } + + } catch (Exception ex) { + LOG.debug("Exception in Collecting NodesStats Metrics: {} for startTime {} with ExceptionCode: {}", + () -> ex.toString(), () -> startTime, () -> StatExceptionCode.NODESTATS_COLLECTION_ERROR.toString()); + StatsCollector.instance().logException(StatExceptionCode.NODESTATS_COLLECTION_ERROR); + } + } + + //- Separated to have a unit test; and catch any code changes around this field + Field getNodeIndicesStatsByShardField() throws Exception { + Field field = NodeIndicesStats.class.getDeclaredField("statsByShard"); + field.setAccessible(true); + return field; + } + + public class NodeStatsMetricsAllShardsPerCollectionStatus extends MetricStatus { + + @JsonIgnore + private ShardStats shardStats; + + private final long queryCacheHitCount; + private final long queryCacheMissCount; + private final long queryCacheInBytes; + private final long fieldDataEvictions; + private final long fieldDataInBytes; + private final long requestCacheHitCount; + private final long requestCacheMissCount; + private final long requestCacheEvictions; + private final long requestCacheInBytes; + + public NodeStatsMetricsAllShardsPerCollectionStatus(ShardStats shardStats) { + super(); + this.shardStats = shardStats; + + this.queryCacheHitCount = calculate( + ShardStatsValue.CACHE_QUERY_HIT); + this.queryCacheMissCount = calculate( + ShardStatsValue.CACHE_QUERY_MISS); + this.queryCacheInBytes = calculate( + ShardStatsValue.CACHE_QUERY_SIZE); + this.fieldDataEvictions = calculate( + ShardStatsValue.CACHE_FIELDDATA_EVICTION); + this.fieldDataInBytes = calculate(ShardStatsValue.CACHE_FIELDDATA_SIZE); + this.requestCacheHitCount = calculate( + ShardStatsValue.CACHE_REQUEST_HIT); + this.requestCacheMissCount = calculate( + ShardStatsValue.CACHE_REQUEST_MISS); + this.requestCacheEvictions = calculate( + ShardStatsValue.CACHE_REQUEST_EVICTION); + this.requestCacheInBytes = calculate( + ShardStatsValue.CACHE_REQUEST_SIZE); + } + + @SuppressWarnings("checkstyle:parameternumber") + public NodeStatsMetricsAllShardsPerCollectionStatus(long queryCacheHitCount, long queryCacheMissCount, + long queryCacheInBytes, long fieldDataEvictions, + long fieldDataInBytes, long requestCacheHitCount, + long requestCacheMissCount, long requestCacheEvictions, + long requestCacheInBytes) { + super(); + this.shardStats = null; + + this.queryCacheHitCount = queryCacheHitCount; + this.queryCacheMissCount = queryCacheMissCount; + this.queryCacheInBytes = queryCacheInBytes; + this.fieldDataEvictions = fieldDataEvictions; + this.fieldDataInBytes = fieldDataInBytes; + this.requestCacheHitCount = 
requestCacheHitCount; + this.requestCacheMissCount = requestCacheMissCount; + this.requestCacheEvictions = requestCacheEvictions; + this.requestCacheInBytes = requestCacheInBytes; + } + + + private long calculate(ShardStatsValue nodeMetric) { + return valueCalculators.get(nodeMetric.toString()).calculateValue(shardStats); + } + + @JsonIgnore + public ShardStats getShardStats() { + return shardStats; + } + + @JsonProperty(ShardStatsValue.Constants.QUEY_CACHE_HIT_COUNT_VALUE) + public long getQueryCacheHitCount() { + return queryCacheHitCount; + } + + @JsonProperty(ShardStatsValue.Constants.QUERY_CACHE_MISS_COUNT_VALUE) + public long getQueryCacheMissCount() { + return queryCacheMissCount; + } + + @JsonProperty(ShardStatsValue.Constants.QUERY_CACHE_IN_BYTES_VALUE) + public long getQueryCacheInBytes() { + return queryCacheInBytes; + } + + @JsonProperty(ShardStatsValue.Constants.FIELDDATA_EVICTION_VALUE) + public long getFieldDataEvictions() { + return fieldDataEvictions; + } + + @JsonProperty(ShardStatsValue.Constants.FIELD_DATA_IN_BYTES_VALUE) + public long getFieldDataInBytes() { + return fieldDataInBytes; + } + + @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_HIT_COUNT_VALUE) + public long getRequestCacheHitCount() { + return requestCacheHitCount; + } + + @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_MISS_COUNT_VALUE) + public long getRequestCacheMissCount() { + return requestCacheMissCount; + } + + @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_EVICTION_VALUE) + public long getRequestCacheEvictions() { + return requestCacheEvictions; + } + + @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_IN_BYTES_VALUE) + public long getRequestCacheInBytes() { + return requestCacheInBytes; + } + + } +} diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsMetricsCollector.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollector.java similarity index 61% rename from src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsMetricsCollector.java rename to src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollector.java index 45453908..b546aad1 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsMetricsCollector.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollector.java @@ -16,26 +16,19 @@ package com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors; import java.lang.reflect.Field; -import java.util.EnumSet; import java.util.HashMap; -import java.util.HashSet; import java.util.Iterator; import java.util.Map; - import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.PerformanceAnalyzerController; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.util.Utils; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; -import org.elasticsearch.action.admin.indices.stats.CommonStats; import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; import org.elasticsearch.action.admin.indices.stats.IndexShardStats; import org.elasticsearch.action.admin.indices.stats.ShardStats; -import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.IndexShardState; -import org.elasticsearch.index.shard.ShardId; import 
org.elasticsearch.indices.IndicesService; import org.elasticsearch.indices.NodeIndicesStats; - import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.ShardStatsValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsConfiguration; @@ -44,83 +37,41 @@ import com.fasterxml.jackson.annotation.JsonIgnore; import com.fasterxml.jackson.annotation.JsonProperty; +/** + * This collector collects metrics for a fixed number of shards on a node in a single run. + * These are heavier-weight metrics that have a higher impact + * on the performance of the node. The number of shards is set via the cluster settings API; + * the parameter to set is shardsPerCollection. Metrics are populated for at most that many shards + * in a single run. + */ + @SuppressWarnings("unchecked") -public class NodeStatsMetricsCollector extends PerformanceAnalyzerMetricsCollector implements MetricsProcessor { - public static final int SAMPLING_TIME_INTERVAL = MetricsConfiguration.CONFIG_MAP.get(NodeStatsMetricsCollector.class).samplingInterval; +public class NodeStatsFixedShardsMetricsCollector extends PerformanceAnalyzerMetricsCollector implements MetricsProcessor { + public static final int SAMPLING_TIME_INTERVAL = MetricsConfiguration.CONFIG_MAP.get( + NodeStatsAllShardsMetricsCollector.class).samplingInterval; private static final int KEYS_PATH_LENGTH = 2; - private static final Logger LOG = LogManager.getLogger(NodeStatsMetricsCollector.class); + private static final Logger LOG = LogManager.getLogger(NodeStatsFixedShardsMetricsCollector.class); private HashMap currentShards; private Iterator> currentShardsIter; private final PerformanceAnalyzerController controller; - - public NodeStatsMetricsCollector(final PerformanceAnalyzerController controller) { + public NodeStatsFixedShardsMetricsCollector(final PerformanceAnalyzerController controller) { super(SAMPLING_TIME_INTERVAL, "NodeStatsMetrics"); currentShards = new HashMap<>(); currentShardsIter = currentShards.entrySet().iterator(); this.controller = controller; } - private String getUniqueShardIdKey(ShardId shardId) { - return "[" + shardId.getIndex().getUUID() + "][" + shardId.getId() + "]"; - } - private void populateCurrentShards() { currentShards.clear(); - Iterator indexServices = ESResources.INSTANCE.getIndicesService().iterator(); - while (indexServices.hasNext()) { - Iterator indexShards = indexServices.next().iterator(); - while (indexShards.hasNext()) { - IndexShard shard = indexShards.next(); - currentShards.put(getUniqueShardIdKey(shard.shardId()), shard); - } - } + currentShards = Utils.getShards(); currentShardsIter = currentShards.entrySet().iterator(); } - /** - * This function is copied directly from IndicesService.java in elastic search as the original function is not public - * we need to collect stats per shard based instead of calling the stat() function to fetch all at once(which increases - * cpu usage on data nodes dramatically). 
- */ - private IndexShardStats indexShardStats(final IndicesService indicesService, final IndexShard indexShard, - final CommonStatsFlags flags) { - if (indexShard.routingEntry() == null) { - return null; - } - - return new IndexShardStats( - indexShard.shardId(), - new ShardStats[]{ - new ShardStats( - indexShard.routingEntry(), - indexShard.shardPath(), - new CommonStats(indicesService.getIndicesQueryCache(), indexShard, flags), - null, - null, - null) - }); - } - - private static final EnumSet CAN_WRITE_INDEX_BUFFER_STATES = EnumSet.of( - IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED); - private Map valueCalculators = new HashMap() { { put(ShardStatsValue.INDEXING_THROTTLE_TIME.toString(), (shardStats) -> shardStats.getStats().getIndexing().getTotal().getThrottleTime().millis()); - put(ShardStatsValue.CACHE_QUERY_HIT.toString(), (shardStats) -> shardStats.getStats().getQueryCache().getHitCount()); - put(ShardStatsValue.CACHE_QUERY_MISS.toString(), (shardStats) -> shardStats.getStats().getQueryCache().getMissCount()); - put(ShardStatsValue.CACHE_QUERY_SIZE.toString(), (shardStats) -> shardStats.getStats().getQueryCache().getMemorySizeInBytes()); - - put(ShardStatsValue.CACHE_FIELDDATA_EVICTION.toString(), (shardStats) -> shardStats.getStats().getFieldData().getEvictions()); - put(ShardStatsValue.CACHE_FIELDDATA_SIZE.toString(), (shardStats) -> shardStats.getStats().getFieldData().getMemorySizeInBytes()); - - put(ShardStatsValue.CACHE_REQUEST_HIT.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getHitCount()); - put(ShardStatsValue.CACHE_REQUEST_MISS.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getMissCount()); - put(ShardStatsValue.CACHE_REQUEST_EVICTION.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getEvictions()); - put(ShardStatsValue.CACHE_REQUEST_SIZE.toString(), (shardStats) -> shardStats.getStats().getRequestCache().getMemorySizeInBytes()); - put(ShardStatsValue.REFRESH_EVENT.toString(), (shardStats) -> shardStats.getStats().getRefresh().getTotal()); put(ShardStatsValue.REFRESH_TIME.toString(), (shardStats) -> shardStats.getStats().getRefresh().getTotalTimeInMillis()); @@ -143,10 +94,10 @@ private IndexShardStats indexShardStats(final IndicesService indicesService, fin put(ShardStatsValue.DOC_VALUES_MEMORY.toString(), (shardStats) -> shardStats.getStats().getSegments().getDocValuesMemoryInBytes()); put(ShardStatsValue.INDEX_WRITER_MEMORY.toString(), (shardStats) -> shardStats.getStats().getSegments().getIndexWriterMemoryInBytes()); - put(ShardStatsValue.VERSION_MAP_MEMORY.toString(), - (shardStats) -> shardStats.getStats().getSegments() - .getVersionMapMemoryInBytes()); - put(ShardStatsValue.BITSET_MEMORY.toString(), (shardStats) -> shardStats.getStats().getSegments().getBitsetMemoryInBytes()); + put(ShardStatsValue.VERSION_MAP_MEMORY.toString(), + (shardStats) -> shardStats.getStats().getSegments() + .getVersionMapMemoryInBytes()); + put(ShardStatsValue.BITSET_MEMORY.toString(), (shardStats) -> shardStats.getStats().getSegments().getBitsetMemoryInBytes()); put(ShardStatsValue.INDEXING_BUFFER.toString(), (shardStats) -> getIndexBufferBytes(shardStats)); put(ShardStatsValue.SHARD_SIZE_IN_BYTES.toString(), (shardStats) -> shardStats.getStats().getStore().getSizeInBytes()); @@ -154,13 +105,14 @@ private IndexShardStats indexShardStats(final IndicesService indicesService, fin } }; private long getIndexBufferBytes(ShardStats shardStats) { - IndexShard shard = 
currentShards.get(getUniqueShardIdKey(shardStats.getShardRouting().shardId())); + IndexShard shard = currentShards.get(Utils.getUniqueShardIdKey(shardStats.getShardRouting().shardId())); if (shard == null) { return 0; } - return CAN_WRITE_INDEX_BUFFER_STATES.contains(shard.state()) ? shard.getWritingBytes() + shard.getIndexBufferRAMBytesUsed() : 0; + return Utils.CAN_WRITE_INDEX_BUFFER_STATES.contains(shard.state()) ? shard.getWritingBytes() + + shard.getIndexBufferRAMBytesUsed() : 0; } @@ -183,10 +135,6 @@ public void collectMetrics(long startTime) { return; } - NodeIndicesStats nodeIndicesStats = indicesService.stats(CommonStatsFlags.ALL); - - HashSet currentShards = new HashSet<>(); - try { //reach the end of current shardId list. retrieve new shard list from IndexService if (!currentShardsIter.hasNext()) { @@ -197,14 +145,21 @@ public void collectMetrics(long startTime) { break; } IndexShard currentIndexShard = currentShardsIter.next().getValue(); - IndexShardStats currentIndexShardStats = this.indexShardStats(indicesService, currentIndexShard, CommonStatsFlags.ALL); + IndexShardStats currentIndexShardStats = Utils.indexShardStats(indicesService, + currentIndexShard, new CommonStatsFlags(CommonStatsFlags.Flag.Segments, + CommonStatsFlags.Flag.Store, + CommonStatsFlags.Flag.Indexing, + CommonStatsFlags.Flag.Merge, + CommonStatsFlags.Flag.Flush, + CommonStatsFlags.Flag.Refresh, + CommonStatsFlags.Flag.Recovery)); for (ShardStats shardStats : currentIndexShardStats.getShards()) { StringBuilder value = new StringBuilder(); value.append(PerformanceAnalyzerMetrics.getJsonCurrentMilliSeconds()); //- go through the list of metrics to be collected and emit value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor) - .append(new NodeStatsMetricsStatus(shardStats).serialize()); + .append(new NodeStatsMetricsFixedShardsPerCollectionStatus(shardStats).serialize()); saveMetricValues(value.toString(), startTime, currentIndexShardStats.getShardId().getIndexName(), String.valueOf(currentIndexShardStats.getShardId().id())); @@ -212,11 +167,12 @@ public void collectMetrics(long startTime) { } } catch (Exception ex) { LOG.debug("Exception in Collecting NodesStats Metrics: {} for startTime {} with ExceptionCode: {}", - () -> ex.toString(), () -> startTime, () -> StatExceptionCode.NODESTATS_COLLECTION_ERROR.toString()); + () -> ex.toString(), () -> startTime, () -> StatExceptionCode.NODESTATS_COLLECTION_ERROR.toString()); StatsCollector.instance().logException(StatExceptionCode.NODESTATS_COLLECTION_ERROR); } } + //- Separated to have a unit test; and catch any code changes around this field Field getNodeIndicesStatsByShardField() throws Exception { Field field = NodeIndicesStats.class.getDeclaredField("statsByShard"); @@ -224,21 +180,12 @@ Field getNodeIndicesStatsByShardField() throws Exception { return field; } - public class NodeStatsMetricsStatus extends MetricStatus { + public class NodeStatsMetricsFixedShardsPerCollectionStatus extends MetricStatus { @JsonIgnore private ShardStats shardStats; private final long indexingThrottleTime; - private final long queryCacheHitCount; - private final long queryCacheMissCount; - private final long queryCacheInBytes; - private final long fieldDataEvictions; - private final long fieldDataInBytes; - private final long requestCacheHitCount; - private final long requestCacheMissCount; - private final long requestCacheEvictions; - private final long requestCacheInBytes; private final long refreshCount; private final long refreshTime; private final long flushCount; 
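The fixed-shards collector in this hunk caps how many shards it visits per collectMetrics() call and carries its iterator across calls, refreshing the shard snapshot once the iterator runs out; the cap is the shardsPerCollection setting described in the class javadoc. A minimal standalone sketch of that round-robin idea follows; ShardWalker, maxShardsPerRun and the emit callback are illustrative names, not part of this patch.

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BiConsumer;

    // Illustrative sketch: visit at most maxShardsPerRun entries per call, carrying the
    // iterator across calls so successive collection cycles cover different shards.
    class ShardWalker<K, V> {
        private Map<K, V> shards = new LinkedHashMap<>();
        private Iterator<Map.Entry<K, V>> iter = shards.entrySet().iterator();

        void collectOnce(Map<K, V> latestShards, int maxShardsPerRun, BiConsumer<K, V> emit) {
            if (!iter.hasNext()) {
                // Iterator exhausted: refresh the snapshot and start another pass.
                shards = new LinkedHashMap<>(latestShards);
                iter = shards.entrySet().iterator();
            }
            for (int visited = 0; visited < maxShardsPerRun && iter.hasNext(); visited++) {
                Map.Entry<K, V> e = iter.next();
                emit.accept(e.getKey(), e.getValue()); // compute and save the heavy per-shard stats here
            }
        }
    }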
@@ -260,29 +207,11 @@ public class NodeStatsMetricsStatus extends MetricStatus { private final long bitsetMemory; private final long shardSizeInBytes; - public NodeStatsMetricsStatus(ShardStats shardStats) { + public NodeStatsMetricsFixedShardsPerCollectionStatus(ShardStats shardStats) { super(); this.shardStats = shardStats; - this.indexingThrottleTime = calculate( - ShardStatsValue.INDEXING_THROTTLE_TIME); - this.queryCacheHitCount = calculate( - ShardStatsValue.CACHE_QUERY_HIT); - this.queryCacheMissCount = calculate( - ShardStatsValue.CACHE_QUERY_MISS); - this.queryCacheInBytes = calculate( - ShardStatsValue.CACHE_QUERY_SIZE); - this.fieldDataEvictions = calculate( - ShardStatsValue.CACHE_FIELDDATA_EVICTION); - this.fieldDataInBytes = calculate(ShardStatsValue.CACHE_FIELDDATA_SIZE); - this.requestCacheHitCount = calculate( - ShardStatsValue.CACHE_REQUEST_HIT); - this.requestCacheMissCount = calculate( - ShardStatsValue.CACHE_REQUEST_MISS); - this.requestCacheEvictions = calculate( - ShardStatsValue.CACHE_REQUEST_EVICTION); - this.requestCacheInBytes = calculate( - ShardStatsValue.CACHE_REQUEST_SIZE); + this.indexingThrottleTime = calculate(ShardStatsValue.INDEXING_THROTTLE_TIME); this.refreshCount = calculate(ShardStatsValue.REFRESH_EVENT); this.refreshTime = calculate(ShardStatsValue.REFRESH_TIME); this.flushCount = calculate(ShardStatsValue.FLUSH_EVENT); @@ -308,31 +237,18 @@ public NodeStatsMetricsStatus(ShardStats shardStats) { } @SuppressWarnings("checkstyle:parameternumber") - public NodeStatsMetricsStatus(long indexingThrottleTime, - long queryCacheHitCount, long queryCacheMissCount, - long queryCacheInBytes, long fieldDataEvictions, - long fieldDataInBytes, long requestCacheHitCount, - long requestCacheMissCount, long requestCacheEvictions, - long requestCacheInBytes, long refreshCount, long refreshTime, - long flushCount, long flushTime, long mergeCount, - long mergeTime, long mergeCurrent, long indexBufferBytes, - long segmentCount, long segmentsMemory, long termsMemory, - long storedFieldsMemory, long termVectorsMemory, - long normsMemory, long pointsMemory, long docValuesMemory, - long indexWriterMemory, long versionMapMemory, - long bitsetMemory, long shardSizeInBytes) { + public NodeStatsMetricsFixedShardsPerCollectionStatus(long indexingThrottleTime, long refreshCount, long refreshTime, + long flushCount, long flushTime, long mergeCount, + long mergeTime, long mergeCurrent, long indexBufferBytes, + long segmentCount, long segmentsMemory, long termsMemory, + long storedFieldsMemory, long termVectorsMemory, + long normsMemory, long pointsMemory, long docValuesMemory, + long indexWriterMemory, long versionMapMemory, + long bitsetMemory, long shardSizeInBytes) { super(); this.shardStats = null; + this.indexingThrottleTime = indexingThrottleTime; - this.queryCacheHitCount = queryCacheHitCount; - this.queryCacheMissCount = queryCacheMissCount; - this.queryCacheInBytes = queryCacheInBytes; - this.fieldDataEvictions = fieldDataEvictions; - this.fieldDataInBytes = fieldDataInBytes; - this.requestCacheHitCount = requestCacheHitCount; - this.requestCacheMissCount = requestCacheMissCount; - this.requestCacheEvictions = requestCacheEvictions; - this.requestCacheInBytes = requestCacheInBytes; this.refreshCount = refreshCount; this.refreshTime = refreshTime; this.flushCount = flushCount; @@ -360,61 +276,11 @@ private long calculate(ShardStatsValue nodeMetric) { return valueCalculators.get(nodeMetric.toString()).calculateValue(shardStats); } - @JsonIgnore - public ShardStats 
getShardStats() { - return shardStats; - } - @JsonProperty(ShardStatsValue.Constants.INDEXING_THROTTLE_TIME_VALUE) public long getIndexingThrottleTime() { return indexingThrottleTime; } - @JsonProperty(ShardStatsValue.Constants.QUEY_CACHE_HIT_COUNT_VALUE) - public long getQueryCacheHitCount() { - return queryCacheHitCount; - } - - @JsonProperty(ShardStatsValue.Constants.QUERY_CACHE_MISS_COUNT_VALUE) - public long getQueryCacheMissCount() { - return queryCacheMissCount; - } - - @JsonProperty(ShardStatsValue.Constants.QUERY_CACHE_IN_BYTES_VALUE) - public long getQueryCacheInBytes() { - return queryCacheInBytes; - } - - @JsonProperty(ShardStatsValue.Constants.FIELDDATA_EVICTION_VALUE) - public long getFieldDataEvictions() { - return fieldDataEvictions; - } - - @JsonProperty(ShardStatsValue.Constants.FIELD_DATA_IN_BYTES_VALUE) - public long getFieldDataInBytes() { - return fieldDataInBytes; - } - - @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_HIT_COUNT_VALUE) - public long getRequestCacheHitCount() { - return requestCacheHitCount; - } - - @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_MISS_COUNT_VALUE) - public long getRequestCacheMissCount() { - return requestCacheMissCount; - } - - @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_EVICTION_VALUE) - public long getRequestCacheEvictions() { - return requestCacheEvictions; - } - - @JsonProperty(ShardStatsValue.Constants.REQUEST_CACHE_IN_BYTES_VALUE) - public long getRequestCacheInBytes() { - return requestCacheInBytes; - } - @JsonProperty(ShardStatsValue.Constants.REFRESH_COUNT_VALUE) public long getRefreshCount() { return refreshCount; @@ -450,6 +316,11 @@ public long getMergeCurrent() { return mergeCurrent; } + @JsonIgnore + public ShardStats getShardStats() { + return shardStats; + } + @JsonProperty(ShardStatsValue.Constants.INDEX_BUFFER_BYTES_VALUE) public long getIndexBufferBytes() { return indexBufferBytes; diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/ThreadPoolMetricsCollector.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/ThreadPoolMetricsCollector.java index 86609452..18178a2a 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/ThreadPoolMetricsCollector.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/ThreadPoolMetricsCollector.java @@ -76,26 +76,31 @@ public void collectMetrics(long startTime) { statsRecordMap.put(threadPoolName, new ThreadPoolStatsRecord(startTime, stats.getRejected())); final long finalRejectionDelta = rejectionDelta; ThreadPoolStatus threadPoolStatus = AccessController.doPrivileged((PrivilegedAction) () -> { + Integer capacity; + Double latency; + //This is for backward compatibility. core ES may or may not emit latency metric + // (depending on whether the patch has been applied or not) + // so we need to use reflection to check whether getCapacity() method exist in ThreadPoolStats.java. try { - //This is for backward compatibility. core ES may or may not emit latency metric - // (depending on whether the patch has been applied or not) - // so we need to use reflection to check whether getLatency() method exist in ThreadPoolStats.java. 
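As the backward-compatibility comments in this hunk note, the collector probes ThreadPoolStats via reflection so the plugin works whether or not core ES carries the latency/capacity patch. A minimal standalone sketch of that probe-and-fall-back idea; the OptionalStat helper and its method name are made up for illustration.

    import java.lang.reflect.Method;

    // Illustrative sketch: invoke a no-arg accessor if the running ES version has it,
    // otherwise return null so the caller can simply omit that metric.
    final class OptionalStat {
        private OptionalStat() {}

        @SuppressWarnings("unchecked")
        static <T> T invokeIfPresent(Object target, String methodName) {
            try {
                Method m = target.getClass().getMethod(methodName);
                return (T) m.invoke(target);
            } catch (Exception e) {
                // Method is absent (or inaccessible) on this version; treat the metric as missing.
                return null;
            }
        }
    }

    // Usage, mirroring the hunk below:
    //   Double latency = OptionalStat.invokeIfPresent(stats, "getLatency");
    //   Integer capacity = OptionalStat.invokeIfPresent(stats, "getCapacity");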
- // call stats.getLatency() - Method getLantencyMethod = Stats.class.getMethod("getLatency"); - double latency = (Double) getLantencyMethod.invoke(stats); // call stats.getCapacity() Method getCapacityMethod = Stats.class.getMethod("getCapacity"); - int capacity = (Integer) getCapacityMethod.invoke(stats); - return new ThreadPoolStatus(stats.getName(), - stats.getQueue(), finalRejectionDelta, - stats.getThreads(), stats.getActive(), - latency, capacity); - } catch (Exception e) { + capacity = (Integer) getCapacityMethod.invoke(stats); + } + catch (Exception e) { //core ES does not have the latency patch. send the threadpool metrics without adding latency. - return new ThreadPoolStatus(stats.getName(), - stats.getQueue(), finalRejectionDelta, - stats.getThreads(), stats.getActive()); + capacity = null; + } + try { + // call stats.getLatency() + Method getLantencyMethod = Stats.class.getMethod("getLatency"); + latency = (Double) getLantencyMethod.invoke(stats); + } catch (Exception e) { + latency = null; } + return new ThreadPoolStatus(stats.getName(), + stats.getQueue(), finalRejectionDelta, + stats.getThreads(), stats.getActive(), + latency, capacity); }); value.append(PerformanceAnalyzerMetrics.sMetricNewLineDelimitor) .append(threadPoolStatus.serialize()); @@ -161,8 +166,8 @@ public ThreadPoolStatus(String type, long rejected, int threadsCount, int threadsActive, - double queueLatency, - int queueCapacity) { + Double queueLatency, + Integer queueCapacity) { this.type = type; this.queueSize = queueSize; this.rejected = rejected; diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/ClusterSettingsManager.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/ClusterSettingsManager.java index c52ee4c2..ce3ea360 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/ClusterSettingsManager.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/ClusterSettingsManager.java @@ -28,14 +28,17 @@ */ public class ClusterSettingsManager implements ClusterStateListener { private static final Logger LOG = LogManager.getLogger(ClusterSettingsManager.class); - private Map, List>> listenerMap = new HashMap<>(); - private final List> managedSettings = new ArrayList<>(); + private final Map, List>> intSettingListenerMap = new HashMap<>(); + private final Map, List>> stringSettingListenerMap = new HashMap<>(); + private final List> managedIntSettings = new ArrayList<>(); + private final List> managedStringSettings = new ArrayList<>(); private final ClusterSettingsResponseHandler clusterSettingsResponseHandler; private boolean initialized = false; - public ClusterSettingsManager(List> initialSettings) { - managedSettings.addAll(initialSettings); + public ClusterSettingsManager(List> intSettings, List> stringSettings) { + managedIntSettings.addAll(intSettings); + managedStringSettings.addAll(stringSettings); this.clusterSettingsResponseHandler = new ClusterSettingsResponseHandler(); } @@ -45,18 +48,35 @@ public ClusterSettingsManager(List> initialSettings) { * @param setting The setting that needs to be listened to. * @param listener The listener object that will be called when the setting changes. 
*/ - public void addSubscriberForSetting(Setting setting, ClusterSettingListener listener) { - if (listenerMap.containsKey(setting)) { - final List> currentListeners = listenerMap.get(setting); + public void addSubscriberForIntSetting(Setting setting, ClusterSettingListener listener) { + if (intSettingListenerMap.containsKey(setting)) { + final List> currentListeners = intSettingListenerMap.get(setting); if (!currentListeners.contains(listener)) { currentListeners.add(listener); - listenerMap.put(setting, currentListeners); + intSettingListenerMap.put(setting, currentListeners); } } else { - listenerMap.put(setting, Collections.singletonList(listener)); + intSettingListenerMap.put(setting, Collections.singletonList(listener)); } } + /** + * Adds a listener that will be called when the requested setting's value changes. + * + * @param setting The setting that needs to be listened to. + * @param listener The listener object that will be called when the setting changes. + */ + public void addSubscriberForStringSetting(Setting setting, ClusterSettingListener listener) { + if (stringSettingListenerMap.containsKey(setting)) { + final List> currentListeners = stringSettingListenerMap.get(setting); + if (!currentListeners.contains(listener)) { + currentListeners.add(listener); + stringSettingListenerMap.put(setting, currentListeners); + } + } else { + stringSettingListenerMap.put(setting, Collections.singletonList(listener)); + } + } /** * Bootstraps the listeners and tries to read initial values for cluster settings. */ @@ -91,14 +111,34 @@ public void updateSetting(final Setting setting, final Integer newValue ESResources.INSTANCE.getClient().admin().cluster().updateSettings(request); } + /** + * Updates the requested setting with the requested value across the cluster. + * + * @param setting The setting that needs to be updated. + * @param newValue The new value for the setting. + */ + public void updateSetting(final Setting setting, final String newValue) { + final ClusterUpdateSettingsRequest request = new ClusterUpdateSettingsRequest(); + request.persistentSettings(Settings.builder() + .put(setting.getKey(), newValue) + .build()); + ESResources.INSTANCE.getClient().admin().cluster().updateSettings(request); + } + /** * Registers a setting update listener for all the settings managed by this instance. */ private void registerSettingUpdateListener() { - for (Setting setting : managedSettings) { + for (Setting setting : managedIntSettings) { ESResources.INSTANCE.getClusterService() .getClusterSettings() - .addSettingsUpdateConsumer(setting, updatedVal -> callListeners(setting, updatedVal)); + .addSettingsUpdateConsumer(setting, updatedVal -> callIntSettingListeners(setting, updatedVal)); + } + + for (Setting setting : managedStringSettings) { + ESResources.INSTANCE.getClusterService() + .getClusterSettings() + .addSettingsUpdateConsumer(setting, updatedVal -> callStringSettingListeners(setting, updatedVal)); } } @@ -166,9 +206,9 @@ public void clusterChanged(final ClusterChangedEvent event) { * @param setting The setting whose listeners need to be notified. * @param settingValue The new value for the setting. 
*/ - private void callListeners(final Setting setting, int settingValue) { + private void callIntSettingListeners(final Setting setting, int settingValue) { try { - final List> listeners = listenerMap.get(setting); + final List> listeners = intSettingListenerMap.get(setting); if (listeners != null) { for (ClusterSettingListener listener : listeners) { listener.onSettingUpdate(settingValue); @@ -180,6 +220,25 @@ private void callListeners(final Setting setting, int settingValue) { } } + /** + * Calls all the listeners for the specified setting with the requested value. + * + * @param setting The setting whose listeners need to be notified. + * @param settingValue The new value for the setting. + */ + private void callStringSettingListeners(final Setting setting, String settingValue) { + try { + final List> listeners = stringSettingListenerMap.get(setting); + if (listeners != null) { + for (ClusterSettingListener listener : listeners) { + listener.onSettingUpdate(settingValue); + } + } + } catch(Exception ex) { + LOG.error(ex); + StatsCollector.instance().logException(StatExceptionCode.ES_REQUEST_INTERCEPTOR_ERROR); + } + } /** * Class that handles response to GET /_cluster/settings */ @@ -192,14 +251,19 @@ private class ClusterSettingsResponseHandler implements ActionListener setting : managedSettings) { + for (final Setting setting : managedIntSettings) { Integer settingValue = clusterSettings.getAsInt(setting.getKey(), null); if (settingValue != null) { - callListeners(setting, settingValue); + callIntSettingListeners(setting, settingValue); + } + } + + for (final Setting setting : managedStringSettings) { + String settingValue = clusterSettings.get(setting.getKey(), ""); + if (settingValue != null) { + callStringSettingListeners(setting, settingValue); } } } diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/PerformanceAnalyzerClusterSettings.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/PerformanceAnalyzerClusterSettings.java index e60c5e5e..a8e27b58 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/PerformanceAnalyzerClusterSettings.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/PerformanceAnalyzerClusterSettings.java @@ -14,7 +14,7 @@ public final class PerformanceAnalyzerClusterSettings { 0, Setting.Property.NodeScope, Setting.Property.Dynamic - ); + ); public enum PerformanceAnalyzerFeatureBits { PA_BIT, @@ -33,4 +33,15 @@ public enum PerformanceAnalyzerFeatureBits { Setting.Property.NodeScope, Setting.Property.Dynamic ); + + /** + * Cluster setting controlling the config overrides to be applied on performance + * analyzer components. + */ + public static final Setting CONFIG_OVERRIDES_SETTING = Setting.simpleString( + "cluster.metadata.perf_analyzer.config.overrides", + "", + Setting.Property.NodeScope, + Setting.Property.Dynamic + ); } diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandler.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandler.java new file mode 100644 index 00000000..6c9a0f67 --- /dev/null +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandler.java @@ -0,0 +1,204 @@ +/* + * Copyright <2020> Amazon.com, Inc. or its affiliates. 
All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. + */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler; + +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverrides; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesHelper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesWrapper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.ClusterSettingListener; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.ClusterSettingsManager; +import com.google.common.collect.ImmutableList; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.elasticsearch.common.settings.Setting; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.Set; + +public class ConfigOverridesClusterSettingHandler implements ClusterSettingListener { + + private static final Logger LOG = LogManager.getLogger(ConfigOverridesClusterSettingHandler.class); + + private final ClusterSettingsManager clusterSettingsManager; + private final ConfigOverridesWrapper overridesHolder; + private final Setting setting; + + public ConfigOverridesClusterSettingHandler(final ConfigOverridesWrapper overridesHolder, + final ClusterSettingsManager clusterSettingsManager, + final Setting setting) { + this.clusterSettingsManager = clusterSettingsManager; + this.overridesHolder = overridesHolder; + this.setting = setting; + } + + /** + * Handler that gets called when there is a new value for the setting that this listener is + * listening to. + * + * @param newSettingValue The value of the new setting. + */ + @Override + public void onSettingUpdate(String newSettingValue) { + try { + if (newSettingValue != null && !newSettingValue.isEmpty()) { + final ConfigOverrides newOverrides = ConfigOverridesHelper.deserialize(newSettingValue); + overridesHolder.setCurrentClusterConfigOverrides(newOverrides); + overridesHolder.setLastUpdatedTimestamp(System.currentTimeMillis()); + } else { + LOG.warn("Config override setting update called with empty string. Ignoring."); + } + } catch (IOException e) { + LOG.error("Unable to apply received cluster setting update: " + newSettingValue, e); + } + } + + /** + * Updates the cluster setting with the new set of config overrides. + * + * @param newOverrides The new set of overrides that need to be applied. + * @throws IOException if unable to serialize the setting. + */ + public void updateConfigOverrides(final ConfigOverrides newOverrides) throws IOException { + String newClusterSettingValue = buildClusterSettingValue(newOverrides); + LOG.debug("Updating cluster setting with new overrides string: {}", newClusterSettingValue); + clusterSettingsManager.updateSetting(setting, newClusterSettingValue); + } + + /** + * Generates a string representation of overrides. 
+ * + * @param newOverrides The new overrides that need to be merged with the existing + * overrides. + * @return String value of the merged config overrides. + */ + private String buildClusterSettingValue(final ConfigOverrides newOverrides) throws IOException { + final ConfigOverrides mergedConfigOverrides = merge(overridesHolder.getCurrentClusterConfigOverrides(), newOverrides); + + return ConfigOverridesHelper.serialize(mergedConfigOverrides); + } + + /** + * Merges the current set of overrides with the new set and returns a new instance + * of the merged config overrides. + * + * @param other the other ConfigOverrides to merge from. + * @return A new instance of the ConfigOverrides representing the merged config + * override. + */ + private ConfigOverrides merge(final ConfigOverrides current, final ConfigOverrides other) { + final ConfigOverrides merged = new ConfigOverrides(); + ConfigOverrides.Overrides optionalCurrentEnabled = Optional.ofNullable(current.getEnable()) + .orElseGet(ConfigOverrides.Overrides::new); + ConfigOverrides.Overrides optionalCurrentDisabled = Optional.ofNullable(current.getDisable()) + .orElseGet(ConfigOverrides.Overrides::new); + ConfigOverrides.Overrides optionalNewEnable = Optional.ofNullable(other.getEnable()) + .orElseGet(ConfigOverrides.Overrides::new); + ConfigOverrides.Overrides optionalNewDisable = Optional.ofNullable(other.getDisable()) + .orElseGet(ConfigOverrides.Overrides::new); + + mergeRcas(merged, optionalCurrentEnabled, optionalNewEnable, optionalCurrentDisabled, optionalNewDisable); + mergeDeciders(merged, optionalCurrentEnabled, optionalNewEnable, optionalCurrentDisabled, optionalNewDisable); + mergeActions(merged, optionalCurrentEnabled, optionalNewEnable, optionalCurrentDisabled, optionalNewDisable); + + return merged; + } + + private void mergeRcas(final ConfigOverrides merged, + final ConfigOverrides.Overrides baseEnabled, + final ConfigOverrides.Overrides newEnabled, + final ConfigOverrides.Overrides baseDisabled, + final ConfigOverrides.Overrides newDisabled) { + List currentRcaEnabled = Optional.ofNullable(baseEnabled.getRcas()) + .orElseGet(ArrayList::new); + List currentRcaDisabled = Optional.ofNullable(baseDisabled.getRcas()) + .orElseGet(ArrayList::new); + List requestedRcasEnabled = Optional.ofNullable(newEnabled.getRcas()) + .orElseGet(ArrayList::new); + List requestedRcasDisabled = Optional.ofNullable(newDisabled.getRcas()) + .orElseGet(ArrayList::new); + + List mergedRcasEnabled = combineLists(currentRcaEnabled, requestedRcasEnabled, requestedRcasDisabled); + List mergedRcasDisabled = combineLists(currentRcaDisabled, requestedRcasDisabled, requestedRcasEnabled); + + merged.getEnable().setRcas(mergedRcasEnabled); + merged.getDisable().setRcas(mergedRcasDisabled); + } + + private void mergeDeciders(final ConfigOverrides merged, + final ConfigOverrides.Overrides baseEnabled, + final ConfigOverrides.Overrides newEnabled, + final ConfigOverrides.Overrides baseDisabled, + final ConfigOverrides.Overrides newDisabled) { + List currentDecidersEnabled = Optional.ofNullable(baseEnabled.getDeciders()) + .orElseGet(ArrayList::new); + List currentDecidersDisabled = Optional.ofNullable(baseDisabled.getDeciders()) + .orElseGet(ArrayList::new); + List requestedDecidersEnabled = Optional.ofNullable(newEnabled.getDeciders()) + .orElseGet(ArrayList::new); + List requestedDecidersDisabled = Optional.ofNullable(newDisabled.getDeciders()) + .orElseGet(ArrayList::new); + + List mergedDecidersEnabled = combineLists(currentDecidersEnabled, 
requestedDecidersEnabled, requestedDecidersDisabled); + List mergedDecidersDisabled = combineLists(currentDecidersDisabled, requestedDecidersDisabled, requestedDecidersEnabled); + + merged.getEnable().setDeciders(mergedDecidersEnabled); + merged.getDisable().setDeciders(mergedDecidersDisabled); + } + + private void mergeActions(final ConfigOverrides merged, + final ConfigOverrides.Overrides baseEnabled, + final ConfigOverrides.Overrides newEnabled, + final ConfigOverrides.Overrides baseDisabled, + final ConfigOverrides.Overrides newDisabled) { + List currentActionsEnabled = Optional.ofNullable(baseEnabled.getActions()) + .orElseGet(ArrayList::new); + List currentActionsDisabled = Optional.ofNullable(baseDisabled.getActions()) + .orElseGet(ArrayList::new); + List requestedActionsEnabled = Optional.ofNullable(newEnabled.getActions()) + .orElseGet(ArrayList::new); + List requestedActionsDisabled = Optional.ofNullable(newDisabled.getActions()) + .orElseGet(ArrayList::new); + + List mergedActionsEnabled = combineLists(currentActionsEnabled, requestedActionsEnabled, requestedActionsDisabled); + List mergedActionsDisabled = combineLists(currentActionsDisabled, requestedActionsDisabled, requestedActionsEnabled); + + merged.getEnable().setActions(mergedActionsEnabled); + merged.getDisable().setActions(mergedActionsDisabled); + } + + /** + * Combines three lists by adding all elements in the addList to the base list and + * removing all elements in the remove list from the combined list. + * For example, combining baseList [a, b] with addList [c] and removeList [b] yields a list + * containing a and c; ordering is not guaranteed because a HashSet is used for de-duplication. + * + * @param baseList The base list. + * @param addList The list whose contents need to be added to the base list. + * @param removeList The list whose contents should be removed from the base list if present. + * @return The combined list as an immutable list. + */ + private List combineLists(List baseList, List addList, List removeList) { + Set combinedEnabled = new HashSet<>(baseList); + combinedEnabled.addAll(addList); + combinedEnabled.removeAll(removeList); + + return ImmutableList.copyOf(combinedEnabled); + } + +} diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/PerformanceAnalyzerClusterSettingHandler.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/PerformanceAnalyzerClusterSettingHandler.java index 1a536739..4eb00661 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/PerformanceAnalyzerClusterSettingHandler.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/PerformanceAnalyzerClusterSettingHandler.java @@ -165,7 +165,8 @@ private Integer getRcaSettingValueFromState(final boolean shouldEnable) { int clusterSetting = currentClusterSetting; if (shouldEnable) { - return controller.isPerformanceAnalyzerEnabled() ? setBit(clusterSetting, RCA_ENABLED_BIT_POS) : clusterSetting; + return checkBit(currentClusterSetting, PA_ENABLED_BIT_POS) + ? setBit(clusterSetting, RCA_ENABLED_BIT_POS) : clusterSetting; } else { return resetBit(clusterSetting, RCA_ENABLED_BIT_POS); } @@ -182,7 +183,8 @@ private Integer getLoggingSettingValueFromState(final boolean shouldEnable) { int clusterSetting = currentClusterSetting; if (shouldEnable) { - return controller.isPerformanceAnalyzerEnabled() ? setBit(clusterSetting, LOGGING_ENABLED_BIT_POS) : clusterSetting; + return checkBit(currentClusterSetting, PA_ENABLED_BIT_POS) + ? 
setBit(clusterSetting, LOGGING_ENABLED_BIT_POS) : clusterSetting; } else { return resetBit(clusterSetting, LOGGING_ENABLED_BIT_POS); } @@ -209,4 +211,15 @@ private int setBit(int number, int bitPosition) { private int resetBit(int number, int bitPosition) { return bitPosition < MAX_ALLOWED_BIT_POS ? (number & ~(1 << bitPosition)) : number; } + + /** + * Checks if the bit is set or not at the specified position. + * + * @param clusterSettingValue The number which needs to be checked. + * @param bitPosition The position of the bit in the clusterSettingValue + * @return true if the bit is set, false otherwise. + */ + private boolean checkBit(int clusterSettingValue, int bitPosition) { + return ((bitPosition < MAX_ALLOWED_BIT_POS) & (clusterSettingValue & (1 << bitPosition)) == ENABLED_VALUE); + } } diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerClusterConfigAction.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerClusterConfigAction.java index 98b3aa8f..4abeb188 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerClusterConfigAction.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerClusterConfigAction.java @@ -24,103 +24,113 @@ import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler.PerformanceAnalyzerClusterSettingHandler; /** - * Rest request handler for handling cluster-wide enabling and disabling of performance analyzer features. + * Rest request handler for handling cluster-wide enabling and disabling of performance analyzer + * features. */ public class PerformanceAnalyzerClusterConfigAction extends BaseRestHandler { - private static final Logger LOG = LogManager.getLogger(PerformanceAnalyzerClusterConfigAction.class); - private static final String PA_CLUSTER_CONFIG_PATH = "/_opendistro/_performanceanalyzer/cluster/config"; - private static final String RCA_CLUSTER_CONFIG_PATH = "/_opendistro/_performanceanalyzer/rca/cluster/config"; - private static final String LOGGING_CLUSTER_CONFIG_PATH = "/_opendistro/_performanceanalyzer/logging/cluster/config"; - private static final String ENABLED = "enabled"; - private static final String SHARDS_PER_COLLECTION = "shardsPerCollection"; - private static final String CURRENT = "currentPerformanceAnalyzerClusterState"; - private static final String NAME = "PerformanceAnalyzerClusterConfigAction"; + private static final Logger LOG = + LogManager.getLogger(PerformanceAnalyzerClusterConfigAction.class); + private static final String PA_CLUSTER_CONFIG_PATH = + "/_opendistro/_performanceanalyzer/cluster/config"; + private static final String RCA_CLUSTER_CONFIG_PATH = + "/_opendistro/_performanceanalyzer/rca/cluster/config"; + private static final String LOGGING_CLUSTER_CONFIG_PATH = + "/_opendistro/_performanceanalyzer/logging/cluster/config"; + private static final String ENABLED = "enabled"; + private static final String SHARDS_PER_COLLECTION = "shardsPerCollection"; + private static final String CURRENT = "currentPerformanceAnalyzerClusterState"; + private static final String NAME = "PerformanceAnalyzerClusterConfigAction"; - private static final List ROUTES = unmodifiableList(asList( - new Route(RestRequest.Method.GET, PA_CLUSTER_CONFIG_PATH), - new Route(RestRequest.Method.POST, PA_CLUSTER_CONFIG_PATH), - new Route(RestRequest.Method.GET, 
RCA_CLUSTER_CONFIG_PATH), - new Route(RestRequest.Method.POST, RCA_CLUSTER_CONFIG_PATH), - new Route(RestRequest.Method.GET, LOGGING_CLUSTER_CONFIG_PATH), - new Route(RestRequest.Method.POST, LOGGING_CLUSTER_CONFIG_PATH) - )); + private static final List ROUTES = + unmodifiableList( + asList( + new Route(RestRequest.Method.GET, PA_CLUSTER_CONFIG_PATH), + new Route(RestRequest.Method.POST, PA_CLUSTER_CONFIG_PATH), + new Route(RestRequest.Method.GET, RCA_CLUSTER_CONFIG_PATH), + new Route(RestRequest.Method.POST, RCA_CLUSTER_CONFIG_PATH), + new Route(RestRequest.Method.GET, LOGGING_CLUSTER_CONFIG_PATH), + new Route(RestRequest.Method.POST, LOGGING_CLUSTER_CONFIG_PATH))); - private final PerformanceAnalyzerClusterSettingHandler clusterSettingHandler; - private final NodeStatsSettingHandler nodeStatsSettingHandler; + private final PerformanceAnalyzerClusterSettingHandler clusterSettingHandler; + private final NodeStatsSettingHandler nodeStatsSettingHandler; - public PerformanceAnalyzerClusterConfigAction(final Settings settings, final RestController restController, - final PerformanceAnalyzerClusterSettingHandler clusterSettingHandler, - final NodeStatsSettingHandler nodeStatsSettingHandler) { - super(); - this.clusterSettingHandler = clusterSettingHandler; - this.nodeStatsSettingHandler = nodeStatsSettingHandler; - } + public PerformanceAnalyzerClusterConfigAction( + final Settings settings, + final RestController restController, + final PerformanceAnalyzerClusterSettingHandler clusterSettingHandler, + final NodeStatsSettingHandler nodeStatsSettingHandler) { + super(); + this.clusterSettingHandler = clusterSettingHandler; + this.nodeStatsSettingHandler = nodeStatsSettingHandler; + } - @Override - public List routes() { - return ROUTES; - } + @Override + public List routes() { + return ROUTES; + } - /** - * @return the name of this handler. The name should be human readable and - * should describe the action that will performed when this API is - * called. - */ - @Override - public String getName() { - return PerformanceAnalyzerClusterConfigAction.class.getSimpleName(); - } + /** + * @return the name of this handler. The name should be human readable and should describe the + * action that will performed when this API is called. + */ + @Override + public String getName() { + return PerformanceAnalyzerClusterConfigAction.class.getSimpleName(); + } - /** - * Prepare the request for execution. Implementations should consume all request params before - * returning the runnable for actual execution. Unconsumed params will immediately terminate - * execution of the request. However, some params are only used in processing the response; - * implementations can override {@link BaseRestHandler#responseParams()} to indicate such - * params. 
- * - * @param request the request to execute - * @param client client for executing actions on the local node - * @return the action to execute - * @throws IOException if an I/O exception occurred parsing the request and preparing for - * execution - */ - @Override - protected RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { - if (request.method() == RestRequest.Method.POST && request.content().length() > 0) { - Map map = XContentHelper.convertToMap(request.content(), false, XContentType.JSON).v2(); - Object value = map.get(ENABLED); - LOG.debug("PerformanceAnalyzer:Value (Object) Received as Part of Request: {} current value: {}", value, - clusterSettingHandler.getCurrentClusterSettingValue()); + /** + * Prepare the request for execution. Implementations should consume all request params before + * returning the runnable for actual execution. Unconsumed params will immediately terminate + * execution of the request. However, some params are only used in processing the response; + * implementations can override {@link BaseRestHandler#responseParams()} to indicate such params. + * + * @param request the request to execute + * @param client client for executing actions on the local node + * @return the action to execute + * @throws IOException if an I/O exception occurred parsing the request and preparing for + * execution + */ + @Override + protected RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) + throws IOException { + if (request.method() == RestRequest.Method.POST && request.content().length() > 0) { + Map map = + XContentHelper.convertToMap(request.content(), false, XContentType.JSON).v2(); + Object value = map.get(ENABLED); + LOG.debug( + "PerformanceAnalyzer:Value (Object) Received as Part of Request: {} current value: {}", + value, + clusterSettingHandler.getCurrentClusterSettingValue()); - if (value instanceof Boolean) { - if (request.path().contains(RCA_CLUSTER_CONFIG_PATH)) { - clusterSettingHandler.updateRcaSetting((Boolean) value); - } else if (request.path().contains(LOGGING_CLUSTER_CONFIG_PATH)) { - clusterSettingHandler.updateLoggingSetting((Boolean) value); - } else { - clusterSettingHandler.updatePerformanceAnalyzerSetting((Boolean) value); - } - } - // update node stats setting if exists - if (map.containsKey(SHARDS_PER_COLLECTION)) { - Object shardPerCollectionValue = map.get(SHARDS_PER_COLLECTION); - if (shardPerCollectionValue instanceof Integer) { - nodeStatsSettingHandler.updateNodeStatsSetting((Integer)shardPerCollectionValue); - } - } + if (value instanceof Boolean) { + if (request.path().contains(RCA_CLUSTER_CONFIG_PATH)) { + clusterSettingHandler.updateRcaSetting((Boolean) value); + } else if (request.path().contains(LOGGING_CLUSTER_CONFIG_PATH)) { + clusterSettingHandler.updateLoggingSetting((Boolean) value); + } else { + clusterSettingHandler.updatePerformanceAnalyzerSetting((Boolean) value); } - - return channel -> { - try { - XContentBuilder builder = channel.newBuilder(); - builder.startObject(); - builder.field(CURRENT, clusterSettingHandler.getCurrentClusterSettingValue()); - builder.field(SHARDS_PER_COLLECTION, nodeStatsSettingHandler.getNodeStatsSetting()); - builder.endObject(); - channel.sendResponse(new BytesRestResponse(RestStatus.OK, builder)); - } catch (IOException ioe) { - LOG.error("Error sending response", ioe); - } - }; + } + // update node stats setting if exists + if (map.containsKey(SHARDS_PER_COLLECTION)) { + Object shardPerCollectionValue = 
map.get(SHARDS_PER_COLLECTION); + if (shardPerCollectionValue instanceof Integer) { + nodeStatsSettingHandler.updateNodeStatsSetting((Integer) shardPerCollectionValue); + } + } } + + return channel -> { + try { + XContentBuilder builder = channel.newBuilder(); + builder.startObject(); + builder.field(CURRENT, clusterSettingHandler.getCurrentClusterSettingValue()); + builder.field(SHARDS_PER_COLLECTION, nodeStatsSettingHandler.getNodeStatsSetting()); + builder.endObject(); + channel.sendResponse(new BytesRestResponse(RestStatus.OK, builder)); + } catch (IOException ioe) { + LOG.error("Error sending response", ioe); + } + }; + } } diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerConfigAction.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerConfigAction.java index 7e21da4c..51df22e7 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerConfigAction.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerConfigAction.java @@ -36,116 +36,129 @@ @SuppressWarnings("deprecation") public class PerformanceAnalyzerConfigAction extends BaseRestHandler { - private static final Logger LOG = LogManager.getLogger(PerformanceAnalyzerConfigAction.class); - private static final String ENABLED = "enabled"; - private static final String SHARDS_PER_COLLECTION = "shardsPerCollection"; - private static final String PA_ENABLED = "performanceAnalyzerEnabled"; - private static final String RCA_ENABLED = "rcaEnabled"; - private static final String PA_LOGGING_ENABLED = "loggingEnabled"; - private static final String RCA_CONFIG_PATH = "/_opendistro/_performanceanalyzer/rca/config"; - private static final String PA_CONFIG_PATH = "/_opendistro/_performanceanalyzer/config"; - private static final String LOGGING_CONFIG_PATH = "/_opendistro/_performanceanalyzer/logging/config"; - private static PerformanceAnalyzerConfigAction instance = null; - private static final List ROUTES = unmodifiableList(asList( - new Route(RestRequest.Method.GET, PA_CONFIG_PATH), - new Route(RestRequest.Method.POST, PA_CONFIG_PATH), - new Route(RestRequest.Method.GET, RCA_CONFIG_PATH), - new Route(RestRequest.Method.POST, RCA_CONFIG_PATH), - new Route(RestRequest.Method.GET, LOGGING_CONFIG_PATH), - new Route(RestRequest.Method.POST, LOGGING_CONFIG_PATH) - )); - private final PerformanceAnalyzerController performanceAnalyzerController; + private static final Logger LOG = LogManager.getLogger(PerformanceAnalyzerConfigAction.class); + private static final String ENABLED = "enabled"; + private static final String SHARDS_PER_COLLECTION = "shardsPerCollection"; + private static final String PA_ENABLED = "performanceAnalyzerEnabled"; + private static final String RCA_ENABLED = "rcaEnabled"; + private static final String PA_LOGGING_ENABLED = "loggingEnabled"; + private static final String RCA_CONFIG_PATH = "/_opendistro/_performanceanalyzer/rca/config"; + private static final String PA_CONFIG_PATH = "/_opendistro/_performanceanalyzer/config"; + private static final String LOGGING_CONFIG_PATH = + "/_opendistro/_performanceanalyzer/logging/config"; + private static PerformanceAnalyzerConfigAction instance = null; + private static final List ROUTES = + unmodifiableList( + asList( + new Route(RestRequest.Method.GET, PA_CONFIG_PATH), + new Route(RestRequest.Method.POST, PA_CONFIG_PATH), + new 
Route(RestRequest.Method.GET, RCA_CONFIG_PATH), + new Route(RestRequest.Method.POST, RCA_CONFIG_PATH), + new Route(RestRequest.Method.GET, LOGGING_CONFIG_PATH), + new Route(RestRequest.Method.POST, LOGGING_CONFIG_PATH))); + private final PerformanceAnalyzerController performanceAnalyzerController; - public static PerformanceAnalyzerConfigAction getInstance() { - return instance; - } + public static PerformanceAnalyzerConfigAction getInstance() { + return instance; + } - public static void setInstance(PerformanceAnalyzerConfigAction performanceanalyzerConfigAction) { - instance = performanceanalyzerConfigAction; - } + public static void setInstance(PerformanceAnalyzerConfigAction performanceanalyzerConfigAction) { + instance = performanceanalyzerConfigAction; + } - @Inject - public PerformanceAnalyzerConfigAction(final RestController controller, - final PerformanceAnalyzerController performanceAnalyzerController) { - super(); - this.performanceAnalyzerController = performanceAnalyzerController; - LOG.info("PerformanceAnalyzer Enabled: {}", performanceAnalyzerController::isPerformanceAnalyzerEnabled); - } - @Override - public List routes() { - return ROUTES; - } + @Inject + public PerformanceAnalyzerConfigAction( + final RestController controller, + final PerformanceAnalyzerController performanceAnalyzerController) { + super(); + this.performanceAnalyzerController = performanceAnalyzerController; + LOG.info( + "PerformanceAnalyzer Enabled: {}", + performanceAnalyzerController::isPerformanceAnalyzerEnabled); + } - @Override - protected RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException { - if (request.method() == RestRequest.Method.POST && request.content().length() > 0) { - // Let's try to find the name from the body - Map map = XContentHelper.convertToMap(request.content(), false).v2(); - Object value = map.get(ENABLED); - LOG.debug("PerformanceAnalyzer:Value (Object) Received as Part of Request: {} current value: {}", value, - performanceAnalyzerController.isPerformanceAnalyzerEnabled()); - if (value instanceof Boolean) { - boolean shouldEnable = (Boolean) value; - if (request.path().contains(RCA_CONFIG_PATH)) { - // If RCA needs to be turned on, we need to have PA turned on also. - // If this is not the case, return error. - if (shouldEnable && !performanceAnalyzerController.isPerformanceAnalyzerEnabled()) { - return getChannelConsumerWithError("Error: PA not enabled. Enable PA before turning RCA on"); - } + @Override + public List routes() { + return ROUTES; + } - performanceAnalyzerController.updateRcaState(shouldEnable); - } else if (request.path().contains(LOGGING_CONFIG_PATH)) { - if (shouldEnable && !performanceAnalyzerController.isPerformanceAnalyzerEnabled()) { - return getChannelConsumerWithError("Error: PA not enabled. 
Enable PA before turning Logging on"); - } + @Override + protected RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) + throws IOException { + if (request.method() == RestRequest.Method.POST && request.content().length() > 0) { + // Let's try to find the name from the body + Map map = XContentHelper.convertToMap(request.content(), false).v2(); + Object value = map.get(ENABLED); + LOG.debug( + "PerformanceAnalyzer:Value (Object) Received as Part of Request: {} current value: {}", + value, + performanceAnalyzerController.isPerformanceAnalyzerEnabled()); + if (value instanceof Boolean) { + boolean shouldEnable = (Boolean) value; + if (request.path().contains(RCA_CONFIG_PATH)) { + // If RCA needs to be turned on, we need to have PA turned on also. + // If this is not the case, return error. + if (shouldEnable && !performanceAnalyzerController.isPerformanceAnalyzerEnabled()) { + return getChannelConsumerWithError( + "Error: PA not enabled. Enable PA before turning RCA on"); + } - performanceAnalyzerController.updateLoggingState(shouldEnable); - } else { - // Disabling Performance Analyzer should disable the RCA framework as well. - if (!shouldEnable) { - performanceAnalyzerController.updateRcaState(false); - performanceAnalyzerController.updateLoggingState(false); - } - performanceAnalyzerController.updatePerformanceAnalyzerState(shouldEnable); - } - } - // update node stats setting if exists - if (map.containsKey(SHARDS_PER_COLLECTION)) { - Object shardPerCollectionValue = map.get(SHARDS_PER_COLLECTION); - if (shardPerCollectionValue instanceof Integer) { - performanceAnalyzerController.updateNodeStatsShardsPerCollection((Integer)shardPerCollectionValue); - } - } - } + performanceAnalyzerController.updateRcaState(shouldEnable); + } else if (request.path().contains(LOGGING_CONFIG_PATH)) { + if (shouldEnable && !performanceAnalyzerController.isPerformanceAnalyzerEnabled()) { + return getChannelConsumerWithError( + "Error: PA not enabled. Enable PA before turning Logging on"); + } - return channel -> { - try { - XContentBuilder builder = channel.newBuilder(); - builder.startObject(); - builder.field(PA_ENABLED, performanceAnalyzerController.isPerformanceAnalyzerEnabled()); - builder.field(RCA_ENABLED, performanceAnalyzerController.isRcaEnabled()); - builder.field(PA_LOGGING_ENABLED, performanceAnalyzerController.isLoggingEnabled()); - builder.field(SHARDS_PER_COLLECTION, performanceAnalyzerController.getNodeStatsShardsPerCollection()); - builder.endObject(); - channel.sendResponse(new BytesRestResponse(RestStatus.OK, builder)); - } catch (IOException ioe) { - LOG.error("Error sending response", ioe); - } - }; + performanceAnalyzerController.updateLoggingState(shouldEnable); + } else { + // Disabling Performance Analyzer should disable the RCA framework as well. 
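The comment above captures the contract this handler enforces on the node-local endpoints: RCA and logging can only be switched on while PA is enabled, and switching PA off also switches RCA and logging off (the branch follows below). A hedged sketch of the resulting call order from a client's point of view, reusing the low-level RestClient setup from the earlier sketch; only the request sequence is shown.

import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public final class NodeConfigOrderingSketch {
  /** PA first, then RCA; the reverse order is rejected with HTTP 400. */
  static void enablePaThenRca(RestClient client) throws Exception {
    Request enablePa = new Request("POST", "/_opendistro/_performanceanalyzer/config");
    enablePa.setJsonEntity("{\"enabled\": true}");
    client.performRequest(enablePa);

    // Allowed only once PA is on; otherwise the handler answers with
    // "Error: PA not enabled. Enable PA before turning RCA on".
    Request enableRca = new Request("POST", "/_opendistro/_performanceanalyzer/rca/config");
    enableRca.setJsonEntity("{\"enabled\": true}");
    client.performRequest(enableRca);
  }

  /** Disabling PA cascades: the handler turns RCA and logging off as well. */
  static void disableEverything(RestClient client) throws Exception {
    Request disablePa = new Request("POST", "/_opendistro/_performanceanalyzer/config");
    disablePa.setJsonEntity("{\"enabled\": false}");
    client.performRequest(disablePa);
  }
}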
+ if (!shouldEnable) { + performanceAnalyzerController.updateRcaState(false); + performanceAnalyzerController.updateLoggingState(false); + } + performanceAnalyzerController.updatePerformanceAnalyzerState(shouldEnable); + } + } + // update node stats setting if exists + if (map.containsKey(SHARDS_PER_COLLECTION)) { + Object shardPerCollectionValue = map.get(SHARDS_PER_COLLECTION); + if (shardPerCollectionValue instanceof Integer) { + performanceAnalyzerController.updateNodeStatsShardsPerCollection( + (Integer) shardPerCollectionValue); + } + } } - @Override - public String getName() { - return "PerformanceAnalyzer_Config_Action"; - } + return channel -> { + try { + XContentBuilder builder = channel.newBuilder(); + builder.startObject(); + builder.field(PA_ENABLED, performanceAnalyzerController.isPerformanceAnalyzerEnabled()); + builder.field(RCA_ENABLED, performanceAnalyzerController.isRcaEnabled()); + builder.field(PA_LOGGING_ENABLED, performanceAnalyzerController.isLoggingEnabled()); + builder.field( + SHARDS_PER_COLLECTION, performanceAnalyzerController.getNodeStatsShardsPerCollection()); + builder.endObject(); + channel.sendResponse(new BytesRestResponse(RestStatus.OK, builder)); + } catch (IOException ioe) { + LOG.error("Error sending response", ioe); + } + }; + } - private RestChannelConsumer getChannelConsumerWithError(String error) { - return restChannel -> { - XContentBuilder builder = restChannel.newErrorBuilder(); - builder.startObject(); - builder.field(error); - builder.endObject(); - restChannel.sendResponse(new BytesRestResponse(RestStatus.BAD_REQUEST, builder)); - }; - } + @Override + public String getName() { + return "PerformanceAnalyzer_Config_Action"; + } + + private RestChannelConsumer getChannelConsumerWithError(String error) { + return restChannel -> { + XContentBuilder builder = restChannel.newErrorBuilder(); + builder.startObject(); + builder.field(error); + builder.endObject(); + restChannel.sendResponse(new BytesRestResponse(RestStatus.BAD_REQUEST, builder)); + }; + } } diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerOverridesClusterConfigAction.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerOverridesClusterConfigAction.java new file mode 100644 index 00000000..60e2c501 --- /dev/null +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerOverridesClusterConfigAction.java @@ -0,0 +1,208 @@ +/* + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. 
+ */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.http_action.config; + +import static java.util.Arrays.asList; +import static java.util.Collections.unmodifiableList; + +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverrides; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesHelper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesWrapper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler.ConfigOverridesClusterSettingHandler; +import java.util.List; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.elasticsearch.client.node.NodeClient; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.rest.BaseRestHandler; +import org.elasticsearch.rest.BytesRestResponse; +import org.elasticsearch.rest.RestController; +import org.elasticsearch.rest.RestRequest; +import org.elasticsearch.rest.RestStatus; + +import java.io.IOException; +import java.util.Collections; + +/** Rest request handler for handling config overrides for various performance analyzer features. */ +public class PerformanceAnalyzerOverridesClusterConfigAction extends BaseRestHandler { + + private static final Logger LOG = + LogManager.getLogger(PerformanceAnalyzerOverridesClusterConfigAction.class); + private static final String PA_CONFIG_OVERRIDES_PATH = + "/_opendistro/_performanceanalyzer/override/cluster/config"; + private static final String OVERRIDES_FIELD = "overrides"; + private static final String REASON_FIELD = "reason"; + private static final String OVERRIDE_TRIGGERED_FIELD = "override triggered"; + + private static final List ROUTES = + unmodifiableList( + asList( + new Route(RestRequest.Method.GET, PA_CONFIG_OVERRIDES_PATH), + new Route(RestRequest.Method.POST, PA_CONFIG_OVERRIDES_PATH))); + + private final ConfigOverridesClusterSettingHandler configOverridesClusterSettingHandler; + private final ConfigOverridesWrapper overridesWrapper; + + public PerformanceAnalyzerOverridesClusterConfigAction( + final Settings settings, + final RestController restController, + final ConfigOverridesClusterSettingHandler configOverridesClusterSettingHandler, + final ConfigOverridesWrapper overridesWrapper) { + super(); + this.configOverridesClusterSettingHandler = configOverridesClusterSettingHandler; + this.overridesWrapper = overridesWrapper; + } + + @Override + public List routes() { + return ROUTES; + } + + /** @return the name of this handler. */ + @Override + public String getName() { + return PerformanceAnalyzerOverridesClusterConfigAction.class.getSimpleName(); + } + + /** + * Prepare the request for execution. Implementations should consume all request params before + * returning the runnable for actual execution. Unconsumed params will immediately terminate + * execution of the request. However, some params are only used in processing the response; + * implementations can override {@link BaseRestHandler#responseParams()} to indicate such params. 
+ * + * @param request the request to execute + * @param client client for executing actions on the local node + * @return the action to execute + * @throws IOException if an I/O exception occurred parsing the request and preparing for + * execution + */ + @Override + protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) + throws IOException { + RestChannelConsumer consumer; + if (request.method() == RestRequest.Method.GET) { + consumer = handleGet(); + } else if (request.method() == RestRequest.Method.POST) { + consumer = handlePost(request); + } else { + String reason = + "Unsupported method:" + request.method().toString() + " Supported: [GET, POST]"; + consumer = sendErrorResponse(reason, RestStatus.METHOD_NOT_ALLOWED); + } + + return consumer; + } + + /** + * Handler for the GET method. + * + * @return RestChannelConsumer that sends the current config overrides when run. + */ + private RestChannelConsumer handleGet() { + return channel -> { + try { + final ConfigOverrides overrides = overridesWrapper.getCurrentClusterConfigOverrides(); + XContentBuilder builder = channel.newBuilder(); + builder.startObject(); + builder.field(OVERRIDES_FIELD, ConfigOverridesHelper.serialize(overrides)); + builder.endObject(); + channel.sendResponse(new BytesRestResponse(RestStatus.OK, builder)); + } catch (IOException ioe) { + LOG.error("Error sending response", ioe); + } + }; + } + + /** + * Handler for the POST method. + * + * @param request The POST request. + * @return RestChannelConsumer that updates the cluster setting with the requested config + * overrides when run. + * @throws IOException if an exception occurs trying to parse or execute the request. + */ + private RestChannelConsumer handlePost(final RestRequest request) throws IOException { + String jsonString = XContentHelper.convertToJson(request.content(), false, XContentType.JSON); + ConfigOverrides requestedOverrides = ConfigOverridesHelper.deserialize(jsonString); + + if (!validateOverrides(requestedOverrides)) { + String reason = "enable set and disable set should be disjoint"; + return sendErrorResponse(reason, RestStatus.BAD_REQUEST); + } + + configOverridesClusterSettingHandler.updateConfigOverrides(requestedOverrides); + return channel -> { + XContentBuilder builder = channel.newBuilder(); + builder.startObject(); + builder.field(OVERRIDE_TRIGGERED_FIELD, true); + builder.endObject(); + channel.sendResponse(new BytesRestResponse(RestStatus.OK, builder)); + }; + } + + private boolean validateOverrides(final ConfigOverrides requestedOverrides) { + boolean isValid = true; + + // Check if we have both enable and disable components + if (requestedOverrides.getDisable() == null || requestedOverrides.getEnable() == null) { + return true; + } + + // Check if any RCA nodes are present in both enabled and disabled lists. + if (requestedOverrides.getEnable().getRcas() != null + && requestedOverrides.getDisable().getRcas() != null) { + isValid = + Collections.disjoint( + requestedOverrides.getEnable().getRcas(), requestedOverrides.getDisable().getRcas()); + } + + // Check if any deciders are present in both enabled and disabled lists. + if (isValid + && requestedOverrides.getEnable().getDeciders() != null + && requestedOverrides.getDisable().getDeciders() != null) { + isValid = + Collections.disjoint( + requestedOverrides.getEnable().getDeciders(), + requestedOverrides.getDisable().getDeciders()); + } + + // Check if any remediation actions are in both enabled and disabled lists. 
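Across RCAs, deciders, and the actions check that continues just below, validateOverrides enforces a single rule: a component may not appear in both the enable and the disable lists of the same request; otherwise the handler responds with HTTP 400 and the reason "enable set and disable set should be disjoint". A small self-contained sketch of that check; the RCA names are made up for illustration.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public final class DisjointOverridesSketch {
  public static void main(String[] args) {
    List<String> enableRcas = Arrays.asList("HotNodeRca", "HighHeapUsageRca"); // hypothetical names
    List<String> disableRcas = Arrays.asList("HotNodeRca");                    // overlaps with enable

    boolean isValid = Collections.disjoint(enableRcas, disableRcas); // false here
    System.out.println(isValid ? "override accepted" : "override rejected with BAD_REQUEST");
  }
}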
+ if (isValid + && requestedOverrides.getEnable().getActions() != null + && requestedOverrides.getDisable().getActions() != null) { + isValid = + Collections.disjoint( + requestedOverrides.getEnable().getActions(), + requestedOverrides.getDisable().getActions()); + } + + return isValid; + } + + private RestChannelConsumer sendErrorResponse(final String reason, final RestStatus status) { + return channel -> { + XContentBuilder errorBuilder = channel.newErrorBuilder(); + errorBuilder.startObject(); + errorBuilder.field(REASON_FIELD, reason); + errorBuilder.endObject(); + + channel.sendResponse(new BytesRestResponse(status, errorBuilder)); + }; + } +} diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerResourceProvider.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerResourceProvider.java index 0dd297a2..b55621eb 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerResourceProvider.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/config/PerformanceAnalyzerResourceProvider.java @@ -73,19 +73,18 @@ public PerformanceAnalyzerResourceProvider(Settings settings, RestController con if (isHttpsEnabled) { // skip host name verification // Create a trust manager that does not validate certificate chains - TrustManager[] trustAllCerts = new TrustManager[]{ - new X509TrustManager() { - public java.security.cert.X509Certificate[] getAcceptedIssuers() { - return null; - } + TrustManager[] trustAllCerts = + new TrustManager[] { + new X509TrustManager() { + public java.security.cert.X509Certificate[] getAcceptedIssuers() { + return null; + } - public void checkClientTrusted(X509Certificate[] certs, String authType) { - } + public void checkClientTrusted(X509Certificate[] certs, String authType) {} - public void checkServerTrusted(X509Certificate[] certs, String authType) { - } - } - }; + public void checkServerTrusted(X509Certificate[] certs, String authType) {} + } + }; // Install the all-trusting trust manager try { @@ -93,7 +92,9 @@ public void checkServerTrusted(X509Certificate[] certs, String authType) { sc.init(null, trustAllCerts, new java.security.SecureRandom()); HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory()); } catch (AccessControlException e) { - LOG.warn("SecurityManager forbids setting default SSL Socket Factory...using default settings", e); + LOG.warn( + "SecurityManager forbids setting default SSL Socket Factory...using default settings", + e); } catch (Exception e) { LOG.warn("Error encountered while initializing SSLContext...using default settings", e); } @@ -104,9 +105,12 @@ public void checkServerTrusted(X509Certificate[] certs, String authType) { try { HttpsURLConnection.setDefaultHostnameVerifier(allHostsValid); } catch (AccessControlException e) { - LOG.warn("SecurityManager forbids setting default hostname verifier...using default settings", e); + LOG.warn( + "SecurityManager forbids setting default hostname verifier...using default settings", + e); } catch (Exception e) { - LOG.warn("Error encountered while initializing hostname verifier...using default settings", e); + LOG.warn( + "Error encountered while initializing hostname verifier...using default settings", e); } } } @@ -115,16 +119,15 @@ public String getName() { return "PerformanceAnalyzer_ResourceProvider"; } - /** - * {@inheritDoc} - */ + /** {@inheritDoc} */ 
@Override public List routes() { - return ROUTES; + return ROUTES; } @Override - protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) throws IOException { + protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient client) + throws IOException { StringBuilder response = new StringBuilder(); String inputLine; int responseCode; @@ -137,12 +140,14 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli channel.sendResponse(finalResponse); }; } else { - HttpURLConnection httpURLConnection = isHttpsEnabled ? createHttpsURLConnection(url) : - createHttpURLConnection(url); - //Build Response in buffer + HttpURLConnection httpURLConnection = + isHttpsEnabled ? createHttpsURLConnection(url) : createHttpURLConnection(url); + // Build Response in buffer responseCode = httpURLConnection.getResponseCode(); - InputStream inputStream = (responseCode == HttpsURLConnection.HTTP_OK) ? - httpURLConnection.getInputStream() : httpURLConnection.getErrorStream(); + InputStream inputStream = + (responseCode == HttpsURLConnection.HTTP_OK) + ? httpURLConnection.getInputStream() + : httpURLConnection.getErrorStream(); try (BufferedReader in = new BufferedReader(new InputStreamReader(inputStream))) { while ((inputLine = in.readLine()) != null) { @@ -152,13 +157,15 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli } catch (Exception ex) { LOG.error("Error receiving response for Request Uri {} - {}", request.uri(), ex); return channel -> { - channel.sendResponse(new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR, - "Encountered error possibly with downstream APIs")); + channel.sendResponse( + new BytesRestResponse( + RestStatus.INTERNAL_SERVER_ERROR, + "Encountered error possibly with downstream APIs")); }; } - RestResponse finalResponse = new BytesRestResponse(RestStatus.fromCode(responseCode), - String.valueOf(response)); + RestResponse finalResponse = + new BytesRestResponse(RestStatus.fromCode(responseCode), String.valueOf(response)); LOG.debug("finalResponse: {}", finalResponse); return channel -> { @@ -167,11 +174,12 @@ protected RestChannelConsumer prepareRequest(RestRequest request, NodeClient cli for (Map.Entry> entry : map.entrySet()) { finalResponse.addHeader(entry.getKey(), entry.getValue().toString()); } - //Send Response back to callee + // Send Response back to callee channel.sendResponse(finalResponse); } catch (Exception ex) { LOG.error("Error sending response", ex); - channel.sendResponse(new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR, "Something went wrong")); + channel.sendResponse( + new BytesRestResponse(RestStatus.INTERNAL_SERVER_ERROR, "Something went wrong")); } }; } @@ -204,7 +212,8 @@ void setPortNumber(String portNumber) { public URL getAgentUri(RestRequest request) throws IOException { String redirectEndpoint = request.param("redirectEndpoint"); String urlScheme = isHttpsEnabled ? 
"https://" : "http://"; - String redirectBasePath = urlScheme + "localhost:" + portNumber + "/_opendistro/_performanceanalyzer/"; + String redirectBasePath = + urlScheme + "localhost:" + portNumber + "/_opendistro/_performanceanalyzer/"; // Need to register all params in ES request else es throws illegal_argument_exception for (String key : request.params().keySet()) { request.param(key); diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/whoami/WhoAmIAction.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/whoami/WhoAmIAction.java index c8765a48..f6d7290e 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/whoami/WhoAmIAction.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/http_action/whoami/WhoAmIAction.java @@ -20,9 +20,9 @@ public class WhoAmIAction extends ActionType { - public static final WhoAmIAction INSTANCE = new WhoAmIAction(); public static final String NAME = "cluster:admin/performanceanalyzer/whoami"; public static final Writeable.Reader responseReader = null; + public static final WhoAmIAction INSTANCE = new WhoAmIAction(); private WhoAmIAction() { super(NAME, responseReader); diff --git a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/util/Utils.java b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/util/Utils.java index e36e00eb..a8a7fefc 100644 --- a/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/util/Utils.java +++ b/src/main/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/util/Utils.java @@ -1,24 +1,108 @@ +/* + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. 
+ */ + package com.amazon.opendistro.elasticsearch.performanceanalyzer.util; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.CacheConfigMetricsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsConfiguration; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.CircuitBreakerCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.MasterServiceEventMetrics; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.MasterServiceMetrics; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeDetailsCollector; -import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsMetricsCollector; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsAllShardsMetricsCollector; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsFixedShardsMetricsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.ThreadPoolMetricsCollector; +import org.elasticsearch.action.admin.indices.stats.CommonStats; +import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags; +import org.elasticsearch.action.admin.indices.stats.IndexShardStats; +import org.elasticsearch.action.admin.indices.stats.ShardStats; +import org.elasticsearch.index.IndexService; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.shard.IndexShardState; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.IndicesService; + +import java.util.EnumSet; +import java.util.HashMap; +import java.util.Iterator; public class Utils { public static void configureMetrics() { MetricsConfiguration.MetricConfig cdefault = MetricsConfiguration.cdefault ; + MetricsConfiguration.CONFIG_MAP.put(CacheConfigMetricsCollector.class, cdefault); MetricsConfiguration.CONFIG_MAP.put(CircuitBreakerCollector.class, cdefault); MetricsConfiguration.CONFIG_MAP.put(ThreadPoolMetricsCollector.class, cdefault); MetricsConfiguration.CONFIG_MAP.put(NodeDetailsCollector.class, cdefault); - MetricsConfiguration.CONFIG_MAP.put(NodeStatsMetricsCollector.class, cdefault); + MetricsConfiguration.CONFIG_MAP.put(NodeStatsAllShardsMetricsCollector.class, cdefault); + MetricsConfiguration.CONFIG_MAP.put(NodeStatsFixedShardsMetricsCollector.class, cdefault); MetricsConfiguration.CONFIG_MAP.put(MasterServiceEventMetrics.class, new MetricsConfiguration.MetricConfig(1000, 0, 0)); MetricsConfiguration.CONFIG_MAP.put(MasterServiceMetrics.class, cdefault); } + // These methods are utility functions for the Node Stat Metrics Collectors. These methods are used by both the all + // shards collector and the few shards collector. + + /** + * This function is copied directly from IndicesService.java in elastic search as the original function is not public + * we need to collect stats per shard based instead of calling the stat() function to fetch all at once(which increases + * cpu usage on data nodes dramatically). + * @param indicesService Indices Services which keeps tracks of the indexes on the node + * @param indexShard Shard to fetch the metrics for + * @param flags The Metrics Buckets which needs to be fetched. + * @return stats given in the flags param for the shard given in the indexShard param. 
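To make the motivation above concrete, here is a hedged sketch of how a collector could combine the getShards() and indexShardStats helpers defined just below, requesting only a narrow set of stats buckets per shard instead of one IndicesService-wide stats() call. The class name and the choice of flags are illustrative, and getShards() is assumed to map the unique shard-id key to its IndexShard as getUniqueShardIdKey suggests; the real consumers in this change are NodeStatsAllShardsMetricsCollector and NodeStatsFixedShardsMetricsCollector.

import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources;
import com.amazon.opendistro.elasticsearch.performanceanalyzer.util.Utils;
import java.util.Map;
import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;
import org.elasticsearch.action.admin.indices.stats.IndexShardStats;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.indices.IndicesService;

public final class PerShardStatsSketch {
  public static void collectDocCounts() {
    IndicesService indicesService = ESResources.INSTANCE.getIndicesService();
    // Ask for a limited bucket set; fetching everything per shard is what we want to avoid.
    CommonStatsFlags flags =
        new CommonStatsFlags(CommonStatsFlags.Flag.Docs, CommonStatsFlags.Flag.Store);
    for (Map.Entry<String, IndexShard> entry : Utils.getShards().entrySet()) {
      IndexShardStats stats = Utils.indexShardStats(indicesService, entry.getValue(), flags);
      if (stats == null) {
        continue; // shard had no routing entry yet
      }
      // A real collector would write these values out through PerformanceAnalyzerMetrics.
      System.out.println(entry.getKey() + " docs=" + stats.getTotal().getDocs().getCount());
    }
  }
}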
+ */ + public static IndexShardStats indexShardStats(final IndicesService indicesService, final IndexShard indexShard, + final CommonStatsFlags flags) { + if (indexShard.routingEntry() == null) { + return null; + } + + return new IndexShardStats( + indexShard.shardId(), + new ShardStats[]{ + new ShardStats( + indexShard.routingEntry(), + indexShard.shardPath(), + new CommonStats(indicesService.getIndicesQueryCache(), indexShard, flags), + null, + null, + null) + }); + } + + public static HashMap getShards() { + HashMap shards = new HashMap<>(); + Iterator indexServices = ESResources.INSTANCE.getIndicesService().iterator(); + while (indexServices.hasNext()) { + Iterator indexShards = indexServices.next().iterator(); + while (indexShards.hasNext()) { + IndexShard shard = indexShards.next(); + shards.put(getUniqueShardIdKey(shard.shardId()), shard); + } + } + return shards; + } + + public static String getUniqueShardIdKey(ShardId shardId) { + return "[" + shardId.hashCode() + "][" + shardId.getId() + "]"; + } + + public static final EnumSet CAN_WRITE_INDEX_BUFFER_STATES = EnumSet.of( + IndexShardState.RECOVERING, IndexShardState.POST_RECOVERY, IndexShardState.STARTED); + } diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerIT.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerIT.java index 834750d5..00afdbc3 100644 --- a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerIT.java +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/PerformanceAnalyzerIT.java @@ -4,6 +4,7 @@ import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; +import java.util.Objects; import org.apache.http.HttpHost; import org.apache.http.HttpStatus; import org.apache.http.util.EntityUtils; @@ -14,7 +15,7 @@ import org.elasticsearch.client.RestClient; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.test.rest.ESRestTestCase; -import org.junit.AfterClass; +import org.junit.After; import org.junit.Assert; import org.junit.Before; import org.junit.Test; @@ -28,7 +29,7 @@ public class PerformanceAnalyzerIT extends ESRestTestCase { private static final Logger LOG = LogManager.getLogger(PerformanceAnalyzerIT.class); - private static final int PORT = 9600; + private static final int PORT = Integer.parseInt(System.getProperty("tests.pa.port")); private static final ObjectMapper mapper = new ObjectMapper(); private static RestClient paClient; @@ -81,7 +82,8 @@ public static void ensurePaAndRcaEnabled() throws Exception { Response resp = client().performRequest(new Request("GET", "_opendistro/_performanceanalyzer/cluster/config")); Map respMap = mapper.readValue(EntityUtils.toString(resp.getEntity(), "UTF-8"), new TypeReference>(){}); - if (respMap.get("currentPerformanceAnalyzerClusterState").equals(3)) { + if (respMap.get("currentPerformanceAnalyzerClusterState").equals(3) && + !respMap.get("currentPerformanceAnalyzerClusterState").equals(7)) { break; } Thread.sleep(1000L); @@ -89,7 +91,8 @@ public static void ensurePaAndRcaEnabled() throws Exception { Response resp = client().performRequest(new Request("GET", "_opendistro/_performanceanalyzer/cluster/config")); Map respMap = mapper.readValue(EntityUtils.toString(resp.getEntity(), "UTF-8"), new TypeReference>(){}); - if (!respMap.get("currentPerformanceAnalyzerClusterState").equals(3)) { + if 
(!respMap.get("currentPerformanceAnalyzerClusterState").equals(3) && + !respMap.get("currentPerformanceAnalyzerClusterState").equals(7)) { throw new Exception("PA and RCA are not enabled on the target cluster!"); } } @@ -128,8 +131,22 @@ public void checkMetrics() throws Exception { }); } - @AfterClass - public static void closePaClient() throws Exception { + @Test + public void testRcaIsRunning() throws Exception { + ensurePaAndRcaEnabled(); + WaitFor.waitFor(() -> { + Request request = new Request("GET", "/_opendistro/_performanceanalyzer/rca"); + try { + Response resp = paClient.performRequest(request); + return Objects.equals(HttpStatus.SC_OK, resp.getStatusLine().getStatusCode()); + } catch (Exception e) { // 404, RCA context hasn't been set up yet + return false; + } + }, 2, TimeUnit.MINUTES); + } + + @After + public void closePaClient() throws Exception { ESRestTestCase.closeClients(); paClient.close(); LOG.debug("AfterClass has run"); diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/JsonKeyTests.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/JsonKeyTests.java index f911caef..e24d04c7 100644 --- a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/JsonKeyTests.java +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/JsonKeyTests.java @@ -23,21 +23,23 @@ import java.util.Set; import java.util.function.Function; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.CacheConfigMetricsCollector.CacheMaxSizeStatus; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.CircuitBreakerCollector.CircuitBreakerStatus; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.HeapMetricsCollector.HeapStatus; -import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.MasterServiceMetrics.MasterPendingStatus; -import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsMetricsCollector.NodeStatsMetricsStatus; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsAllShardsMetricsCollector.NodeStatsMetricsAllShardsPerCollectionStatus; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsFixedShardsMetricsCollector.NodeStatsMetricsFixedShardsPerCollectionStatus; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.ThreadPoolMetricsCollector.ThreadPoolStatus; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CircuitBreakerDimension; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CircuitBreakerValue; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CacheConfigDimension; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CacheConfigValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.DiskDimension; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.DiskValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.HeapDimension; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.HeapValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.IPDimension; import 
com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.IPValue; -import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.MasterPendingValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.ShardStatsValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.TCPDimension; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.TCPValue; @@ -82,7 +84,10 @@ public class JsonKeyTests { @Test public void testJsonKeyNames() throws NoSuchFieldException, SecurityException { - + verifyMethodWithJsonKeyNames(CacheMaxSizeStatus.class, + CacheConfigDimension.values(), + CacheConfigValue.values(), + getMethodJsonProperty); verifyMethodWithJsonKeyNames(CircuitBreakerStatus.class, CircuitBreakerDimension.values(), CircuitBreakerValue.values(), @@ -100,12 +105,11 @@ public void testJsonKeyNames() throws NoSuchFieldException, verifyMethodWithJsonKeyNames(ThreadPoolStatus.class, ThreadPoolDimension.values(), ThreadPoolValue.values(), getMethodJsonProperty); - verifyMethodWithJsonKeyNames(NodeStatsMetricsStatus.class, + verifyMethodWithJsonKeyNames(NodeStatsMetricsAllShardsPerCollectionStatus.class, new MetricDimension[] {}, ShardStatsValue.values(), getMethodJsonProperty); - verifyMethodWithJsonKeyNames(MasterPendingStatus.class, - new MetricDimension[] {}, - MasterPendingValue.values(), + verifyMethodWithJsonKeyNames(NodeStatsMetricsFixedShardsPerCollectionStatus.class, + new MetricDimension[] {}, ShardStatsValue.values(), getMethodJsonProperty); verifyNodeDetailJsonKeyNames(); } @@ -124,16 +128,21 @@ private void verifyMethodWithJsonKeyNames( } } - assertTrue(dimensions.length + metrics.length == jsonKeySet.size()); + assertTrue(dimensions.length + metrics.length >= jsonKeySet.size()); for (MetricDimension d : dimensions) { assertTrue(String.format("We need %s", d.toString()), jsonKeySet.contains(d.toString())); + jsonKeySet.remove(d.toString()); } - for (MetricValue v : metrics) { - assertTrue(String.format("We need %s", v.toString()), - jsonKeySet.contains(v.toString())); + Set s = new HashSet<>(); + for (MetricValue m : metrics) { + s.add(m.toString()); + } + for (String v : jsonKeySet) { + assertTrue(String.format("We need %s", v), + s.contains(v)); } } diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsMetricsCollectorTests.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollectorTests.java similarity index 74% rename from src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsMetricsCollectorTests.java rename to src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollectorTests.java index 6835020c..0778ad7f 100644 --- a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsMetricsCollectorTests.java +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsAllShardsMetricsCollectorTests.java @@ -1,5 +1,5 @@ /* - * Copyright <2019> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). * You may not use this file except in compliance with the License. 
@@ -26,17 +26,17 @@ import static org.junit.Assert.assertTrue; @Ignore -public class NodeStatsMetricsCollectorTests extends CustomMetricsLocationTestBase { +public class NodeStatsAllShardsMetricsCollectorTests extends CustomMetricsLocationTestBase { @Test public void testNodeStatsMetrics() { System.setProperty("performanceanalyzer.metrics.log.enabled", "False"); long startTimeInMills = 1253722339; - MetricsConfiguration.CONFIG_MAP.put(NodeStatsMetricsCollector.class, MetricsConfiguration.cdefault); + MetricsConfiguration.CONFIG_MAP.put(NodeStatsAllShardsMetricsCollector.class, MetricsConfiguration.cdefault); - NodeStatsMetricsCollector nodeStatsMetricsCollector = new NodeStatsMetricsCollector(null); - nodeStatsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex", "55"); + NodeStatsAllShardsMetricsCollector nodeStatsAllShardsMetricsCollector = new NodeStatsAllShardsMetricsCollector(null); + nodeStatsAllShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex", "55"); String fetchedValue = PerformanceAnalyzerMetrics.getMetric( @@ -47,28 +47,28 @@ public void testNodeStatsMetrics() { assertEquals("89123.23", fetchedValue); try { - nodeStatsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex"); + nodeStatsAllShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex"); assertTrue("Negative scenario test: Should have been a RuntimeException", true); } catch (RuntimeException ex) { //- expecting exception...only 1 values passed; 2 expected } try { - nodeStatsMetricsCollector.saveMetricValues("89123.23", startTimeInMills); + nodeStatsAllShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills); assertTrue("Negative scenario test: Should have been a RuntimeException", true); } catch (RuntimeException ex) { //- expecting exception...only 0 values passed; 2 expected } try { - nodeStatsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex", "55", "123"); + nodeStatsAllShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex", "55", "123"); assertTrue("Negative scenario test: Should have been a RuntimeException", true); } catch (RuntimeException ex) { //- expecting exception...only 3 values passed; 2 expected } try { - nodeStatsMetricsCollector.getNodeIndicesStatsByShardField(); + nodeStatsAllShardsMetricsCollector.getNodeIndicesStatsByShardField(); } catch (Exception exception) { assertTrue("There shouldn't be any exception in the code; Please check the reflection code for any changes", true); } diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollectorTests.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollectorTests.java new file mode 100644 index 00000000..ceed7f10 --- /dev/null +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/collectors/NodeStatsFixedShardsMetricsCollectorTests.java @@ -0,0 +1,78 @@ +/* + * Copyright <2019> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. 
This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. + */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors; + +import org.junit.Ignore; +import org.junit.Test; + +import com.amazon.opendistro.elasticsearch.performanceanalyzer.CustomMetricsLocationTestBase; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.PluginSettings; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricsConfiguration; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.PerformanceAnalyzerMetrics; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +@Ignore +public class NodeStatsFixedShardsMetricsCollectorTests extends CustomMetricsLocationTestBase { + + @Test + public void testNodeStatsMetrics() { + System.setProperty("performanceanalyzer.metrics.log.enabled", "False"); + long startTimeInMills = 1253722339; + + MetricsConfiguration.CONFIG_MAP.put(NodeStatsFixedShardsMetricsCollector.class, MetricsConfiguration.cdefault); + + NodeStatsFixedShardsMetricsCollector nodeStatsFixedShardsMetricsCollector = new NodeStatsFixedShardsMetricsCollector(null); + nodeStatsFixedShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex", "55"); + + + String fetchedValue = PerformanceAnalyzerMetrics.getMetric( + PluginSettings.instance().getMetricsLocation() + + PerformanceAnalyzerMetrics.getTimeInterval(startTimeInMills)+"/indices/NodesStatsIndex/55/"); + PerformanceAnalyzerMetrics.removeMetrics(PluginSettings.instance().getMetricsLocation() + + PerformanceAnalyzerMetrics.getTimeInterval(startTimeInMills)); + assertEquals("89123.23", fetchedValue); + + + try { + nodeStatsFixedShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex"); + assertTrue("Negative scenario test: Should have been a RuntimeException", true); + } catch (RuntimeException ex) { + //- expecting exception...only 1 values passed; 2 expected + } + + try { + nodeStatsFixedShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills); + assertTrue("Negative scenario test: Should have been a RuntimeException", true); + } catch (RuntimeException ex) { + //- expecting exception...only 0 values passed; 2 expected + } + + try { + nodeStatsFixedShardsMetricsCollector.saveMetricValues("89123.23", startTimeInMills, "NodesStatsIndex", "55", "123"); + assertTrue("Negative scenario test: Should have been a RuntimeException", true); + } catch (RuntimeException ex) { + //- expecting exception...only 3 values passed; 2 expected + } + + try { + nodeStatsFixedShardsMetricsCollector.getNodeIndicesStatsByShardField(); + } catch (Exception exception) { + assertTrue("There shouldn't be any exception in the code; Please check the reflection code for any changes", true); + } + + } +} diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/ConfigOverridesTestHelper.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/ConfigOverridesTestHelper.java new file mode 100644 index 00000000..4677464b --- /dev/null +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/ConfigOverridesTestHelper.java @@ -0,0 +1,57 @@ +/* + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. + */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.config; + +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverrides; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; + +import java.util.Arrays; +import java.util.List; + +public class ConfigOverridesTestHelper { + private static final ObjectMapper MAPPER = new ObjectMapper(); + public static final String RCA1 = "rca1"; + public static final String RCA2 = "rca2"; + public static final String RCA3 = "rca3"; + public static final String RCA4 = "rca4"; + public static final String ACTION1 = "act1"; + public static final String ACTION2 = "act2"; + public static final String ACTION3 = "act3"; + public static final String ACTION4 = "act4"; + public static final String DECIDER1 = "dec1"; + public static final String DECIDER2 = "dec2"; + public static final String DECIDER3 = "dec3"; + public static final String DECIDER4 = "dec4"; + public static final List DISABLED_RCAS_LIST = Arrays.asList(RCA1, RCA2); + public static final List ENABLED_RCAS_LIST = Arrays.asList(RCA3, RCA4); + public static final List DISABLED_ACTIONS_LIST = Arrays.asList(ACTION1, ACTION2); + public static final List ENABLED_DECIDERS_LIST = Arrays.asList(DECIDER3, DECIDER4); + + public static ConfigOverrides buildValidConfigOverrides() { + ConfigOverrides overrides = new ConfigOverrides(); + overrides.getDisable().setRcas(DISABLED_RCAS_LIST); + overrides.getDisable().setActions(DISABLED_ACTIONS_LIST); + overrides.getEnable().setRcas(ENABLED_RCAS_LIST); + overrides.getEnable().setDeciders(ENABLED_DECIDERS_LIST); + + return overrides; + } + + public static String getValidConfigOverridesJson() throws JsonProcessingException { + return MAPPER.writeValueAsString(buildValidConfigOverrides()); + } +} diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/PerformanceAnalyzerClusterSettingHandlerTest.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/PerformanceAnalyzerClusterSettingHandlerTest.java index 7a1ee31e..cef66e9d 100644 --- a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/PerformanceAnalyzerClusterSettingHandlerTest.java +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/PerformanceAnalyzerClusterSettingHandlerTest.java @@ -1,5 +1,5 @@ /* - * Copyright <2019> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). * You may not use this file except in compliance with the License. 
@@ -66,6 +66,17 @@ public void paDisabledClusterStateTest() { assertEquals(0, clusterSettingHandler.getCurrentClusterSettingValue()); } + @Test + public void updateClusterStateTest() { + setControllerValues(ENABLED_STATE, ENABLED_STATE, DISABLED_STATE); + clusterSettingHandler = + new PerformanceAnalyzerClusterSettingHandler( + mockPerformanceAnalyzerController, mockClusterSettingsManager); + assertEquals(3, clusterSettingHandler.getCurrentClusterSettingValue()); + clusterSettingHandler.onSettingUpdate(0); + assertEquals(0, clusterSettingHandler.getCurrentClusterSettingValue()); + } + private void setControllerValues(final Boolean paEnabled, final Boolean rcaEnabled, final Boolean loggingEnabled) { when(mockPerformanceAnalyzerController.isPerformanceAnalyzerEnabled()).thenReturn(paEnabled); when(mockPerformanceAnalyzerController.isRcaEnabled()).thenReturn(rcaEnabled); diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandlerTests.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandlerTests.java new file mode 100644 index 00000000..59957622 --- /dev/null +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/config/setting/handler/ConfigOverridesClusterSettingHandlerTests.java @@ -0,0 +1,171 @@ +/* + * Copyright <2020> Amazon.com, Inc. or its affiliates. All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"). + * You may not use this file except in compliance with the License. + * A copy of the License is located at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * or in the "license" file accompanying this file. This file is distributed + * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either + * express or implied. See the License for the specific language governing + * permissions and limitations under the License. 
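The expected values in these cluster-setting tests (3 when PA and RCA are enabled with logging off) and in the integration test earlier in this patch (3 or 7 for an RCA-enabled cluster) follow from the bit-packed state maintained through the handler's setBit/resetBit/checkBit helpers. A minimal sketch of that encoding, assuming PA occupies bit 0, RCA bit 1 and logging bit 2; the positions are inferred from the asserted values, not copied from the handler.

/** Illustrative encoding only; bit positions are assumptions inferred from the tests. */
public final class ClusterStateBitsSketch {
  private static final int PA_BIT = 0;      // assumed
  private static final int RCA_BIT = 1;     // assumed
  private static final int LOGGING_BIT = 2; // assumed

  static int encode(boolean paEnabled, boolean rcaEnabled, boolean loggingEnabled) {
    int state = 0;
    if (paEnabled) state |= 1 << PA_BIT;
    if (rcaEnabled) state |= 1 << RCA_BIT;
    if (loggingEnabled) state |= 1 << LOGGING_BIT;
    return state;
  }

  static boolean isSet(int state, int bitPosition) {
    return (state & (1 << bitPosition)) != 0;
  }

  public static void main(String[] args) {
    System.out.println(encode(true, true, false)); // 3, as asserted in updateClusterStateTest
    System.out.println(encode(true, true, true));  // 7, also accepted by ensurePaAndRcaEnabled
    System.out.println(isSet(7, LOGGING_BIT));     // true
  }
}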
+ */ + +package com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.handler; + +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverrides; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesWrapper; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.setting.ClusterSettingsManager; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import org.elasticsearch.common.settings.Setting; +import org.junit.Before; +import org.junit.Test; +import org.mockito.ArgumentCaptor; +import org.mockito.Captor; +import org.mockito.Mock; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; + +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.ACTION1; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.ACTION2; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.DECIDER1; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.DECIDER2; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.DECIDER3; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.DECIDER4; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.RCA1; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.RCA2; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.RCA3; +import static com.amazon.opendistro.elasticsearch.performanceanalyzer.config.ConfigOverridesTestHelper.RCA4; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.verify; +import static org.mockito.MockitoAnnotations.initMocks; + +public class ConfigOverridesClusterSettingHandlerTests { + + private static final ObjectMapper MAPPER = new ObjectMapper(); + private static final String TEST_KEY = "test key"; + private static final ConfigOverrides EMPTY_OVERRIDES = new ConfigOverrides(); + private ConfigOverridesClusterSettingHandler testClusterSettingHandler; + private ConfigOverridesWrapper testOverridesWrapper; + private Setting<String> testSetting; + private ConfigOverrides testOverrides; + + @Mock + private ClusterSettingsManager mockClusterSettingsManager; + + @Captor + private ArgumentCaptor<String> updatedClusterSettingCaptor; + + @Before + public void setUp() { + initMocks(this); + this.testSetting = Setting.simpleString(TEST_KEY); + this.testOverridesWrapper = new ConfigOverridesWrapper(); + this.testOverrides = ConfigOverridesTestHelper.buildValidConfigOverrides(); + testOverridesWrapper.setCurrentClusterConfigOverrides(EMPTY_OVERRIDES); + + this.testClusterSettingHandler = new ConfigOverridesClusterSettingHandler( + testOverridesWrapper, mockClusterSettingsManager, testSetting); + } + + @Test + public void onSettingUpdateSuccessTest() throws JsonProcessingException { + String updatedSettingValue = ConfigOverridesTestHelper.getValidConfigOverridesJson(); + 
testClusterSettingHandler.onSettingUpdate(updatedSettingValue); + + assertEquals(updatedSettingValue, MAPPER.writeValueAsString(testOverridesWrapper.getCurrentClusterConfigOverrides())); + } + + @Test + public void onSettingUpdateFailureTest() throws IOException { + String updatedSettingValue = "invalid json"; + ConfigOverridesWrapper failingOverridesWrapper = new ConfigOverridesWrapper(); + + testClusterSettingHandler = new ConfigOverridesClusterSettingHandler( + failingOverridesWrapper, mockClusterSettingsManager, testSetting); + + testClusterSettingHandler.onSettingUpdate(updatedSettingValue); + + assertEquals(MAPPER.writeValueAsString(EMPTY_OVERRIDES), + MAPPER.writeValueAsString(testOverridesWrapper.getCurrentClusterConfigOverrides())); + } + + @Test + public void onSettingUpdateEmptySettingsTest() throws IOException { + ConfigOverridesWrapper failingOverridesWrapper = new ConfigOverridesWrapper(); + + testClusterSettingHandler = new ConfigOverridesClusterSettingHandler( + failingOverridesWrapper, mockClusterSettingsManager, testSetting); + + testClusterSettingHandler.onSettingUpdate(null); + + assertEquals(MAPPER.writeValueAsString(EMPTY_OVERRIDES), + MAPPER.writeValueAsString(testOverridesWrapper.getCurrentClusterConfigOverrides())); + } + + @Test + public void updateConfigOverridesMergeSuccessTest() throws IOException { + testOverridesWrapper.setCurrentClusterConfigOverrides(testOverrides); + + ConfigOverrides expectedOverrides = new ConfigOverrides(); + ConfigOverrides additionalOverrides = new ConfigOverrides(); + // current enabled rcas: 3,4. current disabled rcas: 1,2 + additionalOverrides.getEnable().setRcas(Arrays.asList(RCA1, RCA1)); + + expectedOverrides.getEnable().setRcas(Arrays.asList(RCA1, RCA3, RCA4)); + expectedOverrides.getDisable().setRcas(Collections.singletonList(RCA2)); + + // current enabled deciders: 3,4. current disabled deciders: none + additionalOverrides.getDisable().setDeciders(Arrays.asList(DECIDER3, DECIDER1)); + additionalOverrides.getEnable().setDeciders(Collections.singletonList(DECIDER2)); + + expectedOverrides.getEnable().setDeciders(Arrays.asList(DECIDER2, DECIDER4)); + expectedOverrides.getDisable().setDeciders(Arrays.asList(DECIDER3, DECIDER1)); + + // current enabled actions: none. 
current disabled actions: 1,2 + additionalOverrides.getEnable().setActions(Arrays.asList(ACTION1, ACTION2)); + + expectedOverrides.getEnable().setActions(Arrays.asList(ACTION1, ACTION2)); + + testClusterSettingHandler.updateConfigOverrides(additionalOverrides); + verify(mockClusterSettingsManager).updateSetting(eq(testSetting), updatedClusterSettingCaptor.capture()); + + assertTrue(areEqual(expectedOverrides, MAPPER.readValue(updatedClusterSettingCaptor.getValue(), ConfigOverrides.class))); + } + + private boolean areEqual(final ConfigOverrides expected, final ConfigOverrides actual) { + Collections.sort(expected.getEnable().getRcas()); + Collections.sort(actual.getEnable().getRcas()); + assertEquals(expected.getEnable().getRcas(), actual.getEnable().getRcas()); + + Collections.sort(expected.getEnable().getActions()); + Collections.sort(actual.getEnable().getActions()); + assertEquals(expected.getEnable().getActions(), actual.getEnable().getActions()); + + Collections.sort(expected.getEnable().getDeciders()); + Collections.sort(actual.getEnable().getDeciders()); + assertEquals(expected.getEnable().getDeciders(), actual.getEnable().getDeciders()); + + Collections.sort(expected.getDisable().getRcas()); + Collections.sort(actual.getDisable().getRcas()); + assertEquals(expected.getDisable().getRcas(), actual.getDisable().getRcas()); + + Collections.sort(expected.getDisable().getActions()); + Collections.sort(actual.getDisable().getActions()); + assertEquals(expected.getDisable().getActions(), actual.getDisable().getActions()); + + Collections.sort(expected.getDisable().getDeciders()); + Collections.sort(actual.getDisable().getDeciders()); + assertEquals(expected.getDisable().getDeciders(), actual.getDisable().getDeciders()); + + return true; + } +} \ No newline at end of file diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/hwnet/CollectMetricsTests.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/hwnet/CollectMetricsTests.java index 07795b5e..09982043 100644 --- a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/hwnet/CollectMetricsTests.java +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/hwnet/CollectMetricsTests.java @@ -16,47 +16,6 @@ package com.amazon.opendistro.elasticsearch.performanceanalyzer.hwnet; -import static java.util.Collections.emptyMap; -import static java.util.Collections.singleton; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; -import static org.mockito.Mockito.when; - -import java.io.BufferedReader; -import java.io.File; -import java.io.FileOutputStream; -import java.io.FileReader; -import java.io.PrintStream; -import java.lang.reflect.Field; -import java.net.InetAddress; -import java.net.UnknownHostException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -import org.elasticsearch.Version; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodeRole; -import org.elasticsearch.cluster.node.DiscoveryNodes; -import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.transport.TransportAddress; -import org.elasticsearch.indices.breaker.AllCircuitBreakerStats; -import org.elasticsearch.indices.breaker.CircuitBreakerService; -import org.elasticsearch.indices.breaker.CircuitBreakerStats; -import org.junit.Before; -//import org.junit.Test; 
-import org.junit.runner.RunWith; -import org.mockito.Mock; -import org.mockito.Mockito; -import org.mockito.stubbing.Answer; -import org.powermock.api.mockito.PowerMockito; -import org.powermock.core.classloader.annotations.PowerMockIgnore; -import org.powermock.core.classloader.annotations.PrepareForTest; -import org.powermock.core.classloader.annotations.SuppressStaticInitializationFor; -import org.powermock.modules.junit4.PowerMockRunner; - import com.amazon.opendistro.elasticsearch.performanceanalyzer.AbstractTests; import com.amazon.opendistro.elasticsearch.performanceanalyzer.ESResources; import com.amazon.opendistro.elasticsearch.performanceanalyzer.OSMetricsGeneratorFactory; @@ -66,6 +25,7 @@ import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NetworkInterfaceCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeDetailsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.PluginSettings; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.config.overrides.ConfigOverridesWrapper; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CircuitBreakerDimension; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.CircuitBreakerValue; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.IPDimension; @@ -79,6 +39,47 @@ import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics_generator.TCPMetricsGenerator; import com.amazon.opendistro.elasticsearch.performanceanalyzer.os.OSGlobals; import com.amazon.opendistro.elasticsearch.performanceanalyzer.util.JsonConverter; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodeRole; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.transport.TransportAddress; +import org.elasticsearch.indices.breaker.AllCircuitBreakerStats; +import org.elasticsearch.indices.breaker.CircuitBreakerService; +import org.elasticsearch.indices.breaker.CircuitBreakerStats; +import org.junit.Before; +import org.junit.runner.RunWith; +import org.mockito.Mock; +import org.mockito.Mockito; +import org.mockito.stubbing.Answer; +import org.powermock.api.mockito.PowerMockito; +import org.powermock.core.classloader.annotations.PowerMockIgnore; +import org.powermock.core.classloader.annotations.PrepareForTest; +import org.powermock.core.classloader.annotations.SuppressStaticInitializationFor; +import org.powermock.modules.junit4.PowerMockRunner; + +import java.io.BufferedReader; +import java.io.File; +import java.io.FileOutputStream; +import java.io.FileReader; +import java.io.PrintStream; +import java.lang.reflect.Field; +import java.net.InetAddress; +import java.net.UnknownHostException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.singleton; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; +import static org.mockito.Mockito.when; + +//import org.junit.Test; @PowerMockIgnore({ "org.apache.logging.log4j.*" }) @RunWith(PowerMockRunner.class) @@ -488,6 +489,8 @@ public void testNodeDetails() throws Exception { String nodeId2 = "Zn1QcSUGT--DciD1Em5wRg"; InetAddress address2 = 
InetAddress.getByName("10.212.52.241"); + ConfigOverridesWrapper testOverridesWrapper = new ConfigOverridesWrapper(); + List<DiscoveryNode> nodeList = buildNumNodes(nodeId1, nodeId2, address1, address2); for (DiscoveryNode node : nodeList) { @@ -512,7 +515,7 @@ public void testNodeDetails() throws Exception { long timeBeforeCollectorWriting = System.currentTimeMillis(); - NodeDetailsCollector collector = new NodeDetailsCollector(); + NodeDetailsCollector collector = new NodeDetailsCollector(testOverridesWrapper); NodeDetailsCollector spyCollector = Mockito.spy(collector); String metricFilePath = rootLocation + File.separator + PerformanceAnalyzerMetrics.sNodesPath; diff --git a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/reader/AbstractReaderTests.java b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/reader/AbstractReaderTests.java index 9c692fe6..64131bf7 100644 --- a/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/reader/AbstractReaderTests.java +++ b/src/test/java/com/amazon/opendistro/elasticsearch/performanceanalyzer/reader/AbstractReaderTests.java @@ -21,7 +21,8 @@ import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.HeapMetricsCollector.HeapStatus; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.MasterServiceMetrics.MasterPendingStatus; import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeDetailsCollector.NodeDetailsStatus; -import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsMetricsCollector; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsAllShardsMetricsCollector; +import com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.NodeStatsFixedShardsMetricsCollector; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.GCType; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.AllMetrics.NodeRole; import com.amazon.opendistro.elasticsearch.performanceanalyzer.metrics.MetricDimension; @@ -124,37 +125,40 @@ protected String createShardStatMetrics(long indexingThrottleTime, long indexWriterMemory, long versionMapMemory, long bitsetMemory, long shardSizeInBytes, FailureCondition condition) { // dummyCollector is only used to create the json string - NodeStatsMetricsCollector dummyCollector = new NodeStatsMetricsCollector(null); - String str = (dummyCollector.new NodeStatsMetricsStatus( + NodeStatsFixedShardsMetricsCollector dummyCollectorFewShards = new NodeStatsFixedShardsMetricsCollector(null); + String str = (dummyCollectorFewShards.new NodeStatsMetricsFixedShardsPerCollectionStatus( indexingThrottleTime, - queryCacheHitCount, - queryCacheMissCount, - queryCacheInBytes, - fieldDataEvictions, - fieldDataInBytes, - requestCacheHitCount, - requestCacheMissCount, - requestCacheEvictions, - requestCacheInBytes, - refreshCount, - refreshTime, - flushCount, - flushTime, - mergeCount, - mergeTime, - mergeCurrent, - indexBufferBytes, - segmentCount, - segmentsMemory, - termsMemory, - storedFieldsMemory, - termVectorsMemory, - normsMemory, - pointsMemory, - docValuesMemory, - indexWriterMemory, - versionMapMemory, - bitsetMemory, shardSizeInBytes)).serialize(); + refreshCount, + refreshTime, + flushCount, + flushTime, + mergeCount, + mergeTime, + mergeCurrent, + indexBufferBytes, + segmentCount, + segmentsMemory, + termsMemory, + storedFieldsMemory, + termVectorsMemory, + normsMemory, + pointsMemory, + docValuesMemory, + indexWriterMemory, + 
versionMapMemory, + bitsetMemory, shardSizeInBytes)).serialize(); + + NodeStatsAllShardsMetricsCollector dummyCollectorAllShards = new NodeStatsAllShardsMetricsCollector(null); + str += (dummyCollectorAllShards.new NodeStatsMetricsAllShardsPerCollectionStatus( + queryCacheHitCount, + queryCacheMissCount, + queryCacheInBytes, + fieldDataEvictions, + fieldDataInBytes, + requestCacheHitCount, + requestCacheMissCount, + requestCacheEvictions, + requestCacheInBytes)).serialize(); if (condition == FailureCondition.INVALID_JSON_METRIC) { str = str.substring(1);