unexpected log messages during alerting stress testing #54508
Comments
Pinging @elastic/kibana-alerting-services (Team:Alerting Services)
The API key and missing auth creds messages are likely the same as reported in #54125, so no need to do more work on them in this issue. I've not seen "Task has been claimed by another Kibana service" since doing other testing, but it's worth looking into, so let's focus on that. I will note that I often stop and start Kibana during stress testing; it just picks up from where it left off with no unusual problems, but perhaps the message was caused by the restart. Why does it think another Kibana service claimed the task?
It's worth noting that the "Task has been claimed by another Kibana service" message appears whenever there's a version conflict; we're just assuming that's what happened, but it might be a version conflict due to something else 🤔
Ah, I didn't realize it was an optimistic-locking version thing, but tracing it back, it looks like it comes from here: `kibana/x-pack/plugins/task_manager/server/task_runner.ts`, lines 178 to 182 at 7ca858e.
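To make the 409 mechanics concrete, here's a minimal sketch of how a conditional update using Elasticsearch optimistic concurrency control (`if_seq_no` / `if_primary_term`) surfaces as a version conflict that then gets logged this way. This is not the Kibana code linked above; the index name, document shape, log wording, and 7.x-style `@elastic/elasticsearch` client usage are assumptions for illustration.

```ts
// Hypothetical sketch (not the Kibana source): a conditional write that reports
// an Elasticsearch 409 version conflict as "claimed by another Kibana service".
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function tryClaimTask(taskId: string): Promise<boolean> {
  // Read the task doc along with the seq_no / primary_term it currently has
  // (7.x-style client, where responses are wrapped in { body }).
  const { body: doc } = await client.get({
    index: '.kibana_task_manager',
    id: taskId,
  });

  try {
    // Conditional write: only succeeds if nothing has modified the doc since we read it.
    await client.update({
      index: '.kibana_task_manager',
      id: taskId,
      if_seq_no: doc._seq_no,
      if_primary_term: doc._primary_term,
      body: { doc: { task: { status: 'running' } } },
    });
    return true;
  } catch (err: any) {
    if (err?.meta?.statusCode === 409) {
      // Version conflict: some other writer won the race on this document.
      console.warn(`Task ${taskId} has been claimed by another Kibana service`);
      return false;
    }
    throw err;
  }
}
```

Any concurrent writer that touches the same doc between the read and the conditional write produces the same 409, which is why the message doesn't necessarily mean a second Kibana was involved.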
Since this could have been from a Kibana restart, it makes sense that the task could have been left in a funky state when Kibana shut down, and then hit the 409 when it started back up. That's not great - you'd like to think TM could deal with a restart cleanly - but given its complexity, it feels understandable. I'm going to close this, but will keep an eye out for more of these now that I know what it is.
Kibana version: master, a few days before 7.6 feature freeze
Elasticsearch version: snapshot from `yarn es snapshot`, from Kibana version ^^^
Describe the bug:
During stress testing of alerting, when 100 alert deletions are happening, a few odd messages appeared in the Kibana and ES console outputs.
Steps to reproduce:
`whole-lotta-alerts.sh`
Expected behavior:
Nothing unusual in the ES or Kibana logs.
Provide logs and/or server output (if relevant):
The following message was repeated ~50 times in the ES console:
The following message occurred one time every time I deleted all 100 alerts:
The following message was repeated ~25 times in the Kibana console. Note that much of the message was a JSON-encoded string, which I've decoded here:
missing authentication credentials for REST request
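Decoding the JSON-encoded portion is just a double `JSON.parse`; here's a tiny sketch with a made-up log line, since the raw output isn't reproduced here:

```ts
// Made-up log line shaped like the output described above (not the real message).
const rawLine =
  'response error: "{\\"error\\":{\\"reason\\":\\"missing authentication credentials for REST request\\"}}"';

// Grab the quoted, escaped JSON payload (everything between the outer quotes, inclusive).
const quoted = rawLine.slice(rawLine.indexOf('"'), rawLine.lastIndexOf('"') + 1);

// First parse unescapes the string; second parse turns the JSON text into an object.
const jsonText: string = JSON.parse(quoted);
const payload = JSON.parse(jsonText);

console.log(payload.error.reason); // missing authentication credentials for REST request
```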