
[O11y][Apache Spark] Remove unnecessary filter from the dashboard and update aggregation for visualizations #7409

Closed
harnish-elastic opened this issue Aug 16, 2023 · 0 comments · Fixed by #7467
Assignees
Labels
bug (Something isn't working, use only for issues) · Integration:apache_spark (Apache Spark)

Comments

@harnish-elastic
Contributor

In the Apache Spark overview dashboard:

In the Number of Threadpool tasks over time [Metrics Apache Spark] visualization, the `apache_spark.executor.threadpool.complete_tasks: exists` filter prevents the other two series from being populated. The unnecessary filter therefore needs to be removed from the visualization. See the screenshot below.
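For context, a filter pill like this compiles down to an Elasticsearch `exists` query. A minimal sketch of the query shape (the surrounding `bool`/`filter` wrapper is illustrative, not copied from the dashboard's saved object):

```json
{
  "query": {
    "bool": {
      "filter": [
        { "exists": { "field": "apache_spark.executor.threadpool.complete_tasks" } }
      ]
    }
  }
}
```

Because `exists` matches only documents that contain that one field, documents carrying only the other threadpool task metrics are filtered out, which is why their series render empty.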

(screenshot: Number of Threadpool tasks over time visualization)

In the Memory Used [Metrics Apache Spark] visualization, the field value fluctuates with memory usage, but the aggregation is maximum, so the chart overstates current usage. The aggregation therefore needs to be changed from maximum to last_value.
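In Elasticsearch, a last-value metric is typically backed by a `top_metrics` aggregation sorted by `@timestamp` descending. A hedged sketch (the field name `apache_spark.driver.memory.used` is a placeholder, not necessarily the exact field used by this visualization):

```json
{
  "aggs": {
    "memory_used_last": {
      "top_metrics": {
        "metrics": { "field": "apache_spark.driver.memory.used" },
        "sort": { "@timestamp": "desc" }
      }
    }
  }
}
```

Unlike `max`, which returns the peak value seen in each bucket, this returns the value from the most recent document, tracking the fluctuating gauge instead of its high-water mark.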

@harnish-elastic harnish-elastic added bug Something isn't working, use only for issues Integration:apache_spark Apache Spark labels Aug 16, 2023
@harnish-elastic harnish-elastic self-assigned this Aug 16, 2023