InfluxDB storage for dashboards generates a lot of shards in the past #663
Why does it generate a new shard? I use 1000000000000000 as a fixed time to save dashboards to (so I can overwrite them). When I write the same timestamp and sequence number, why does it create new shards all the time?
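(For context, a minimal sketch of what such a write looks like against the InfluxDB 0.8 HTTP API; the database name `grafana`, the series name, and the column layout here are assumptions, not Grafana's exact scheme.)

```sh
# Sketch only: writes one point at the fixed timestamp (microsecond precision).
# Re-sending the same time + sequence_number is what allows the point to be
# overwritten instead of a new one being appended.
curl -X POST 'http://localhost:8086/db/grafana/series?u=root&p=root&time_precision=u' \
  --data-binary '[{
    "name": "grafana.dashboard_home",
    "columns": ["time", "sequence_number", "title", "dashboard"],
    "points": [[1000000000000000, 1, "Home", "{...dashboard json...}"]]
  }]'
```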
ping @pauldix
Hi, my guess is that writing a data point in the past triggers some internal routine in InfluxDB. It's as if InfluxDB fills the gap between the older shard and the newer one. @pauldix could certainly clarify this point.
In Influx 0.7, once you create a shard in either the short-term or long-term space, it will then create shards for time blocks going forward from that point. This could get a little tricky with 0.8 because people can define their own shard spaces and retention policies. So they'd need to make sure that there's a shard space that keeps data around forever that the Grafana dashboards get written to. What you're seeing in the logs is odd. I'll have to look into it. Influx shouldn't be creating shards for the time periods between the dashboard time and the current time.
I saw the same behavior. |
@pauldix I'm using the default shard configuration from influxdb 0.8.0rc4. I know that Grafana dashboards will be removed after 7 days. I'm still testing influxdb/grafana, so it is not an issue for the moment.
any news/ideas? |
@michail-nikolaev no, I am still unsure how reliable this will be. Worried that people will lose dashboards because of shard retention policies.
With 0.8 and the ability to define storage periods it becomes a non-issue. The setup tutorial just needs to require that the shard space matching the dashboard storage name be infinite.
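(A rough sketch of what that could look like with the 0.8 shard-space API; the endpoint and field names are written from memory of the 0.8 docs and the space/database names are made up, so verify against the documentation before relying on it.)

```sh
# Sketch, assuming the 0.8 /cluster/shard_spaces endpoint: define a space for
# the grafana database with infinite retention and a long shard duration so
# dashboard points are never dropped and few shards get created.
curl -X POST 'http://localhost:8086/cluster/shard_spaces/grafana?u=root&p=root' \
  --data-binary '{
    "name": "dashboards",
    "regex": "/.*/",
    "retentionPolicy": "inf",
    "shardDuration": "3650d",
    "replicationFactor": 1,
    "split": 1
  }'
```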
@jordanrinke I'm using influxdb 0.8.0rc4 with the default config. The default shard config seems to be infinite retention. So yes, with the default shard config in 0.8 Grafana dashboards will not be erased by InfluxDB shard management. But this is not the issue I have reported. The issue is that influxdb generates a lot of (empty?) shards to fill the gap between the 1000000000000000 timestamp and the newest shard. With the default config that is one shard per 7 days... I agree that the issue/bug is not in grafana. But I'm sure that grafana/influxdb users don't want to have all those shards created in their influxdb.
any update? I want to switch to influxdb for dashboard storage |
@huhongbo maybe ask in #influxdb on freenode or open an issue in the influxdb repo (if there is none already)
@torkelo OK, or maybe change the 1000000000000000 to the current time, because the 0.8 influxdb default sets infinite retention on the shard.
I can't change it to the current time; it needs to be fixed, or else each save will create a new dashboard. I could change the fixed value to a more current date, but that would only push the problem into the future.
I am also seeing this. InfluxDB has created 118 shards of ~9 MB each just for the Grafana database, which currently has 1 data point in it. Why not just use the current time and query the latest copy of the dashboard using sorting and LIMIT 1?
@torkelo Maybe creating new dashboards (and thus keeping around old ones) is a better solution than writing to (and updating) an arbitrary timestamp. Then you could just pull back the most recent dashboard with a `limit 1` query. I can also imagine that some simple method of culling old dashboards could be concocted as well.
How would search work then, so that you only get the latest versions?
I think this problem can be avoided by setting the shard duration to 365d or larger.
Good idea @huhongbo. I don't know the exact scheme grafana uses to store dashboards, but if you make a series per dashboard, a query as simple as this will give you the most recent point for every series:
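The query itself did not survive in this copy of the thread; something like the following is probably what was meant (the series-name pattern is an assumption), relying on 0.8 returning points newest-first and applying `limit 1` per matched series:

```sql
-- Sketch: latest point from every series whose name matches the regex
select * from /grafana\.dashboard_.*/ limit 1
```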
This was fixed on master, see influxdata/influxdb#954 for more discussion and workarounds. |
Just to be more clear, InfluxDB won't create more shards than necessary when grafana stores the dashboard using an old timestamp. |
Hi,
I'm running grafana 1.7.0rc1 with influxdb 0.8.0rc4.
After storing my dashboard into my influxdb, I started to see influxdb generating a new shard in the past every 10 minutes. My guess is that it is related to the time provided to store the dashboard.
In my case that is 1000000000000000u (Sunday 09 Sep 2001, 03:46:40). All my system clocks are in sync with NTP servers.
Here is the log of the influxdb dashboard creation:
Here are some logs of shards being created in the past: