Support approximate hypertable size #6463
Conversation
Force-pushed d8639e1 to 72721ac
Codecov Report

@@            Coverage Diff             @@
##             main    #6463      +/-   ##
==========================================
+ Coverage   79.80%   79.83%   +0.02%
==========================================
  Files         190      190
  Lines       37148    37218      +70
  Branches     9418     9427       +9
==========================================
+ Hits        29646    29713      +67
- Misses       3118     3122       +4
+ Partials     4384     4383       -1
Force-pushed 6e42a67 to 387208b
Force-pushed 64f6866 to 81896d0
One missing thing here is tests for OSM chunks (you can check test/sql/chunk_utils_internal.sql).
Added in that file.
Force-pushed 1daebc4 to eaec3fa
Looks good, but it would be good to test and ensure that it behaves sanely with a view or a foreign table (or other weird objects such as composite types or sequences), and also that it returns NULL if the table does not exist (similar to how pg_relation_size works).
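A sketch of the checks being requested (object names are hypothetical, and the NULL-on-missing behavior is the reviewer's ask rather than confirmed behavior):

-- to_regclass returns NULL for a missing relation, so this should yield NULL,
-- mirroring how pg_relation_size handles relations that no longer exist
SELECT hypertable_approximate_size(to_regclass('public.does_not_exist'));
-- views, foreign tables, sequences, and composite types have no hypertable
-- size; these should return NULL (or error cleanly) rather than misbehave
CREATE VIEW metrics_view AS SELECT 1 AS x;
SELECT hypertable_approximate_size('metrics_view');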
-- check that approx size function works. We call VACUUM to ensure all forks exist
VACUUM public.table_to_compress;
SELECT * FROM hypertable_approximate_size('public.table_to_compress');
SELECT * FROM hypertable_size('public.table_to_compress');
Since VACUUM is needed to ensure that all forks exist, it might be a good idea to ensure that it also works when that is not the case.
VACUUM is only needed on PG13; I added it only for the compression case there. In all other cases it works without VACUUM, since newer versions pre-create all the forks.
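For context, a relation's on-disk size is spread across several forks, which can be inspected individually with the standard pg_relation_size function (table name reused from the test above):

-- Inspect each fork separately; on older versions the fsm/vm forks may not
-- exist yet, in which case pg_relation_size reports 0 until VACUUM creates them
SELECT pg_relation_size('public.table_to_compress', 'main') AS main_fork,
       pg_relation_size('public.table_to_compress', 'fsm')  AS free_space_map,
       pg_relation_size('public.table_to_compress', 'vm')   AS visibility_map;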
Force-pushed abb56fd to ee83555
LGTM
Force-pushed adf44b4 to 1c6f8c7
Force-pushed 99cabdc to fa2d99a
If a lot of chunks are involved, the current PL/pgSQL function that computes the size of each chunk via a nested loop is quite slow. Additionally, it makes a system call to get the on-disk file size of each chunk every time it is called, which slows things down further.

We now have an approximate-size function, implemented in C, that avoids these issues. It uses per-backend caching via the smgr layer to compute the approximate size cheaply. PostgreSQL's cache invalidation clears the cached size for a chunk when DML happens on it, so the size cache reflects the latest size within minutes. Also, thanks to the backend caching, a long-running session only fetches fresh data for new or modified chunks and can effectively reuse the cached sizes (computed afresh the first time around) for older chunks.
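A minimal usage sketch (hypertable name illustrative): the approximate function is a drop-in alternative to hypertable_size when a fast, slightly stale answer is acceptable:

-- Fast, cached estimate (C implementation, per-backend smgr-layer cache)
SELECT hypertable_approximate_size('public.metrics');
-- Exact but slower: walks every chunk and stats each file on disk
SELECT hypertable_size('public.metrics');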
Automated backport to 2.13.x not done: cherry-pick failed.
This release contains performance improvements and bug fixes since the 2.13.1 release. We recommend that you upgrade at the next available opportunity. In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time. New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13, Multi-node is no longer supported starting with TimescaleDB 2.14. TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md). If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its functionality will be replaced by the compress_chunk function, which, starting on TimescaleDB 2.14, works on both uncompressed and partially compressed chunks. The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter); see the sketch after these notes.

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables
* @HollowMan6 for reporting triggers not working on materialized views of CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
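A brief sketch of the compress_chunk workflow described in the deprecation notice (hypertable name illustrative; the recompress named parameter is the one introduced in #6609):

-- Compress all chunks; already-compressed chunks are skipped by default
-- (idempotent), while recompress => true also rewrites fully compressed
-- chunks using the current compression settings.
SELECT compress_chunk(c, recompress => true)
  FROM show_chunks('public.metrics') AS c;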