Fix compressed chunk not found during upserts #6575
Conversation
Force-pushed bea1e09 to edb0d11
Codecov Report

Additional details and impacted files:

@@ Coverage Diff @@
##             main    #6575      +/-   ##
==========================================
+ Coverage   79.80%   79.84%    +0.04%
==========================================
  Files         190      190
  Lines       36949    36904       -45
  Branches     9357     9338       -19
==========================================
- Hits        29487    29466       -21
+ Misses       3110     3100       -10
+ Partials     4352     4338       -14

☔ View full report in Codecov by Sentry.
Add an isolation test that reproduces a "chunk not found" error that happens during upserts on compressed chunks.
Force-pushed edb0d11 to 0f2a3b9
Force-pushed 0f2a3b9 to 252d493
Note that the first commit reproduces the bug, which is then fixed in the second commit.
ts_chunk_validate_chunk_status_for_operation(chunk, CHUNK_INSERT, true);
DEBUG_WAITPOINT("chunk_insert_before_lock");
rel = table_open(chunk_relid, RowExclusiveLock);
Is my understanding correct that the compression code locks the source chunk with an ExclusiveLock (which conflicts with our RowExclusiveLock), so the chunk metadata cannot be changed after we have obtained this lock?
Maybe you could add a comment to the code to make this behavior clearer.
This isn't about the lock level. The PR doesn't change the locks taken, only the place where they are taken.
The issue is that you cannot trust that metadata is up to date unless you read it after taking the lock, since the chunk could be altered between reading the metadata and taking the lock.
There are many operations that can alter metadata, so compression/decompression is only one such case that happened to trigger the bug.
I can add a comment about the specific case of concurrency between compression and insert, but this is actually more general than that.
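To make the ordering concrete, here is a minimal sketch of the racy pattern versus the fixed one. It assumes a lookup helper in the spirit of ts_chunk_get_by_relid() and a compressed_chunk_id field on the chunk metadata; the names are illustrative and need not match the code in this PR.

#include "postgres.h"
#include "access/table.h"
#include "chunk.h" /* TimescaleDB chunk API; illustrative */

/* Racy ordering: the metadata is read before the lock, so a concurrent
 * recompression can drop and recreate the compressed chunk in between,
 * leaving compressed_chunk_id pointing at a relation that no longer
 * exists. */
static void
chunk_insert_racy(Oid chunk_relid)
{
	Chunk *chunk = ts_chunk_get_by_relid(chunk_relid, /* fail_if_not_found */ true);
	Relation rel = table_open(chunk_relid, RowExclusiveLock);
	/* chunk->fd.compressed_chunk_id may be stale at this point. */
}

/* Fixed ordering: take the lock first, then read the metadata under its
 * protection. Conflicting operations (recompression takes at least an
 * ExclusiveLock) have either committed or will now wait for us. */
static void
chunk_insert_fixed(Oid chunk_relid)
{
	Relation rel = table_open(chunk_relid, RowExclusiveLock);
	Chunk *chunk = ts_chunk_get_by_relid(chunk_relid, true);
	/* chunk->fd.compressed_chunk_id is now consistent with the locked state. */
}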
Thanks for the explanation, and sorry for the confusion. It seems my question was not precisely formulated. I was wondering if another process could change the metadata even after taking the lock in line 585 and reading the metadata.
RowExclusiveLock is one of the weaker locks, and it is not self-conflicting. However, the compression code takes a stronger ExclusiveLock, which conflicts with our RowExclusiveLock. So a compression job running in parallel should block, and the metadata will not be modified by that job concurrently. Therefore, the change seems to be correct for this case.
However, if another process takes a weaker, non-conflicting lock, like RowExclusiveLock, and modifies the metadata, I am wondering if we could still face the same problem, or if there is something else that prevents these situations. If we just rely on the fact that all other code paths that modify the metadata have to take locks that conflict with our RowExclusiveLock, we should document this assumption.
@jnidzwetzki thank you for the clarification.
I added an extra note where the RowExclusiveLock is taken. Is this enough documentation for now?
And to answer this question:
I was wondering if another process could change the metadata even after taking the lock in line 585 and reading the metadata.
Yes, another process could change the metadata if it doesn't take the right lock, but that would be a bug. Any process changing metadata should take locks according to the section "Explicit Locking" in the PG docs, I guess.
Just to be clear, though: I haven't changed any locks or lock levels in this PR, and conflicts were already happening.
The change I've made is about how metadata is used around locks: the PR only ensures we avoid using metadata that was read prior to locking, and instead consistently use the up-to-date metadata read after locking. There was already a comment in the code about the locks taken and how they aimed to conflict with compression, which I tweaked a bit.
Also, it might be worth noting that, while recompression starts by taking an ExclusiveLock, I believe it will at some point take an AccessExclusiveLock to drop the relation and create a new one. This conflicts with every other lock, including AccessShareLock.
Thus, a process that doesn't change metadata should be able to get away with an AccessShareLock. The INSERT process doesn't change metadata, but takes a RowExclusiveLock because it modifies data (not metadata) in the relation.
If we have code that modifies a relation's metadata with a RowExclusiveLock or lower, that would be a bug, I guess. I believe this is part of the principles of PG itself; see the section on "Explicit Locking", where "higher" locks are always necessary for "maintenance"-type operations on relations (i.e., changing metadata, except maybe stats).
In the end, it is up to our code to "do the right thing" according to established principles around the use of locks. Unfortunately, we currently don't do this in a clean way, as the code has evolved organically through many hands. It is probably time for a major refactor where we also establish stricter principles and better interfaces, but that is out of scope for this fix.
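If we wanted that assumption to be checkable rather than just documented, one option, a hypothetical guard and not something this PR adds, is to assert the lock level at every metadata-modifying entry point. CheckRelationLockedByMe() is an existing PostgreSQL helper; chunk_metadata_update() and the chosen lock threshold are assumptions for illustration.

#include "postgres.h"
#include "storage/lmgr.h"
#include "utils/rel.h"

/* Hypothetical guard: a writer of chunk metadata must hold at least an
 * ExclusiveLock (the level the compression path takes), which conflicts
 * with the RowExclusiveLock held by the insert path. */
static void
chunk_metadata_update(Relation chunk_rel)
{
	/* CheckRelationLockedByMe(rel, mode, orstronger) returns true if this
	 * backend holds the given lock, or a stronger one, on the relation. */
	Assert(CheckRelationLockedByMe(chunk_rel, ExclusiveLock, true));

	/* ... perform the actual metadata update here ... */
}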
@erimatnor Thanks for the explanation.
Force-pushed 252d493 to 0ecab83
Force-pushed 0ecab83 to 93db52d
Automated backport to 2.13.x not done: cherry-pick failed.
This release contains performance improvements and bug fixes since the 2.13.1 release. We recommend that you upgrade at the next available opportunity. In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time. New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13, Multi-node is no longer supported starting with TimescaleDB 2.14. TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md). If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the [migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its functionality will be replaced by the compress_chunk function, which, starting on TimescaleDB 2.14, works on both uncompressed and partially compressed chunks. The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables
* @HollowMan6 for reporting triggers not working on materialized views of CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
When inserting into a compressed chunk with an ON CONFLICT clause, and there is a concurrent recompression of the chunk, an error can occur due to a failed lookup of the compressed chunk.
The error happens because the chunk information is re-read from an outdated cache holding a compressed chunk ID that no longer exists after the chunk was recompressed (recompression effectively removes the compressed chunk and creates it again).
Fix this issue by ensuring that the insert path only uses up-to-date chunk metadata read after the lock on the chunk was taken.
Disable-Check: commit-count