Fix compressed chunk not found during upserts #6575

Merged

Conversation

@erimatnor erimatnor (Contributor) commented Jan 30, 2024

When inserting into a compressed chunk with an ON CONFLICT clause, and there is a concurrent recompression of the chunk, an error can occur due to a failed lookup of the compressed chunk.

The error happens because the chunk information is re-read from an outdated cache holding a compressed chunk ID that no longer exists after the chunk was recompressed (recompression effectively removes the compressed chunk and creates it again).

Fix this issue by ensuring that the insert path only uses up-to-date chunk metadata read after the lock on the chunk was taken.

Disable-Check: commit-count
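
In other words, the fix enforces a lock-before-read ordering. Here is a minimal sketch of that ordering; `table_open` and the lock mode are PostgreSQL's, but `chunk_metadata_lookup` is a hypothetical stand-in for the real TimescaleDB lookup, not the actual API:

```c
#include "postgres.h"
#include "access/table.h"
#include "storage/lockdefs.h"

/* Hypothetical stand-in for the real chunk metadata lookup. */
typedef struct ChunkMetadata ChunkMetadata;
extern ChunkMetadata *chunk_metadata_lookup(Oid chunk_relid);

static Relation
chunk_open_for_insert(Oid chunk_relid, ChunkMetadata **meta)
{
	/*
	 * Lock first. A concurrent recompression holds a conflicting lock
	 * while it drops and re-creates the compressed chunk, so once this
	 * lock is granted the chunk cannot be recompressed under us.
	 */
	Relation rel = table_open(chunk_relid, RowExclusiveLock);

	/*
	 * Read metadata only after the lock is held. Anything read (or
	 * cached) before this point may hold a compressed chunk ID that
	 * no longer exists.
	 */
	*meta = chunk_metadata_lookup(chunk_relid);

	return rel;
}
```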

@erimatnor erimatnor added the bug label Jan 30, 2024
@erimatnor erimatnor force-pushed the fix-chunk-not-found-on-insert-sdc-1529 branch 2 times, most recently from bea1e09 to edb0d11 Compare January 30, 2024 10:27
codecov bot commented Jan 30, 2024

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (47e28a8) 79.80% compared to head (93db52d) 79.84%.
Report is 2 commits behind head on main.

Files                                           Patch %   Lines
src/nodes/chunk_dispatch/chunk_insert_state.c   84.61%    0 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6575      +/-   ##
==========================================
+ Coverage   79.80%   79.84%   +0.04%     
==========================================
  Files         190      190              
  Lines       36949    36904      -45     
  Branches     9357     9338      -19     
==========================================
- Hits        29487    29466      -21     
+ Misses       3110     3100      -10     
+ Partials     4352     4338      -14     


Add an isolation test that reproduces a "chunk not found" error that
happens during upserts on compressed chunks.
@erimatnor erimatnor force-pushed the fix-chunk-not-found-on-insert-sdc-1529 branch from edb0d11 to 0f2a3b9 Compare January 30, 2024 11:57
@erimatnor erimatnor changed the title Add test to reproduce SDC 1529 Fix compressed chunk not found during upserts Jan 30, 2024
@erimatnor erimatnor force-pushed the fix-chunk-not-found-on-insert-sdc-1529 branch from 0f2a3b9 to 252d493 Compare January 30, 2024 13:36
@erimatnor erimatnor marked this pull request as ready for review January 30, 2024 14:16
@erimatnor (Contributor, Author) commented:

Note that the first commit reproduces the bug, which is then fixed in the second commit.

src/nodes/chunk_dispatch/chunk_dispatch.c (outdated review thread, resolved)

/* Check that the chunk's status allows inserts */
ts_chunk_validate_chunk_status_for_operation(chunk, CHUNK_INSERT, true);
/* Named debug waitpoint that isolation tests can block on */
DEBUG_WAITPOINT("chunk_insert_before_lock");
rel = table_open(chunk_relid, RowExclusiveLock);
jnidzwetzki (Contributor) commented:

Is my understanding correct that the compression code locks the source chunk with an ExclusiveLock (which conflicts with our RowExclusiveLock), so the chunk metadata cannot be changed after we have obtained this lock?

Maybe you could add a comment to the code to make this behavior more clear.

@erimatnor (Contributor, Author) commented Jan 30, 2024:

This isn't about lock level. The PR doesn't change the locks taken, only the place where they are taken.

The issue is that you cannot trust that metadata is up-to-date unless you read it after taking the lock, since the chunk could be altered between reading the metadata and taking the lock.

There are many operations that can alter metadata, so compression/decompression is only one such case that happened to trigger the bug.

I can add a comment about the specific case of concurrency between compression and insert, but this is actually more general than that.
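
To make the ordering problem concrete, the failing interleaving described in this thread looks roughly like this (a schematic, not a trace of the actual code):

```c
/*
 * Session A: INSERT ... ON CONFLICT      Session B: recompress chunk
 * ----------------------------------     ---------------------------
 * read chunk metadata from cache:
 *   compressed chunk ID = X
 *                                        lock chunk
 *                                        drop compressed chunk X
 *                                        create compressed chunk Y
 *                                        commit, release locks
 * lock chunk (RowExclusiveLock)
 * look up compressed chunk X
 *   -> ERROR: compressed chunk not found
 *
 * With the fix, Session A takes its lock before reading the metadata:
 * it either blocks until B is done and then sees Y, or it reads X while
 * holding a lock that prevents B from dropping X.
 */
```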

jnidzwetzki (Contributor) commented:

Thanks for the explanation and sorry for the confusion. It seems my question was not precisely formulated. I was wondering if another process could change the metadata even after taking the lock in line 585 and reading the metadata.

RowExclusiveLock is one of the weaker locks and is not self-conflicting. However, the compression code takes a stronger ExclusiveLock, which conflicts with our RowExclusiveLock. So a concurrently running compression job should block, and the metadata will not be modified by that job in parallel. Therefore, the change seems to be correct for this case.

However, if another process takes a weaker, non-conflicting lock, like RowExclusiveLock, and modifies the metadata, I am wondering if we could still face the same problem or if there is something else that prevents these situations? If we just rely on the fact that all other code paths that modify the metadata have to conflict with our RowExclusiveLock, we should document this assumption.

@erimatnor (Contributor, Author) commented Jan 31, 2024:

@jnidzwetzki thank you for the clarification.

I added an extra note where the RowExclusiveLock is taken. Is this enough documentation for now?

And to answer this question:

I was wondering if another process could change the metadata even after taking the lock in line 585 and reading the metadata.

Yes, another process could change metadata if it doesn't take the right lock. But that would be a bug. Any process changing metadata should take locks according to the section "Explicit locking" in PG docs, I guess.

Just to be clear though: I haven't changed any locks or lock-levels in this PR and conflicts were already happening.

The change I've made is about how metadata is used around locks: the PR only ensures we avoid using metadata that was read prior to locking and instead consistently use up-to-date metadata read after locking. There was already a comment in the code about the locks taken and how they aimed to conflict with compression, which I tweaked a bit.

Also, it might be worth noting that, while re-compression starts with taking an ExclusiveLock, I believe it will at some point take an AccessExclusiveLock to drop the relation and create a new one. This conflicts with every other lock, including AccessShareLock.

Thus, a process that doesn't change metadata should be able to get away with an AccessShareLock. The INSERT process doesn't change metadata, but takes a RowExclusiveLock because it modifies data (not metadata) in the relation.

If we have code that modifies a relation's metadata with a RowExclusiveLock or lower, it would be a bug, I guess. I believe this is part of the principles of PG itself: if you read the section on "Explicit Locking", "higher" locks are always necessary for "maintenance"-type operations on relations (i.e., changing metadata, except maybe for stats).

In the end, it is up to our code to "do the right thing" according to established principles around the use of locks. Unfortunately, we currently don't do this in a clean way, as the code has evolved organically through many hands. It is probably time for a major refactor where we also establish stricter principles and better interfaces, but that is out of scope for this fix.
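
For reference, the PostgreSQL lock modes discussed in this thread, weakest to strongest (standard conflict rules from the "Explicit Locking" chapter; the mapping to TimescaleDB operations reflects this conversation, not an audit of the code):

```c
/*
 * AccessShareLock      - plain reads; conflicts only with AccessExclusiveLock.
 * RowExclusiveLock     - INSERT/UPDATE/DELETE; modifies data, not metadata.
 *                        Not self-conflicting.
 * ExclusiveLock        - taken when compression starts; conflicts with
 *                        RowExclusiveLock, so inserts and compression
 *                        serialize against each other.
 * AccessExclusiveLock  - taken to drop and re-create the compressed chunk
 *                        during recompression; conflicts with every other
 *                        mode, including AccessShareLock.
 */
```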

jnidzwetzki (Contributor) commented:

@erimatnor Thanks for the explanation.

@erimatnor erimatnor force-pushed the fix-chunk-not-found-on-insert-sdc-1529 branch from 252d493 to 0ecab83 Compare January 31, 2024 09:16
When inserting into a compressed chunk with an ON CONFLICT clause, and
there is a concurrent recompression of the chunk, an error can occur
due to a failed lookup of the compressed chunk.

The error happens because the chunk information is re-read from an
outdated cache holding a compressed chunk ID that no longer exists
after the chunk was recompressed (recompression effectively removes
the compressed chunk and creates it again).

Fix this issue by ensuring that the insert path only uses up-to-date
chunk metadata read _after_ the lock on the chunk was taken.
@erimatnor erimatnor force-pushed the fix-chunk-not-found-on-insert-sdc-1529 branch from 0ecab83 to 93db52d Compare January 31, 2024 09:31
@erimatnor erimatnor merged commit 101acb1 into timescale:main Jan 31, 2024
43 of 44 checks passed
@erimatnor erimatnor deleted the fix-chunk-not-found-on-insert-sdc-1529 branch January 31, 2024 16:07
@timescale-automation commented:
Automated backport to 2.13.x not done: cherry-pick failed.

Git status

HEAD detached at origin/2.13.x
You are currently cherry-picking commit 09be6df8a.
  (fix conflicts and run "git cherry-pick --continue")
  (use "git cherry-pick --skip" to skip this patch)
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Unmerged paths:
  (use "git add/rm <file>..." as appropriate to mark resolution)
	both modified:   src/nodes/chunk_dispatch/chunk_insert_state.c
	deleted by us:   tsl/test/isolation/expected/compression_recompress.out
	deleted by us:   tsl/test/isolation/specs/compression_recompress.spec

no changes added to commit (use "git add" and/or "git commit -a")

Job log

@timescale-automation timescale-automation added the auto-backport-not-done Automated backport of this PR has failed non-retriably (e.g. conflicts) label Jan 31, 2024
@fabriziomello fabriziomello added this to the TimescaleDB 2.14 milestone Jan 31, 2024
antekresic added a commit that referenced this pull request Feb 7, 2024
This release contains performance improvements, an improved hypertable DDL API
and bug fixes since the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations (100k limit, configurable)
* Helper functions for determining compression settings

**Removal notice: Multi-node support**
TimescaleDB 2.13 is the last version that includes multi-node support. Multi-node
support is effectively removed in version 2.14. Learn more about it
[here](docs/MultiNodeDeprecation.md).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which will work on uncompressed
and partially compressed chunks. It should be used to fully compress all chunks.

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables.
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
@antekresic antekresic mentioned this pull request Feb 7, 2024
antekresic added a commit that referenced this pull request Feb 8, 2024
This release contains performance improvements and bug fixes since
the 2.13.1 release. We recommend that you upgrade at the next
available opportunity.

In addition, it includes these noteworthy features:

* Ability to change compression settings on existing compressed hypertables at any time.
New compression settings take effect on any new chunks that are compressed after the change.
* Reduced locking requirements during chunk recompression
* Limiting tuple decompression during DML operations to avoid decompressing a lot of tuples and causing storage issues (100k limit, configurable)
* Helper functions for determining compression settings

**For this release only**, you will need to restart the database before running `ALTER EXTENSION`

**Multi-node support removal announcement**
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).

**Deprecation notice: recompress_chunk procedure**
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
functionality will be replaced by the compress_chunk function, which, starting with TimescaleDB 2.14,
works on both uncompressed and partially compressed chunks.
The compress_chunk function should be used going forward to fully compress all types of chunks or even recompress
old fully compressed chunks using new compression settings (through the newly introduced recompress optional parameter).

**Features**
* #6325 Add plan-time chunk exclusion for real-time CAggs
* #6360 Remove support for creating Continuous Aggregates with old format
* #6386 Add functions for determining compression defaults
* #6410 Remove multinode public API
* #6440 Allow SQLValueFunction pushdown into compressed scan
* #6463 Support approximate hypertable size
* #6513 Make compression settings per chunk
* #6529 Remove reindex_relation from recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6545 Remove restrictions for changing compression settings
* #6566 Limit tuple decompression during DML operations
* #6579 Change compress_chunk and decompress_chunk to idempotent version by default
* #6608 Add LWLock for OSM usage in loader
* #6609 Deprecate recompress_chunk
* #6609 Add optional recompress argument to compress_chunk

**Bugfixes**
* #6541 Inefficient join plans on compressed hypertables.
* #6491 Enable now() plantime constification with BETWEEN
* #6494 Fix create_hypertable referenced by fk succeeds
* #6498 Suboptimal query plans when using time_bucket with query parameters
* #6507 time_bucket_gapfill with timezones doesn't handle daylight savings
* #6509 Make extension state available through function
* #6512 Log extension state changes
* #6522 Disallow triggers on CAggs
* #6523 Reduce locking level on compressed chunk index during segmentwise recompression
* #6531 Fix if_not_exists behavior for CAgg policy with NULL offsets
* #6571 Fix pathtarget adjustment for MergeAppend paths in aggregation pushdown code
* #6575 Fix compressed chunk not found during upserts
* #6592 Fix recompression policy ignoring partially compressed chunks
* #6610 Ensure qsort comparison function is transitive

**Thanks**
* @coney21 and @GStechschulte for reporting the problem with inefficient join plans on compressed hypertables.
* @HollowMan6 for reporting triggers not working on materialized views of
CAggs
* @jbx1 for reporting suboptimal query plans when using time_bucket with query parameters
* @JerkoNikolic for reporting the issue with gapfill and DST
* @pdipesh02 for working on removing the old Continuous Aggregate format
* @raymalt and @martinhale for reporting very slow query plans on realtime CAggs queries
antekresic added six further commits referencing this pull request on Feb 8, 2024 (including one to antekresic/timescaledb), each carrying the same release notes as above.
Labels: auto-backport-not-done (Automated backport of this PR has failed non-retriably, e.g. conflicts), bug, compression

6 participants