Fix REASSIGN OWNED BY for background jobs #6987
Conversation
Codecov Report

@@ Coverage Diff @@
## main #6987 +/- ##
==========================================
+ Coverage 80.06% 81.77% +1.70%
==========================================
Files 190 203 +13
Lines 37181 38002 +821
Branches 9450 9851 +401
==========================================
+ Hits 29770 31075 +1305
+ Misses 2997 2967 -30
+ Partials 4414 3960 -454
Please add a test for reassigning to postgres.
I assume you mean a user with SUPERUSER privileges: we do not have the "postgres" user in the test suite.
I updated it with a test that demonstrates that assigning the job to a different owner does not work unless you have superuser privileges. I don't think this is how we want it to work, since an administrator should be able to reassign jobs without having superuser privileges.
I wanted to see that it doesn't work; anything else would be a security vulnerability.
Yes, obviously, but in this form it is also a usability issue. I am not sure how to have some sort of administrative user that does not have superuser privileges but can still run a `REASSIGN OWNED BY` in a limited fashion. I am not even sure it is possible to define a "safe" rule for this. Something we need to think about.
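For ordinary objects, PostgreSQL itself already allows a limited form of this: a role that is a member of both the old and the new owner can run the command without being a superuser. Below is a minimal sketch of that pattern with hypothetical role names; whether the job-specific handling added here can safely follow the same rule is the open question.

```sql
-- Hypothetical "admin" role that is a member of both the old and the new
-- owner, which is roughly what plain PostgreSQL requires for REASSIGN OWNED BY.
CREATE ROLE old_owner LOGIN;
CREATE ROLE new_owner LOGIN;
CREATE ROLE job_admin LOGIN;
GRANT old_owner TO job_admin;
GRANT new_owner TO job_admin;

SET ROLE job_admin;
-- Works for ordinary objects owned by old_owner; for background jobs the
-- behaviour discussed above still requires superuser privileges.
REASSIGN OWNED BY old_owner TO new_owner;
RESET ROLE;
```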
CREATE OR REPLACE FUNCTION insert_job(
       application_name NAME,
       job_type NAME,
       schedule_interval INTERVAL,
       max_runtime INTERVAL,
       retry_period INTERVAL,
       owner regrole DEFAULT CURRENT_ROLE::regrole,
       scheduled BOOL DEFAULT true,
       fixed_schedule BOOL DEFAULT false
) RETURNS INT LANGUAGE SQL SECURITY DEFINER AS
$$
  INSERT INTO _timescaledb_config.bgw_job(application_name,schedule_interval,max_runtime,max_retries,
                                          retry_period,proc_name,proc_schema,owner,scheduled,fixed_schedule)
  VALUES($1,$3,$4,5,$5,$2,'public',$6,$7,$8) RETURNING id;
$$;
Since we're using it in other suite tests as well, what about moving it to utils/testsupport.sql?
It makes sense, but since this is just copied from the other BGW tests, it would be sensible to do that as a separate refactoring PR and fix the other uses as well.
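For reference, a usage sketch for the helper above; the procedure name and the intervals are made up for illustration, and `public.custom_proc(job_id INT, config JSONB)` is assumed to exist:

```sql
-- Registers a background job directly in _timescaledb_config.bgw_job and
-- returns the new job id. The owner defaults to the calling role.
SELECT insert_job(
    'test job',          -- application_name
    'custom_proc',       -- job_type, stored as proc_name in schema public
    interval '1 hour',   -- schedule_interval
    interval '10 min',   -- max_runtime
    interval '5 min'     -- retry_period
);
```

Because the helper is `SECURITY DEFINER` and takes an explicit `owner` argument, it lets the tests create jobs owned by unprivileged roles without going through the `add_job` API.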
Using `REASSIGN OWNED BY` for background jobs does not work because it does not change the owner of the job. This commit fixes this by capturing the utility command and making the necessary changes to the `bgw_job` table. It also factors out the background jobs DDL tests into a separate file.
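With the fix in place, the intended flow looks roughly like this; a minimal sketch in which the role names and the procedure passed to `add_job` are hypothetical:

```sql
CREATE ROLE job_owner LOGIN;
CREATE ROLE other_role LOGIN;

-- Create a job owned by job_owner.
SET ROLE job_owner;
SELECT add_job('custom_proc'::regproc, interval '1 hour');
RESET ROLE;

-- The utility command is intercepted so that the owner of the job is
-- updated along with the other objects owned by job_owner.
REASSIGN OWNED BY job_owner TO other_role;

-- The job should now list other_role as its owner.
SELECT application_name, owner
  FROM _timescaledb_config.bgw_job
 WHERE owner = 'other_role'::regrole;
```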
This release contains performance improvements and bug fixes since the 2.15.3 release. We recommend that you upgrade at the next available opportunity.

**Features**
* timescale#6880: Add support for the array operators used for compressed DML batch filtering.
* timescale#6895: Improve the compressed DML expression pushdown.
* timescale#6897: Add support for replica identity on compressed hypertables.
* timescale#6918: Remove support for PG13.
* timescale#6920: Rework compression activity wal markers.
* timescale#6989: Add support for foreign keys when converting plain tables to hypertables.
* timescale#7020: Add support for the chunk column statistics tracking.
* timescale#7048: Add an index scan for INSERT DML decompression.
* timescale#7075: Reduce decompression on the compressed INSERT.
* timescale#7101: Reduce decompressions for the compressed UPDATE/DELETE.
* timescale#7108: Reduce decompressions for INSERTs with UNIQUE constraints.
* timescale#7116: Use DELETE instead of TRUNCATE after compression.
* timescale#7134: Refactor foreign key handling for compressed hypertables.
* timescale#7161: Fix `mergejoin input data is out of order`.

**Bugfixes**
* timescale#6987: Fix REASSIGN OWNED BY for background jobs.
* timescale#7018: Fix `search_path` quoting in the compression defaults function.
* timescale#7046: Prevent locking for compressed tuples.
* timescale#7055: Fix the `scankey` for `segment by` columns, where the type `constant` is different to `variable`.
* timescale#7064: Fix the bug in the default `order by` calculation in compression.
* timescale#7069: Fix the index column name usage.
* timescale#7074: Fix the bug in the default `segment by` calculation in compression.

**Thanks**
* @jledentu for reporting a problem with mergejoin input order.
This release contains significant performance improvements when working with compressed data, extended join support in continuous aggregates, and the ability to define foreign keys from regular tables towards hypertables. We recommend that you upgrade at the next available opportunity.

In TimescaleDB v2.16.0 we:
* Introduce multiple performance-focused optimizations for data manipulation operations (DML) over compressed chunks. Upsert performance improved by more than 100x in some cases, and by more than 1000x in some update/delete scenarios.
* Add the ability to define chunk skipping indexes on non-partitioning columns of compressed hypertables. TimescaleDB v2.16.0 extends chunk exclusion to use those skipping (sparse) indexes when queries filter on the relevant columns, and prunes chunks that do not include any relevant data for calculating the query response.
* Offer new options for use cases that require foreign keys defined. You can now add foreign keys from regular tables towards hypertables. We have also removed some really annoying locks in the reverse direction that blocked access to referenced tables while compression was running.
* Extend continuous aggregates to support more types of analytical queries. More types of joins are supported, additional equality operators on join clauses, and joins between multiple regular tables.

**Highlighted features in this release**
* Improved query performance through chunk exclusion on compressed hypertables. You can now define chunk skipping indexes on compressed chunks for any column with one of the following data types: `smallint`, `int`, `bigint`, `serial`, `bigserial`, `date`, `timestamp`, `timestamptz`. After you call `enable_chunk_skipping` on a column, TimescaleDB tracks the min and max values for that column and uses that information to exclude chunks for queries that filter on that column and would not find any data in those chunks (see the sketch after these notes).
* Improved upsert performance on compressed hypertables. By using index scans to verify constraints during inserts on compressed chunks, TimescaleDB speeds up some ON CONFLICT clauses by more than 100x.
* Improved performance of updates, deletes, and inserts on compressed hypertables. By filtering data while accessing the compressed data and before decompressing, TimescaleDB has improved performance for updates and deletes on all types of compressed chunks, as well as inserts into compressed chunks with unique constraints. By signaling constraint violations without decompressing, or decompressing only when matching records are found in the case of updates, deletes and upserts, TimescaleDB v2.16.0 speeds up those operations by more than 1000x in some update/delete scenarios, and by 10x for upserts.
* You can add foreign keys from regular tables to hypertables, with support for all types of cascading options. This is useful for hypertables that partition using sequential IDs and need to reference those IDs from other tables.
* Lower locking requirements during compression for hypertables with foreign keys. Advanced foreign key handling removes the need to lock referenced tables when new chunks are compressed. DML is no longer blocked on referenced tables while compression runs on a hypertable.
* Improved support for queries on continuous aggregates. `INNER`/`LEFT` and `LATERAL` joins are now supported. Plus, you can now join with multiple regular tables, and you can have more than one equality operator on join clauses.

**PostgreSQL 13 support removal announcement**

Following the deprecation announcement for PostgreSQL 13 in TimescaleDB v2.13, PostgreSQL 13 is no longer supported in TimescaleDB v2.16. The currently supported PostgreSQL major versions are 14, 15 and 16.

**Features**
* #6880: Add support for the array operators used for compressed DML batch filtering.
* #6895: Improve the compressed DML expression pushdown.
* #6897: Add support for replica identity on compressed hypertables.
* #6918: Remove support for PG13.
* #6920: Rework compression activity wal markers.
* #6989: Add support for foreign keys when converting plain tables to hypertables.
* #7020: Add support for the chunk column statistics tracking.
* #7048: Add an index scan for INSERT DML decompression.
* #7075: Reduce decompression on the compressed INSERT.
* #7101: Reduce decompressions for the compressed UPDATE/DELETE.
* #7108: Reduce decompressions for INSERTs with UNIQUE constraints.
* #7116: Use DELETE instead of TRUNCATE after compression.
* #7134: Refactor foreign key handling for compressed hypertables.
* #7161: Fix `mergejoin input data is out of order`.

**Bugfixes**
* #6987: Fix REASSIGN OWNED BY for background jobs.
* #7018: Fix `search_path` quoting in the compression defaults function.
* #7046: Prevent locking for compressed tuples.
* #7055: Fix the `scankey` for `segment by` columns, where the type `constant` is different to `variable`.
* #7064: Fix the bug in the default `order by` calculation in compression.
* #7069: Fix the index column name usage.
* #7074: Fix the bug in the default `segment by` calculation in compression.

**Thanks**
* @jledentu for reporting a problem with mergejoin input order.
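The chunk-skipping workflow referenced in the notes above might look like this; a minimal sketch in which the table and column names are hypothetical:

```sql
-- Hypothetical hypertable partitioned on time, with a secondary integer
-- column we also want to prune chunks by.
CREATE TABLE conditions (
    time        timestamptz NOT NULL,
    device_id   int         NOT NULL,
    temperature double precision
);
SELECT create_hypertable('conditions', 'time');

-- Chunk skipping is used together with compression.
ALTER TABLE conditions SET (timescaledb.compress);

-- Track min/max values of device_id per chunk so that queries filtering on
-- it can exclude chunks that cannot contain matching rows.
SELECT enable_chunk_skipping('conditions', 'device_id');

-- Chunks whose tracked device_id range does not cover 42 are excluded.
SELECT * FROM conditions
 WHERE device_id = 42
   AND time > now() - interval '1 day';
```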