Minor version update to PostgreSQL 17.1 #524

Merged
hlinnaka merged 100 commits into REL_17_STABLE_neon from minor-update-17 on Nov 13, 2024

Conversation

hlinnaka
Contributor

No description provided.

tglsfdc and others added 30 commits September 24, 2024 17:21
Commit d1379eb carelessly broke printACLColumn for pre-9.4 servers,
by using the cardinality() function which we introduced in 9.4.
We expect psql's describe-related commands to work back to 9.2, so
this is bad.  Use the longstanding array_length() function instead.

Per report from Christoph Berg.  Back-patch to v17.

Discussion: https://postgr.es/m/[email protected]
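As a quick illustration (queries invented here, not taken from the psql source), both functions report the array's length, but only array_length() exists on the pre-9.4 servers that psql's describe commands still support:

```sql
-- array_length() is longstanding; cardinality() only arrived in 9.4.
SELECT array_length(ARRAY['alice', 'bob'], 1);  -- 2, works on old servers too
SELECT cardinality(ARRAY['alice', 'bob']);      -- 2, but 9.4+ only
```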
The current use always releases this locktag.  A planned use will
continue that intent.  It will involve more areas of code, making unlock
omissions easier.  Warn under debug_assertions, like we do for various
resource leaks.  Back-patch to v12 (all supported versions), the plan
for the commit of the new use.

Reviewed by Heikki Linnakangas.

Discussion: https://postgr.es/m/[email protected]
As previously-added tests demonstrated, heap_inplace_update() could
instead update an unrelated tuple of the same catalog.  It could lose
the update.  Losing relhasindex=t was a source of index corruption.
Inplace-updating commands like VACUUM will now wait for heap_update()
commands like GRANT TABLE and GRANT DATABASE.  That isn't ideal, but a
long-running GRANT already hurts VACUUM progress more just by keeping an
XID running.  The VACUUM will behave like a DELETE or UPDATE waiting for
the uncommitted change.

For implementation details, start at the systable_inplace_update_begin()
header comment and README.tuplock.  Back-patch to v12 (all supported
versions).  In back branches, retain a deprecated heap_inplace_update(),
for extensions.

Reported by Smolkin Grigory.  Reviewed by Nitin Motiani, (in earlier
versions) Heikki Linnakangas, and (in earlier versions) Alexander
Lakhin.

Discussion: https://postgr.es/m/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com
The previous commit fixed some ways of losing an inplace update.  It
remained possible to lose one when a backend working toward a
heap_update() copied a tuple into memory just before inplace update of
that tuple.  In catalogs eligible for inplace update, use LOCKTAG_TUPLE
to govern admission to the steps of copying an old tuple, modifying it,
and issuing heap_update().  This includes MERGE commands.  To avoid
changing most of the pg_class DDL, don't require LOCKTAG_TUPLE when
holding a relation lock sufficient to exclude inplace updaters.
Back-patch to v12 (all supported versions).  In v13 and v12, "UPDATE
pg_class" or "UPDATE pg_database" can still lose an inplace update.  The
v14+ UPDATE fix needs commit 86dc900,
and it wasn't worth reimplementing that fix without such infrastructure.

Reviewed by Nitin Motiani and (in earlier versions) Heikki Linnakangas.

Discussion: https://postgr.es/m/[email protected]
Running vacuumdb with a non-superuser while another user has created a
temporary table would lead to a mid-flight permission failure,
interrupting the operation.  vacuum_rel() skips temporary relations of
other backends, and it makes no sense for vacuumdb to know about these
relations, so let's switch it to ignore temporary relations entirely.

Adding a qual in the query based on relpersistence simplifies the
generation of its WHERE clause in vacuum_one_database(), allowing the
removal of "has_where".

Author: VaibhaveS, Michael Paquier
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CAM_eQjwfAR=y3G1fGyS1U9FTmc+FyJm9amNfY2QCZBnDDbNPZg@mail.gmail.com
Backpatch-through: 12
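A rough sketch of the kind of qual involved (a simplified, hypothetical query, not the exact text vacuumdb builds): filtering on relpersistence keeps other sessions' temporary relations out of the list up front.

```sql
-- Hypothetical simplified object-listing query: skip temporary relations
-- (relpersistence = 't') instead of discovering them mid-vacuum.
SELECT c.oid::pg_catalog.regclass AS relation
FROM pg_catalog.pg_class c
WHERE c.relkind IN ('r', 'm')
  AND c.relpersistence <> 't';
```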
There's no need to add another level of indentation to this status
message.  pg_log() will put it in the right place.

Oversight in commit 347758b.

Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/ZunW7XHLd2uTts4f%40nathan
Backpatch-through: 17
Commit e08d74c removed that restriction, but missed updating
the documentation about it.  Noted by Egor Rogov.

Discussion: https://postgr.es/m/[email protected]
An invalid toast index is skipped in reindex_relation().  These would be
remnants of a failed REINDEX CONCURRENTLY and they should never be
rebuilt, as there can only be one valid toast index at a time.

REINDEX_REL_SUPPRESS_INDEX_USE, used by CLUSTER and VACUUM FULL, needs
to maintain a list of the indexes being processed.  The list of indexes
is retrieved from the relation cache, and includes invalid indexes.  The
code missed that invalid toast indexes are ignored in
reindex_relation(), as rebuilding them leads to a hard failure in
reindex_index(), so they were left in the reindex pending list, making
the list inconsistent when rechecked.  The incorrect memory access
happened while scanning pg_class to refresh pg_database.datfrozenxid.

This issue has existed since REINDEX CONCURRENTLY was introduced, which
can leave behind invalid toast indexes, so backpatch all the way down.

Reported-by: Alexander Lakhin
Author: Tender Wang
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 12
Since v17, CREATE MATERIALIZED VIEW has set search_path to
"pg_catalog, pg_temp" while running the query.  The docs for the
other commands that restrict search_path mention it, but the page
for CREATE MATERIALIZED VIEW does not.  Fix that.

Oversight in commit 4b74ebf.

Author: Yugo Nagata
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20240805160502.d2a4975802a832b1e04afb80%40sraoss.co.jp
Backpatch-through: 17
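A small self-contained sketch of the documented behavior (object names invented): because the command runs with search_path set to "pg_catalog, pg_temp", references inside the query should be schema-qualified rather than rely on the caller's search_path.

```sql
CREATE SCHEMA app;
CREATE TABLE app.base_table (id int);
CREATE FUNCTION app.double_it(int) RETURNS int
    LANGUAGE sql IMMUTABLE RETURN $1 * 2;

-- Runs with search_path = "pg_catalog, pg_temp" since v17, so unqualified
-- references to objects in schema "app" would not resolve here.
CREATE MATERIALIZED VIEW app.mv AS
    SELECT id, app.double_it(id) AS doubled
    FROM app.base_table;
```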
Back-patch to v12 (all supported versions).
Defect in last week's commit aac2c9b,
per Coverity.  Reaching this would need catalog corruption.  Back-patch
to v12, like that commit.
Reindexing temp tables or indexes of other sessions is not allowed.
However, reindexdb in parallel mode previously listed them as
the objects to process, leading to failures.

This commit ensures reindexdb in parallel mode skips temporary tables
and indexes by adding a condition based on the relpersistence column
in pg_class to the object listing queries, preventing these issues.

Note that this commit does not affect reindexdb when temporary tables
or indexes are explicitly specified using the -t or -i options;
reindexdb in that case still does not skip them and can cause an error.

Back-patch to v13 where parallel mode was introduced in reindexdb.

Author: Fujii Masao
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/[email protected]
For some reason this listed "-f" and "-w" as valid switches, though
the code doesn't implement any such thing nor do the docs mention
them.  The effect of this was that if you tried to use one of these
switches, you'd get an unhelpful error message.

Yusuke Sugie

Discussion: https://postgr.es/m/[email protected]
COMMIT PREPARED removes on-disk 2PC files near its end, but the state
used to check whether a file is on-disk gets read from shared memory while
not holding the two-phase state lock.

Because of that, there was a small window where a second backend doing a
PREPARE TRANSACTION could reuse the GlobalTransaction put back into the
2PC free list by the COMMIT PREPARED, overwriting the "ondisk" flag read
afterwards by the COMMIT PREPARED to decide if its on-disk two-phase
state file should be removed, preventing the file deletion.

This commit fixes this issue so that the "ondisk" flag in the
GlobalTransaction is read while holding the two-phase state lock, not
from shared memory after its entry has been added to the free list.

Orphaned two-phase state files flushed to disk after a checkpoint are
discarded at the beginning of recovery.  However, a truncation of
pg_xact/ would make the startup process issue a FATAL when it cannot
read the SLRU page holding the state of the transaction whose 2PC file
was orphaned, which is a necessary step to decide if the 2PC file should
be removed or not.  Manually removing the file would be necessary in
this case.

Issue introduced by effe7d9, so backpatch all the way down.

Mea culpa.

Author: wuchengwen
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 12
There were unnecessary non-breaking spaces (nbsp, U+00A0, 0xc2a0 in
UTF-8) in the docs.  This commit replaces them with ASCII spaces
(0x20).

config.sgml is backpatched through 17.
ref/drop_extension.sgml is backpatched through 13.

Discussion: https://postgr.es/m/20240930.153404.202479334310259810.ishii%40postgresql.org
Reviewed-by: Yugo Nagata, Daniel Gustafsson
Backpatch-through: 17, 13
The fork name is always separated from the block number by an underscore
in the names of the generated files, but the docs stuck them together
without a separator, which was confusing.

Author: Christoph Berg
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 16
Use pqParseIntParam (nee parse_int_param) instead of using strtol
directly.  This allows trailing whitespace, which the previous coding
didn't, and makes the spelling of the error message consistent with
other similar cases.

This seems to be an oversight in commit e7a2217, which introduced
parse_int_param.  That fixed places that were using atoi(), but missed
this place which was randomly using strtol() instead.

Ordinarily I'd consider this minor cleanup not worth back-patching.
However, it seems that ecpg assumes it can add trailing whitespace
to URL parameters, so that use of the keepalives option fails in
that context.  Perhaps that's worth improving as a separate matter.
In the meantime, back-patch this to all supported branches.

Yuto Sasaki (some further cleanup by me)

Discussion: https://postgr.es/m/TY2PR01MB36286A7B97B9A15793335D18C1772@TY2PR01MB3628.jpnprd01.prod.outlook.com
…URCE.

If a MERGE command contains WHEN NOT MATCHED BY SOURCE actions, the
merge join condition is used by the executor to distinguish MATCHED
from NOT MATCHED BY SOURCE cases. However, this qual is executed using
the output from the join subplan node, which nulls the output from the
source relation in the not matched case, and so the result may be
incorrect if the join condition is "non-strict" -- for example,
something like "src.col IS NOT DISTINCT FROM tgt.col".

Fix this by enhancing the join recheck condition with an additional
"src IS NOT NULL" check, so that it does the right thing when
evaluated using the output from the join subplan.

Noted by Tom Lane while investigating bug #18634 from Alexander
Lakhin.

Back-patch to v17, where WHEN NOT MATCHED BY SOURCE support was added
to MERGE.

Discussion: https://postgr.es/m/18634-db5299c937877f2b%40postgresql.org
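For illustration, a self-contained query of the affected shape (tables invented), using the non-strict condition mentioned above:

```sql
CREATE TEMP TABLE tgt (col int, val text);
CREATE TEMP TABLE src (col int, val text);

-- The ON clause is non-strict, so distinguishing MATCHED from
-- NOT MATCHED BY SOURCE needs the extra "src IS NOT NULL" recheck.
MERGE INTO tgt
USING src ON src.col IS NOT DISTINCT FROM tgt.col
WHEN MATCHED THEN UPDATE SET val = src.val
WHEN NOT MATCHED BY SOURCE THEN DELETE
WHEN NOT MATCHED THEN INSERT (col, val) VALUES (src.col, src.val);
```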
If a MERGE command contains WHEN NOT MATCHED BY SOURCE actions, the
source relation appears on the outer side of the join. Thus, any Vars
referring to the source in the merge join condition, actions, and
RETURNING list should be marked as nullable by the join, since they
are used in the ModifyTable node above the join. Note that this only
applies to the copy of join condition used in the executor to
distinguish MATCHED from NOT MATCHED BY SOURCE cases. Vars in the
original join condition, inside the join node itself, should not be
marked.

Failure to correctly mark these Vars led to a "wrong varnullingrels"
error in the final stage of query planning, in some circumstances. We
happened to get away without this in all previous tests, since they
all involved a ModifyTable node directly on top of the join node, so
that the top plan targetlist coincided with the output of the join,
and the varnullingrels check was more lax. However, if another plan
node, such as a one-time filter Result node, gets inserted between the
ModifyTable node and the join node, then a stricter check is applied,
which fails.

Per bug #18634 from Alexander Lakhin. Thanks to Tom Lane and Richard
Guo for review and analysis.

Back-patch to v17, where WHEN NOT MATCHED BY SOURCE support was added
to MERGE.

Discussion: https://postgr.es/m/18634-db5299c937877f2b%40postgresql.org
Commit bf03cfd started scanning all available BCP 47 locale names on
Windows.  This caused an abort/crash in the Windows runtime library if
the default locale name contained non-ASCII characters, because of our
use of the setlocale() save/restore pattern with "char" strings.  After
switching to another locale with a different encoding, the saved name
could no longer be understood, and setlocale() would abort.

"Turkish_Türkiye.1254" is the example from recent reports, but there are
other examples of countries and languages with non-ASCII characters in
their names, and they appear in Windows' (old style) locale names.

To defend against this:

1.  In initdb, reject non-ASCII locale names given explicitly on the
command line, or returned by the operating system environment with
setlocale(..., ""), or "canonicalized" by the operating system when we
set it.

2.  In initdb only, perform the save-and-restore with Windows'
non-standard wchar_t variant of setlocale(), so that it is not subject
to round trip failures stemming from char string encoding confusion.

3.  In the backend, we don't have to worry about the save-and-restore
problem because we have already vetted the defaults, so we just have to
make sure that CREATE DATABASE also rejects non-ASCII names in any new
databases.  SET lc_XXX doesn't suffer from the problem, but the ban
applies to it too because it uses check_locale().  CREATE COLLATION
doesn't suffer from the problem either, but it doesn't use
check_locale() so it is not included in the new ban for now, to minimize
the change.

Anyone who encounters the new error message should either create a new
duplicated locale with an ASCII-only name using Windows Locale Builder,
or consider using BCP 47 names like "tr-TR".  Users already couldn't
initialize a cluster with "Turkish_Türkiye.1254" on PostgreSQL 16+, but
the new failure mode is an error message that explains why, instead of a
crash.

Back-patch to 16, where bf03cfd landed.  Older versions are affected
in theory too, but only 16 and later are causing crash reports.

Reviewed-by: Andrew Dunstan <[email protected]> (the idea, not the patch)
Reported-by: Haifang Wang (Centific Technologies Inc) <[email protected]>
Discussion: https://postgr.es/m/PH8PR21MB3902F334A3174C54058F792CE5182%40PH8PR21MB3902.namprd21.prod.outlook.com
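As a hedged example of the suggested workaround (database name invented; only relevant on Windows builds using libc locales), a BCP 47 locale name avoids the non-ASCII legacy name entirely:

```sql
-- Prefer a BCP 47 name such as "tr-TR" over "Turkish_Türkiye.1254".
CREATE DATABASE turkish_demo
    TEMPLATE template0
    LOCALE 'tr-TR';
```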
When instantiating an existing partitioned index for a new child
partition, we use generateClonedIndexStmt to build a suitable
IndexStmt to pass to DefineIndex.  However, when DefineIndex needs
to recurse to instantiate a newly created partitioned index on an
existing child partition, it was doing copyObject on the given
IndexStmt and then applying a bunch of ad-hoc fixups.  This has
a number of problems, primarily that it implies fresh lookups of
referenced objects such as opclasses and collations.  Since commit
2af07e2 caused DefineIndex to restrict search_path internally, those
lookups could fail or deliver different results than the original one.
We can avoid those problems and save a few dozen lines of code by
using generateClonedIndexStmt in this code path too.

Another thing this fixes is incorrect propagation of parent-index
comments to child indexes (because the copyObject approach copies
the idxcomment field while generateClonedIndexStmt doesn't).  I had
noticed this in connection with commit c01eb61, but not run the
problem to ground.

I'm tempted to back-patch this further than v17, but the only thing
it's known to fix in older branches is the comment issue, which is
pretty minor and doesn't seem worth the risk of introducing new
issues in stable branches.  (If anyone does care about that,
clearing idxcomment in the copied IndexStmt would be a safer fix.)

Per bug #18637 from usamoi.  Back-patch to v17 where the search_path
change came in.

Discussion: https://postgr.es/m/[email protected]
pg_cursor() supposed that any Portal it finds in the hash table must
have sourceText set up, but there's an edge case where that is not so.
A newly-created Portal has sourceText = NULL, and that doesn't change
until PortalDefineQuery is called.  In SPI_cursor_open_internal,
we perform GetCachedPlan between CreatePortal and PortalDefineQuery,
and it's possible for user-defined code to execute during that
planning and cause a fetch from the pg_cursors view, resulting in a
null-pointer-dereference crash.  (It looks like the same could happen
in exec_bind_message, but I've not tried to provoke a failure there.)

I considered trying to fix this by setting sourceText sooner, but
there may be instances of this same calling pattern in extensions,
and we couldn't be sure they'd get the memo promptly.  It seems
better to redefine pg_cursor as not showing Portals that have
not yet had PortalDefineQuery called on them, which we can do by
just skipping them if sourceText is still NULL.

(Before a1c6923, pg_cursor would instead return a row with NULL
in the statement column.  We could revert to that behavior but it
doesn't really seem like a better definition, especially since our
documentation doesn't suggest that the column could be NULL.)

Per report from PetSerAl.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKygsHTBXLXjwV43kpZa+Cs+XTiaeeJiZdL4cPBm9f4MTdw7wg@mail.gmail.com
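A minimal illustration of the redefined behavior: fully set-up cursors still appear in pg_cursors, while portals that have not yet reached PortalDefineQuery are simply omitted.

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT generate_series(1, 3);
SELECT name, statement FROM pg_cursors;  -- shows cursor "c" with its query text
COMMIT;
```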
…actions.

During logical decoding of in-progress transactions, we perform the toast
table scan while fetching the default toast value for an attribute. We
forgot to initialize the flag during this scan to indicate that the system
table scan is in progress. We need this flag to ensure that during logical
decoding we never directly access the tableam or heap APIs because we check
for concurrent aborts only in systable_* APIs.

Reported-by: Alexander Lakhin
Author: Takeshi Ideriha, Hou Zhijie
Reviewed-by: Amit Kapila, Hou Zhijie
Backpatch-through: 14
Discussion: https://postgr.es/m/[email protected]
Several places treat MyStartTime as a "long", which is only 32 bits
wide on some platforms.  In reality, MyStartTime is a pg_time_t,
i.e., a signed 64-bit integer.  This will lead to interesting bugs
on the aforementioned systems in 2038 when signed 32-bit integers
are no longer sufficient to store Unix time (e.g., "pg_ctl start"
hanging).  To fix, ensure that MyStartTime is handled as a 64-bit
value everywhere.  (Of course, users will need to ensure that
time_t is 64 bits wide on their system, too.)

Co-authored-by: Max Johnson
Discussion: https://postgr.es/m/CO1PR07MB905262E8AC270FAAACED66008D682%40CO1PR07MB9052.namprd07.prod.outlook.com
Backpatch-through: 12
Commit 1ab67c9, which modified this catalog query so that it
doesn't return temporary relations, forgot to schema-qualify the
operator.  A comment earlier in the function implores us to fully
qualify everything in the query:

	 * Since we execute the constructed query with the default search_path
	 * (which could be unsafe), everything in this query MUST be fully
	 * qualified.

This commit fixes that.  While at it, add a newline for consistency
with surrounding code.

Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/ZwQJYcuPPUsF0reU%40nathan
Backpatch-through: 12
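For reference, the OPERATOR() syntax is what full qualification looks like in such queries; a simplified, hypothetical fragment (not the exact vacuumdb query text):

```sql
-- Under an untrusted search_path, even operators get schema-qualified.
SELECT c.oid
FROM pg_catalog.pg_class c
WHERE c.relpersistence OPERATOR(pg_catalog.!=) 't';
```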
Reported-by: Laurenz Albe

Discussion: https://postgr.es/m/[email protected]

Backpatch-through: 17 only
The cache needs to be available very early, so don't rely on
InitializeSearchPath() to initialize it.

Reported-by: Murat Efendioğlu
Discussion: https://postgr.es/m/CACbCzujQ4zS8MM1bx-==+tr+D3Hk5G1cjN4XkUQ+Q=cEpwhzqg@mail.gmail.com
Backpatch-through: 17
The test was unstable in branches 14 and 15 as we were relying on the
number of changes in the table having a toast column to start streaming.
On branches >= 16, we have a GUC debug_logical_replication_streaming which
can stream each change, so the test was stable in those branches.

Change the test to use PREPARE TRANSACTION as that should make the result
consistent and test the code changed in 022564f.

Reported-by: Daniel Gustafsson as per buildfarm
Author: Hou Zhijie, Amit Kapila
Backpatch-through: 14
Discussion: https://postgr.es/m/[email protected]
Commit 149ac7d, which re-implemented pgindent in Perl, explicitly
imported the devnull function from File::Spec, but the module does
not export anything.  In recent versions of Perl, calling a missing
import function causes a warning, which, combined with warnings being
fatal, causes pgindent to error out.

Backpatch to all supported versions.

Author: Erik Wienhold <[email protected]>
Reviewed-by: Andrew Dunstan <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: v12
The release notes accidentally spelled AT LOCAL as AS LOCAL.

Reported-by: Takatsuka Haruka <[email protected]>
Discussion: https://postgr.es/m/[email protected]
michaelpq and others added 21 commits November 5, 2024 09:40
PgStat_HashKey is currently initialized in a way that could result in
random data if the structure has any padding bytes.  The structure
has no padding bytes currently, fortunately, but it could become a
problem should the structure change at some point in the future.

The code is changed to use memset(0) so that any padding is
handled properly, as it would be surprising to see random failures in
the pgstats entry lookups.  PgStat_HashKey is a structure internal to
pgstats, and an ABI change could be possible in the scope of a bug fix,
so backpatch down to 15 where this has been introduced.

Author: Bertrand Drouvot
Reviewed-by: Jelte Fennema-Nio, Michael Paquier
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 15
Supply a new memory manager for RuntimeDyld, to avoid crashes in
generated code caused by memory placement that can overflow a 32 bit
data type.  This is a drop-in replacement for the
llvm::SectionMemoryManager class in the LLVM library, with Michael
Smith's proposed fix from
https://www.github.com/llvm/llvm-project/pull/71968.

We hereby slurp it into our own source tree, after moving into a new
namespace llvm::backport and making some minor adjustments so that it
can be compiled with older LLVM versions as far back as 12.  It's harder
to make it work on even older LLVM versions, but it doesn't seem likely
that people are really using them so that is not investigated for now.

The problem could also be addressed by switching to JITLink instead of
RuntimeDyld, and that is the LLVM project's recommended solution as
the latter is about to be deprecated.  We'll have to do that soon enough
anyway, and then when the LLVM version support window advances far
enough in a few years we'll be able to delete this code.  Unfortunately
that wouldn't be enough for PostgreSQL today: in most relevant versions
of LLVM, JITLink is missing or incomplete.

Several other projects have already back-ported this fix into their fork
of LLVM, which is a vote of confidence despite the lack of commit into
LLVM as of today.  We don't have our own copy of LLVM so we can't do
exactly what they've done; instead we have a copy of the whole patched
class so we can pass an instance of it to RuntimeDyld.

The LLVM project hasn't chosen to commit the fix yet, and even if it
did, it wouldn't be back-ported into the releases of LLVM that most of
our users care about, so there is not much point in waiting any longer
for that.  If they make further changes and commit it to LLVM 19 or 20,
we'll still need this for older versions, but we may want to
resynchronize our copy and update some comments.

The changes that we've had to make to our copy can be seen by diffing
our SectionMemoryManager.{h,cpp} files against the ones in the tree of
the pull request.  Per the LLVM project's license requirements, a copy
is in SectionMemoryManager.LICENSE.

This should fix the spate of crash reports we've been receiving lately
from users on large memory ARM systems.

Back-patch to all supported releases.

Co-authored-by: Thomas Munro <[email protected]>
Co-authored-by: Anthonin Bonnefoy <[email protected]>
Reviewed-by: Anthonin Bonnefoy <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]> (license aspects)
Reported-by: Anthonin Bonnefoy <[email protected]>
Discussion: https://postgr.es/m/CAO6_Xqr63qj%3DSx7HY6ZiiQ6R_JbX%2B-p6sTPwDYwTWZjUmjsYBg%40mail.gmail.com
An unfortunate typo in commit 2d819a0 can cause wrong results when
the default collation provider is libc, LC_CTYPE=C, and LC_COLLATE is
a real locale. Users with this combination of settings must REINDEX
all affected indexes.

The same typo can also cause performance degradation when LC_COLLATE=C
and LC_CTYPE is a real locale.

Problem does not exist in master (due to refactoring), so fix only in
version 17.

Reported-by: Drew Callahan
Discussion: https://postgr.es/m/[email protected]
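For affected setups, the fix does not repair existing indexes by itself; something along these lines is needed after updating (index name invented):

```sql
-- Rebuild indexes on text columns that were built with the buggy comparisons.
REINDEX INDEX my_text_idx;
-- or, more broadly:
REINDEX DATABASE;
```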
The previous wording is easy to read incorrectly; this change makes it
simpler, less ambiguous, and less prominent.

Backpatch to all live branches.

Reviewed-by: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/[email protected]
When we generate multiple clones of the same qual condition to cope
with outer join identity 3, we need to ensure that all the clones get
the same serial number.  To achieve this, we reset the
root->last_rinfo_serial counter each time we produce RestrictInfo(s)
from the qual list (see deconstruct_distribute_oj_quals).  This
approach works only if we ensure that we are not changing the qual
list in any way that'd affect the number of RestrictInfos built from
it.

However, with b262ad4, an IS NULL qual on a NOT NULL column might
result in an additional constant-FALSE RestrictInfo.  And different
versions of the same qual clause can lead to different conclusions
about whether it can be reduced to constant-FALSE.  This would affect
the number of RestrictInfos built from the qual list for different
versions, causing inconsistent RestrictInfo serial numbers across
multiple clones of the same qual.  This inconsistency can confuse
users of these serial numbers, such as rebuild_joinclause_attr_needed,
and lead to planner errors such as "ERROR:  variable not found in
subplan target lists".

To fix, reset the root->last_rinfo_serial counter after generating the
additional constant-FALSE RestrictInfo.

Back-patch to v17 where the issue crept in.  In v17, I failed to make
a test case that would expose this bug, so no test case for v17.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-B6kafn+LmPuh-TYFwFyEm-vVj3Qqv7Yo-69CEv14rRg@mail.gmail.com
If the collation of any grouping column doesn’t match the collation of
the corresponding partition key, partitionwise grouping can yield
incorrect results. For example, rows that would be grouped under the
grouping collation may end up in different partitions under the
partitioning collation. In such cases, full partitionwise grouping
would produce results that differ from those without partitionwise
grouping, so disallow it.

Partial partitionwise aggregation is still allowed, as the Finalize
step reconciles partition-level aggregates with grouping requirements
across all partitions, ensuring that the final output remains
consistent.

This commit also fixes group_by_has_partkey() by ensuring the
RelabelType node is stripped from grouping expressions when matching
them to partition key expressions to avoid false mismatches.

Bug: #18568
Reported-by: Webbo Han <[email protected]>
Author: Webbo Han <[email protected]>
Reviewed-by: Tender Wang <[email protected]>
Reviewed-by: Aleksander Alekseev <[email protected]>
Reviewed-by: Jian He <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/CAHewXNno_HKiQ6PqyLYfuqDtwp7KKHZiH1J7Pqyz0nr+PS2Dwg@mail.gmail.com
Backpatch-through: 12
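A self-contained sketch of the mismatch described above (requires ICU support; names invented): rows that form one group under the grouping collation live in different partitions under the partitioning collation, so full partitionwise grouping would return two groups where one is expected.

```sql
CREATE COLLATION case_insensitive (
    provider = icu, locale = 'und-u-ks-level2', deterministic = false);

CREATE TABLE t (s text COLLATE "C") PARTITION BY LIST (s);
CREATE TABLE t_lower PARTITION OF t FOR VALUES IN ('a');
CREATE TABLE t_upper PARTITION OF t FOR VALUES IN ('A');
INSERT INTO t VALUES ('a'), ('A');

SET enable_partitionwise_aggregate = on;
-- 'a' and 'A' are equal under case_insensitive but stored in different
-- partitions, so grouping must not be pushed down in full to each partition.
SELECT s COLLATE case_insensitive, count(*) FROM t GROUP BY 1;
```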
If the collation of any join key column doesn’t match the collation of
the corresponding partition key, partitionwise joins can yield incorrect
results. For example, rows that would match under the join key collation
might be located in different partitions due to the partitioning
collation. In such cases, a partitionwise join would yield different
results from a non-partitionwise join, so disallow it in such cases.

Reported-by: Tender Wang <[email protected]>
Author: Jian He <[email protected]>
Reviewed-by: Tender Wang <[email protected]>
Reviewed-by: Junwang Zhao <[email protected]>
Discussion: https://postgr.es/m/CAHewXNno_HKiQ6PqyLYfuqDtwp7KKHZiH1J7Pqyz0nr+PS2Dwg@mail.gmail.com
Backpatch-through: 12
Commit ac04aa8 put the shutoff for this into the planner, which is
not ideal because it doesn't prevent us from re-using a previously
made parallel plan.  Revert the planner change and instead put the
shutoff into InitializeParallelDSM, modeling it on the existing code
there for recovering from failure to allocate a DSM segment.

However, that code path is mostly untested, and testing a bit harder
showed there's at least one bug: ExecHashJoinReInitializeDSM is not
prepared for us to have skipped doing parallel DSM setup.  I also
thought the Assert in ReinitializeParallelWorkers is pretty
ill-advised, and replaced it with a silent Min() operation.

The existing test case added by ac04aa8 serves fine to test this
version of the fix, so no change needed there.

Patch by me, but thanks to Noah Misch for the core idea that we
could shut off worker creation when !INTERRUPTS_CAN_BE_PROCESSED.
Back-patch to v12, as ac04aa8 was.

Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com
(We lack a query for identifying broken foreign keys in the first
changelog item, but the rest of this is in reviewable shape.)

As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.

Also as usual for a .1 release, there are some entries here that
are not really relevant for v17 because they already appeared in 17.0.
Those'll be removed later.
The code in question (pg_preadv() and pg_pwritev()) has been around
for a while, but commit 15c9ac3 moved it to a header file.  If
third-party code that includes this header file is built with
-Wsign-compare on a system without preadv() or pwritev(), warnings
ensue.  This commit fixes said warnings by casting the result of
pg_pread()/pg_pwrite() to size_t, which should be safe because we
will have already checked for a negative value.

Author: Wolfgang Walther
Discussion: https://postgr.es/m/16989737-1aa8-48fd-8dfe-b7ada06509ab%40technowledgy.de
Backpatch-through: 17
This commit changes libpq so that errors reported by the backend during
the protocol negotiation for SSL and GSS are discarded by the client, as
these may include bytes that, if consumed by the client, could write
arbitrary bytes to the client's terminal.

A failure with the SSL negotiation now leads to an error immediately
reported, without a retry on any other methods allowed, like a fallback
to a plaintext connection.

A failure with GSS discards the error message received, and we allow a
fallback as it may be possible that the error is caused by a connection
attempt with a pre-v12 server, GSS encryption having been introduced in
v12.  This was a problem only with v17 and newer versions; older
versions discard the error message already in this case, assuming a
failure caused by a lack of support for GSS encryption.

Author: Jacob Champion
Reviewed-by: Peter Eisentraut, Heikki Linnakangas, Michael Paquier
Security: CVE-2024-10977
Backpatch-through: 12
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 2592030f456910263c8972668576f954fce10595
Many process environment variables (e.g. PATH) bypass the containment
expected of a trusted PL.  Hence, trusted PLs must not offer features
that achieve setenv().  Otherwise, an attacker having USAGE privilege on
the language often can achieve arbitrary code execution, even if the
attacker lacks a database server operating system user.

To fix PL/Perl, replace trusted PL/Perl %ENV with a tied hash that just
replaces each modification attempt with a warning.  Sites that reach
these warnings should evaluate the application-specific implications of
proceeding without the environment modification:

  Can the application reasonably proceed without the modification?

    If no, switch to plperlu or another approach.

    If yes, the application should change the code to stop attempting
    environment modifications.  If that's too difficult, add "untie
    %main::ENV" in any code executed before the warning.  For example,
    one might add it to the start of the affected function or even to
    the plperl.on_plperl_init setting.

In passing, link to Perl's guidance about the Perl features behind the
security posture of PL/Perl.

Back-patch to v12 (all supported versions).

Andrew Dunstan and Noah Misch

Security: CVE-2024-10979
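A hypothetical illustration of the guidance above (function name and variable invented), placing the suggested "untie %main::ENV" at the start of an affected trusted PL/Perl function:

```sql
CREATE FUNCTION plperl_env_demo() RETURNS void LANGUAGE plperl AS $$
    untie %main::ENV;          # per the guidance above, if proceeding without
                               # the environment change is acceptable
    $ENV{MY_APP_FLAG} = '1';   # no longer triggers the warning
    return;
$$;
```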
If a CTE, subquery, sublink, security invoker view, or coercion
projection references a table with row-level security policies, we
neglected to mark the plan as potentially dependent on which role
is executing it.  This could lead to later executions in the same
session returning or hiding rows that should have been hidden or
returned instead.

Reported-by: Wolfgang Walther
Reviewed-by: Noah Misch
Security: CVE-2024-10976
Backpatch-through: 12
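A minimal self-contained sketch of the affected pattern (role and object names invented): a security_invoker view over an RLS-protected table, queried under different roles within one session.

```sql
CREATE ROLE alice LOGIN;
CREATE TABLE docs (owner text, body text);
ALTER TABLE docs ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_rows ON docs USING (owner = current_user);
GRANT SELECT ON docs TO PUBLIC;

CREATE VIEW my_docs WITH (security_invoker = true) AS SELECT * FROM docs;
GRANT SELECT ON my_docs TO PUBLIC;

-- Before the fix, a plan cached while running as one role could be reused
-- after SET ROLE, returning or hiding rows meant for the other role.
SET ROLE alice;
SELECT * FROM my_docs;
RESET ROLE;
```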
The SQL spec mandates that SET SESSION AUTHORIZATION implies
SET ROLE NONE.  We tried to implement that within the lowest-level
functions that manipulate these settings, but that was a bad idea.
In particular, guc.c assumes that it doesn't matter in what order
it applies GUC variable updates, but that was not the case for these
two variables.  This problem, compounded by some hackish attempts to
work around it, led to some security-grade issues:

* Rolling back a transaction that had done SET SESSION AUTHORIZATION
would revert to SET ROLE NONE, even if that had not been the previous
state, so that the effective user ID might now be different from what
it had been.

* The same for SET SESSION AUTHORIZATION in a function SET clause.

* If a parallel worker inspected current_setting('role'), it saw
"none" even when it should see something else.

Also, although the parallel worker startup code intended to cope
with the current role's pg_authid row having disappeared, its
implementation of that was incomplete so it would still fail.

Fix by fully separating the miscinit.c functions that assign
session_authorization from those that assign role.  To implement the
spec's requirement, teach set_config_option itself to perform "SET
ROLE NONE" when it sets session_authorization.  (This is undoubtedly
ugly, but the alternatives seem worse.  In particular, there's no way
to do it within assign_session_authorization without incompatible
changes in the API for GUC assign hooks.)  Also, improve
ParallelWorkerMain to directly set all the relevant user-ID variables
instead of relying on some of them to get set indirectly.  That
allows us to survive not finding the pg_authid row during worker
startup.

In v16 and earlier, this includes back-patching 9987a7b which
fixed a violation of GUC coding rules: SetSessionAuthorization
is not an appropriate place to be throwing errors from.

Security: CVE-2024-10978
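A small sketch of the spec behavior and the rollback case involved (role name invented; run as a superuser so SET SESSION AUTHORIZATION is permitted):

```sql
CREATE ROLE app_user LOGIN;

BEGIN;
SET SESSION AUTHORIZATION app_user;   -- per spec, also implies SET ROLE NONE
SELECT current_setting('session_authorization'), current_setting('role');
ROLLBACK;

-- After rollback, both settings must return to their pre-transaction values;
-- the bug could instead leave the effective user different from what it was.
SELECT current_setting('session_authorization'), current_setting('role');
```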
meson makes the backslashes in text2macro.pl's --strip argument
into forward slashes, effectively disabling comment stripping.
That hasn't caused us issues before, but it breaks the test case
for b7e3a52a8.  We don't really need the pattern to be adjustable,
so just hard-wire it into the script instead.

Context: mesonbuild/meson#1564
Security: CVE-2024-10979
TestUpgradeXversion knows how to make the main regression database's
references to pg_regress.so be version-independent.  But it doesn't
do that for plperl's database, so that the C function added by
commit b7e3a52a8 is causing cross-version upgrade test failures.
Path of least resistance is to just drop the function at the end
of the new test.

In <= v14, also take the opportunity to clean up the generated
test files.

Security: CVE-2024-10979
…cks.

Commit 5a2fed911 had an unexpected side-effect: the parallel worker
launched for the new test case would fail if it couldn't use a
superuser-reserved connection slot.  The reason that test failed
while all our pre-existing ones worked is that the connection
privilege tests in InitPostgres had been based on the superuserness
of the leader's AuthenticatedUserId, but after the rearrangements
of 5a2fed911 we were testing the superuserness of CurrentUserId,
which the new test case deliberately made to be a non-superuser.

This all seems very accidental and probably not the behavior we really
want, but a security patch is no time to be redesigning things.
Pending some discussion about desirable semantics, hack it so that
InitPostgres continues to pay attention to the superuserness of
AuthenticatedUserId when starting a parallel worker.

Nathan Bossart and Tom Lane, per buildfarm member sawshark.

Security: CVE-2024-10978
Security: CVE-2024-10976, CVE-2024-10977, CVE-2024-10978, CVE-2024-10979
@hlinnaka hlinnaka changed the base branch from REL_16_STABLE_neon to REL_17_STABLE_neon November 12, 2024 07:51
hlinnaka added a commit to neondatabase/neon that referenced this pull request Nov 13, 2024
This includes a patch to temporarily disable one test in the pg_anon
test suite. It is an upstream issue, the test started failing with the
new PostgreSQL minor versions because of a change in the default
timezone used in tests. We don't want to block the release for this,
so just disable the test for now. See
https://gitlab.com/dalibo/postgresql_anonymizer/-/commit/199f0a392b37c59d92ae441fb8f037e094a11a52#note_2148017485

Corresponding postgres repository PRs:
neondatabase/postgres#524
neondatabase/postgres#525
neondatabase/postgres#526
neondatabase/postgres#527
@hlinnaka hlinnaka merged commit aa2e29f into REL_17_STABLE_neon Nov 13, 2024
1 check passed
@hlinnaka hlinnaka deleted the minor-update-17 branch November 13, 2024 13:09