Migration 79/04_mitigate_stream_ordering_update_race fails because there is no unique constraint matching given keys for referenced table "events" (#16192)
The SQL in question is in synapse/synapse/storage/schema/main/delta/79/04_mitigate_stream_ordering_update_race.py (lines 54 to 58 at 8ebfd57), which suggests that there isn't a unique index on events(stream_ordering).
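For context: Postgres only lets a foreign key reference columns that are covered by a primary key or unique constraint, which is what produces this error. A minimal, self-contained sketch of the failure and of a check for the expected index, using illustrative table names rather than Synapse's actual schema:

```sql
-- Minimal reproduction of the Postgres error (illustrative tables, not Synapse's schema):
CREATE TABLE parent (id bigint);          -- note: no PRIMARY KEY or UNIQUE on id
CREATE TABLE child (
    parent_id bigint,
    FOREIGN KEY (parent_id) REFERENCES parent (id)
);
-- ERROR:  there is no unique constraint matching given keys for referenced table "parent"

-- Check whether any index covering events.stream_ordering exists:
SELECT indexname, indexdef
  FROM pg_indexes
 WHERE tablename = 'events'
   AND indexdef ILIKE '%stream_ordering%';
```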
Can you share the output?
Yep, here's the output.
It looks like you have a long backlog of background updates that have failed to apply, and the migration in question is failing because it requires one of them to have completed. This is very strange and needs more investigation. Can you downgrade to older versions of Synapse until you find one that starts up without the error message you quoted? Once Synapse is started, can you leave it running for an hour, then grep the logs for background update activity?
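As a practical aside, the queued background updates can also be dumped straight from Postgres. A sketch, assuming the column names of Synapse's background_updates table (update_name, ordering, depends_on); worth double-checking against your own schema:

```sql
-- List background updates that have not yet completed, in the order they should run
SELECT update_name, ordering, depends_on
  FROM background_updates
 ORDER BY ordering;
```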
I think v1.87.0 should work since you want earlier than #15998 / #15887?
The relevant bit seems to be:
I'm also having this problem, but I am upgrading from an older database (v59). For me, the problem seems to be that background updates queued in v60 are not able to run before the v79 update throws an exception, which then prevents Synapse from starting. I can see why this is failing, though: the index doesn't exist because the v60 background updates haven't run. Delta 60 sets these up when it creates the stream_ordering2 field:
Because my homeserver refuses to start now, I can't get these background updates to run. This will probably happen to anyone who upgrades from <= 60 to 79. I suspect that if I can just prevent 79 from running temporarily, the v60 background updates will run, and then I can let v79 do its thing. But I'm not sure how to disable it, or how the problem needs to be solved long term! My logs etc.: the background_updates table starting out empty in v59:
On my new server, it runs through the upgrade patches on the old database, but then fails at delta 79. My homeserver.log output of the upgrade:
My background_updates table has 55 new rows in it after the upgrade:
And my events table is indeed missing that unique index:
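For anyone in a similar spot wanting to confirm how far the schema migrations got before the failure, the bookkeeping tables can be queried directly. A sketch, assuming the schema_version and applied_schema_deltas tables used by recent Synapse schemas; verify the names against your own database:

```sql
-- Which schema version the database believes it is on
SELECT version, upgraded FROM schema_version;

-- The most recently applied delta files
SELECT version, file
  FROM applied_schema_deltas
 ORDER BY version DESC
 LIMIT 10;
```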
It looks like @gpetters94's installation couldn't run ALTER TABLE events DROP COLUMN stream_ordering; from synapse/synapse/storage/databases/main/events_bg_updates.py (lines 40 to 52 at 2b78981).
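One way to check whether that column replacement ever completed is to see which of the two stream-ordering columns the events table still has. A sketch using the standard information_schema views (not a query from the thread):

```sql
-- If stream_ordering2 is still present alongside stream_ordering,
-- the column-replacement background update has not finished.
SELECT column_name, data_type
  FROM information_schema.columns
 WHERE table_name = 'events'
   AND column_name IN ('stream_ordering', 'stream_ordering2');
```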
Can you share the output?
@damienvancouver: thanks for the report, but can you hold off for the time being? I would like to use this issue to fix one installation first and then see if that fix also works for you.
It is frustrating that this obscures the exception from the background update. Maybe I can improve this...
Would have been useful for #16192
The output of
... bugger. Is this because of the foreign keys on this column?
@DMRobertson Shared with me privately:
So there's a couple of things here:
Let's start with the first one. Note that we strongly recommend against manually modifying your database without input from us. Our goal is to get the background updates to "catch up" and finish before some of the foreground updates run. Looking at #15677, I think the manual fixes in there are "correct", but the automated fix added in #15887 might not be working properly? Or might not have worked yet; it seems to require that something else has completed first. Looking at the background updates given above, and then at the logs, it looks like some additional background updates have run since, and in fact you should now have the needed index:
Just to double check, is this still failing with the same error? Can you redump the applied background updates and confirm which version of Synapse you're now running?
I was able to fix my broken database with the following workaround. All commands below were run as root (or prefix them with "sudo" if you aren't the root user).
Things that might go wrong: if you didn't install via the matrix.org repo packages, your __init__.py might be in a different spot, so try to find your top-level matrix dir first. If you restart too early, before those background updates are done, it will crash the same way again; just repeat the process but wait longer! I hope this helps out; it totally worked for me. In hindsight, I should have gone from the Debian 10 backports package to the Debian 11 backports package (which is v78) before going to the matrix.org package (which is v80). For those who read the warnings in time, that upgrade path should avoid this problem and the need for a workaround.
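If you follow a similar wait-for-the-background-updates approach, one way to tell when the queue has drained is to poll the background_updates table directly. A sketch, assuming psql access to the Synapse database:

```sql
-- Re-run periodically; once this reports 0 pending rows the queue has drained
-- and it should be safe to restart and let the failed delta retry.
SELECT count(*) AS pending, min(ordering) AS oldest
  FROM background_updates;
```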
Subscribed. I believe I have the same problem. Stuck on v1.88.0. When I try to upgrade to v1.89.0 or later it errors out. docker logs from postgres container:
docker logs from synapse container:
I have the same issue upgrading from 1.53 to 1.95.1. |
This is essentially the idea behind #16397. |
@gpetters94, are you able to answer Patrick's questions from #16192 (comment)?
I was able to fix my issue, which looked like it had two problems within it (#16192 (comment)).
To get around this problem I used the workaround from this issue (#15677). After applying the second workaround, the background_updates table eventually emptied and then updating to v1.96.1 went smoothly. I hope this helps others who may be in my situation.
Description
Similar to #10691, I'm having issues after a system update. After an upgrade (from Debian 11 to 12) and a reboot, I get the following error on startup:
psycopg2.errors.InvalidForeignKey: there is no unique constraint matching given keys for referenced table "events"
What's strange is that I don't think Synapse actually updated. It looks like it's still using a bullseye version, and I have daily unattended upgrades, so it shouldn't have changed much. I've compared the schema to the one referenced in the source code, but I don't see any missing keys.
Steps to reproduce
Homeserver
My self-hosted server
Synapse Version
1.90.0+bullseye1
Installation Method
Debian packages from packages.matrix.org
Database
PostgreSQL, single server, never ported.
Workers
I don't know
Platform
Debian 12 (previously 11) on a VPS
Configuration
No response
Relevant log output