Migration eventually incomplete: last_successful_update part of 0.23.0-alpha3 SQLite dump, but not of newly initialised PostgreSQL nodes table schema #1748
Comments
The one node created after the upgrade from 0.22.3 also has the value …
Hi @almereyda, I have created #1754 to remove the column from the table. As for the migration from one database engine to another, this is not something we support or have a goal of supporting, so if you are unable to make the two SQL dialects compatible with each other after this, that will likely require manual intervention on your side.
Thanks for the quick reaction and for confirming the regression. Yes, manual intervention is totally expected, especially when converting SQL dialects. So to confirm, migrating from one store to the other is entirely possible. One only needs to take care of the serialisation format, by converting single tick marks into double quotes, remove the …
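The quote conversion mentioned above can be scripted. A minimal sketch, assuming the dump quotes identifiers with backticks; `convert_identifier_quotes` is a hypothetical helper, and the regex should be adjusted to whatever quoting style your SQLite dump actually uses:

```python
import re

def convert_identifier_quotes(line: str) -> str:
    """Convert backtick-quoted identifiers (`name`) into the
    double-quoted form ("name") that PostgreSQL expects.
    Hypothetical helper -- adapt the pattern to your dump."""
    return re.sub(r"`([^`]*)`", r'"\1"', line)

print(convert_identifier_quotes('CREATE TABLE `users` (`id` integer);'))
# → CREATE TABLE "users" ("id" integer);
```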
As much as I understand that the SQLite migration path is currently not officially supported, it is reasonable to expect that it will happen in the field at some point or another. To complete the instructions that we find here, note that we also had to restart the sequence counters from their actual values, as these were not part of the SQLite dump.

```sql
select nextval('api_keys_id_seq');
select nextval('nodes_id_seq');
select nextval('pre_auth_key_acl_tags_id_seq');
select nextval('pre_auth_keys_id_seq');
select nextval('routes_id_seq');
select nextval('users_id_seq');
```

This showed the sequences were out of sync with the imported data. The actual maximum IDs were then determined with:

```sql
select max(id) from api_keys;
select max(id) from nodes;
select max(id) from pre_auth_key_acl_tags;
select max(id) from pre_auth_keys;
select max(id) from routes;
select max(id) from users;
```

and the sequences restarted accordingly:

```sql
ALTER SEQUENCE api_keys_id_seq RESTART WITH <output from above + 1>;
ALTER SEQUENCE nodes_id_seq RESTART WITH <output from above + 1>;
ALTER SEQUENCE pre_auth_key_acl_tags_id_seq RESTART WITH <output from above + 1>;
ALTER SEQUENCE pre_auth_keys_id_seq RESTART WITH <output from above + 1>;
ALTER SEQUENCE routes_id_seq RESTART WITH <output from above + 1>;
ALTER SEQUENCE users_id_seq RESTART WITH <output from above + 1>;
```

After restarting the counters, the API returned to nominal operation. This is left here for the interested search engine user running into similar side effects when importing an SQLite dump into an empty PostgreSQL database freshly initialised by Headscale.
Bug description
When converting an SQLite dump of the current 0.23 alpha3 for import into a cleanly initialised PostgreSQL instance with the Headscale schema, the
psql
client throws an error that there is one value too many in the line. Manually examining the situation makes it appear as if the column
last_successful_update
is not needed anymore. A migration is eventually missing to remove that column from existing databases.
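One possible manual intervention is to drop the leftover column from the SQLite database before dumping it. A sketch under the assumption that your SQLite build is at least 3.35 (the first version supporting `DROP COLUMN`), shown here against a throwaway in-memory table that only mimics the real nodes schema:

```python
import sqlite3

# Throwaway in-memory database standing in for a real db.sqlite.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE nodes (id INTEGER PRIMARY KEY, "
    "last_successful_update DATETIME)"
)

# Drop the column that the PostgreSQL schema no longer has.
con.execute("ALTER TABLE nodes DROP COLUMN last_successful_update")

# Verify the remaining columns via the table_info pragma.
cols = [row[1] for row in con.execute("PRAGMA table_info(nodes)")]
print(cols)  # → ['id']
con.close()
```

On older SQLite builds the equivalent is the usual create-new-table, copy, rename dance.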
Environment
To Reproduce
Dump the SQLite database:

```shell
sqlite3 db.sqlite .dump > db.sql
```

Add IF NOT EXISTS to all CREATE TABLE statements. The import then fails on the last_successful_update column.
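The dump step above can also be reproduced without the sqlite3 CLI, using the `iterdump()` method of Python's stdlib `sqlite3` module, which is handy if you want to post-process the dump (quote conversion, column removal) in the same script. This sketch uses a throwaway in-memory database instead of a real Headscale db.sqlite:

```python
import sqlite3

# Throwaway database with a toy table standing in for the real schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (name) VALUES ('example')")

# iterdump() yields the same SQL statements as the CLI's .dump command.
dump = "\n".join(con.iterdump())
con.close()

print(dump.splitlines()[0])  # → BEGIN TRANSACTION;
```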