Renaming DbNode.type to DbNode.node_type #2552
Conversation
Coverage increased (+0.08%) to 70.131% when pulling 33a82b734c0447de3176d8f6db972dc99bdb3b68 on sphuber:fix_2510_rename_node_type_to_node_type into 56344da on aiidateam:provenance_redesign.
Force-pushed from ead3a3b to 26c4579.
Force-pushed from 26c4579 to 9864aca.
A couple of comments.
Also, I realised that we set {array|symbols} at the same time as creating the array (and remove the attribute at the same time as deleting the numpy file from disk).
That is OK in itself, but if the migration fails, the attribute will stay in/be removed from the database while the file is already gone/already present, so maybe some migrations might fail?
def load_node(self, pk):
    return self.DbNode.objects.get(pk=pk)

def set_attribute(self, node, key, value):
I'm not sure this method (and the few following ones) should be in the base class, as these are specific to a given version of the schema/AiiDA ORM.
If you prefer to have them as methods to reuse them, maybe make a subclass of TestMigrations (e.g. TestMigrationsDbAttributeAndFileRepository, or something more specific like TestMigrationsAttributesV0025) and inherit from that class where appropriate.
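The suggested split could look something like this. All class and helper names here are hypothetical placeholders from the suggestion above, and the bodies are simplified stand-ins rather than AiiDA's real test framework:

```python
class TestMigrations:
    """Generic base class: only helpers that are valid for any schema version."""

    def load_node(self, pk):
        # Stand-in lookup; the real helper queries the migration-provided
        # DbNode model, which is schema-independent.
        return self.nodes[pk]


class TestMigrationsAttributesV0025(TestMigrations):
    """Helpers tied to one specific attribute layout of the schema.

    Tests that need these helpers inherit from this subclass instead of
    the generic base class.
    """

    def set_attribute(self, node, key, value):
        node.setdefault('attributes', {})[key] = value

    def get_attribute(self, node, key):
        return node['attributes'][key]
```

The point of the split is that a future schema change only invalidates the versioned subclass, not every test built on the generic base.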
Fair enough, but the load_node one should be able to stay in the generic one, right? That one is not contingent on the attributes and therefore not on the ModelModifier.
DbNode.c.type == op.inline_literal('node.data.array.trajectory.TrajectoryData.'))).fetchall()

for pk, _ in nodes:
    connection.execute(
Isn't this the same as the following line (one in SQL and one in SQLAlchemy)? Did the second version work, or did it require some custom function (and in that case should we just keep the SQLAlchemy one)? I'm also surprised that the migration didn't fail in this case...
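For context, this is a toy version of the raw-SQL half of the comparison, using sqlite3 and a simplified table as stand-ins for the real Postgres schema; the SQLAlchemy-style variant in the diff expresses the same filter as `DbNode.c.type == op.inline_literal(...)`:

```python
# Illustrative only: select the affected TrajectoryData nodes with plain
# SQL, without going through any ORM state. sqlite3 stands in for the
# real Alembic/Postgres connection.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE db_dbnode (id INTEGER PRIMARY KEY, type TEXT)")
conn.executemany(
    "INSERT INTO db_dbnode (id, type) VALUES (?, ?)",
    [
        (1, 'node.data.array.trajectory.TrajectoryData.'),
        (2, 'node.data.int.Int.'),
    ],
)

# Raw SQL: works identically no matter what the current ORM looks like.
nodes = conn.execute(
    "SELECT id FROM db_dbnode WHERE type = ?",
    ('node.data.array.trajectory.TrajectoryData.',),
).fetchall()
```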
Good catch, in fact it seems like the second one in SQLAlchemy should not work, because the json_object_delete_keys function does not even exist. This shows a problem with our testing: this was not caught because we do not test the downgrades explicitly. Sure, the downgrade is called, but mostly on empty databases, which is why the faulty code was not triggered: it is only run when there are TrajectoryData nodes.
As to your comment, you are right, and the only way to fix it is to also split those: do the file system operations first and set the attribute reference last. Since we only support going forward with migrations (officially), we should check whether the upgrade of the second migration is idempotent. With regards to deleting the array from disk this is the case, because errors are ignored if the file does not exist, for example because it was already removed in a failed previous migration. Whether the deletion of the attribute would fail depends on the behavior of the UPDATE statement; I am not sure if it fails when the attribute does not exist. I will check... edit: at least directly in postgres it simply prints ... edit2: for Django we use the Django ORM, but the ...
@giovannipizzi I added a separate commit so you can easily verify the changes, but after you approve, please don't merge. I want to keep the first two commits separate, especially the second one, so after you approve I will squash the fixup manually and force push.
Many of the migrations and their respective unit tests used the ORM of `aiida-core` or the database model classes directly. However, as the database models evolve, during a migration one cannot rely on the current implementation of the database models or the ORM, as it may not be commensurate with their state at the time of that migration. This means that the migrations absolutely have to be written relying only on raw SQL or on the model classes that are provided through the migration mechanism of the various backends, which ensures that those stand-in classes reflect the state of the models at the time of the migration.
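As a concrete illustration of that rule in the Django case, a data migration would fetch its model through the migration framework (`apps.get_model`) rather than import the live class; the function name and per-node body below are illustrative, not code from this PR:

```python
def migrate_node_types(apps, schema_editor):
    """Illustrative Django data-migration function.

    Django calls such a function with an `apps` registry that provides
    historical model classes frozen at this migration's point in the
    schema history.
    """
    # Correct: the model as it existed at this migration's schema state.
    DbNode = apps.get_model('db', 'DbNode')

    # Wrong inside a migration: importing the *current* model, e.g.
    #   from aiida.backends.djsite.db.models import DbNode
    # because the live class may no longer match the schema this
    # migration runs against.
    for node in DbNode.objects.all():
        pass  # per-node field updates (or raw SQL) would go here
```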
Force-pushed from 17df0c9 to 30b1253.
Fixes #2510
This PR contains two commits: the first implements the actual change to the model and adapts the ORM, and the second rewrites many of the existing migrations and tests. The affected migrations and tests were directly accessing database model classes or our own ORM, which during migrations are not commensurate with their state at the time of the migration, resulting in exceptions. The only solution is, during migrations, to use only raw SQL or the models retrieved through the migration framework of the respective backend.