It would be cool if pgmg gathered up the migration files and fed their metadata into a single query that produced a result set describing the entire "plan" for the migration run.
pgmg would then just iterate through that plan and blindly execute it in order.
That would make pgmg a very small side-effectful wrapper around a pure query. It should make it a lot easier to test lots of permutations without needing to actually run any effects, and it would let pgmg give the user much more feedback on what it is about to do before it does it, a bit like Terraform's plan/apply. Being able to preview the exact plan before anything runs would give us all a lot more confidence in running production migrations.
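A rough sketch of what that split could look like, assuming a postgres.js-style client and a `pgmg.migration` bookkeeping table (the table, column, and function names here are illustrative guesses, not pgmg's actual internals):

```ts
import postgres from "postgres";

// Hypothetical shapes: the real metadata pgmg collects may differ.
type MigrationMeta = { name: string; description?: string };
type PlanStep = { ordinal: number; name: string; action: "run" | "skip" };

const sql = postgres(process.env.DATABASE_URL!);

// Pure "plan" step: one round trip that compares the local migration list
// against what the database says has already run, and returns every decision
// as a single ordered result set.
async function plan(migrations: MigrationMeta[]): Promise<PlanStep[]> {
  return await sql<PlanStep[]>`
    select
      row_number() over (order by incoming.name)::int as ordinal,
      incoming.name,
      case when existing.name is null then 'run' else 'skip' end as action
    from json_to_recordset(${JSON.stringify(migrations)}::json)
      as incoming(name text, description text)
    left join pgmg.migration existing using (name)
    order by ordinal
  `;
}

// Side-effectful "apply" step: a thin loop that blindly executes the plan in
// order. `runMigration` stands in for whatever pgmg does to run one file.
async function apply(
  steps: PlanStep[],
  runMigration: (name: string) => Promise<void>
) {
  for (const step of steps) {
    console.log(`${step.ordinal}. ${step.action} ${step.name}`); // preview / audit trail
    if (step.action === "run") await runMigration(step.name);
  }
}
```

With that shape, `plan` is the only part that needs heavy testing and it can be exercised without running any real migrations, while `apply` stays a dumb loop; printing the plan before executing it is what gives the Terraform-style preview.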
Finally, there's the chatty aspect (#41): this approach would mean a single message sent to and from the database (aside from the user's actual migration logic). So running a simple migration from a GitHub Actions runner in the US against a database in Sydney wouldn't add much overhead: one plan query at ~100 ms of round-trip latency, instead of 3 or 4 bookkeeping queries per migration at ~100 ms each, which adds up quickly once there is more than a handful of migrations.