feat: Add new test which asserts parity between upgrade and downgrade detectable effects.
DanCardin committed Nov 26, 2021
1 parent 2fb20d0 commit ab9b645
Showing 11 changed files with 212 additions and 42 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -77,6 +77,9 @@ itself.
- [Experimental
tests](http://pytest-alembic.readthedocs.io/en/latest/experimental_tests.html)

- all_models_register_on_metadata
- downgrade_leaves_no_trace

These tests will need to be enabled manually because their semantics or API are
not yet guaranteed to stay the same. See the linked docs for more details!

@@ -91,7 +94,7 @@ tests](http://pytest-alembic.readthedocs.io/en/latest/custom_tests.html)
data](http://pytest-alembic.readthedocs.io/en/latest/custom_data.html)
(to be inserted automatically before a given revision).

Sometimes when writing a particularly knarly data migration, it helps to
Sometimes when writing a particularly gnarly data migration, it helps to
be able to practice a little timely TDD, since there’s always the
potential you’ll trash your actual production data.

@@ -100,7 +103,7 @@ that you would normally, through the use of the `alembic_runner`
fixture.

``` python
def test_knarly_migration_xyz123(alembic_engine, alembic_runner):
def test_gnarly_migration_xyz123(alembic_engine, alembic_runner):
# Migrate up to, but not including this new migration
alembic_runner.migrate_up_before('xyz123')

```
2 changes: 1 addition & 1 deletion docs/source/custom_tests.rst
@@ -5,7 +5,7 @@ Honestly, there's not much to it by this point!

.. code-block:: python
def test_knarly_migration_xyz123(alembic_runner):
def test_gnarly_migration_xyz123(alembic_runner):
# Migrate up to, but not including this new migration
alembic_runner.migrate_up_before('xyz123')
73 changes: 68 additions & 5 deletions docs/source/experimental_tests.rst
@@ -12,8 +12,8 @@ test_all_models_register_on_metadata
Diffs the set of tables registered by alembic's :code:`env.py` versus the set
of full tables we find throughout your models package/module.

Enabling this test (TL;DR)
~~~~~~~~~~~~~~~~~~~~~~~~~~
Enabling all_models_register_on_metadata (TL;DR)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can either enable this test with no configuration, which will attempt to
identify the source module from which the :code:`env.py` is loading its
:code:`MetaData` and automatically search in that module/package
@@ -26,7 +26,7 @@ identify the source module from which the :code:`env.py` is loading its
pytest_alembic_include_experimental = 'all_models_register_on_metadata'
# or setup.cfg/pytest.ini
[tool:ini_options]
[pytest]
pytest_alembic_include_experimental = all_models_register_on_metadata
@@ -43,8 +43,8 @@ and provide the module/package directly.
tests.experimental.test_all_models_register_on_metadata(alembic_runner, 'package.models')
Explanation
~~~~~~~~~~~
How all_models_register_on_metadata works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The problem this test attempts to solve is best described with an example. Consider
the following package structure:

@@ -144,3 +144,66 @@ course of executing the :code:`env.py` through alembic.

This immediately resulted in an ``--autogenerate`` suggesting that the table
be dropped, since alembic assumes you've deleted the model entirely!
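A minimal single-file sketch of that failure mode (assuming SQLAlchemy 1.4+; the `Foo`/`Bar` models are hypothetical stand-ins, with `Bar` representing a model living in a module nothing ever imports):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Foo(Base):
    # This module is imported by env.py, so "foo" registers on Base.metadata.
    __tablename__ = "foo"
    id = sa.Column(sa.Integer, primary_key=True)


# A model like this one, defined in a module nothing ever imports, never
# executes, so its table never attaches to Base.metadata:
#
#     class Bar(Base):
#         __tablename__ = "bar"
#         id = sa.Column(sa.Integer, primary_key=True)
#
# --autogenerate then sees no "bar" model and suggests dropping the table.
print(sorted(Base.metadata.tables))  # → ['foo']
```

The table exists in the database but not on the `MetaData`, which is exactly the diff this test surfaces.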


test_downgrade_leaves_no_trace
------------------------------
Attempts to ensure that the downgrade for every migration precisely undoes
the changes performed in the upgrade.

Enabling downgrade_leaves_no_trace (TL;DR)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: toml
:caption: pyproject.toml/setup.cfg/pytest.ini
# pyproject.toml
[tool.pytest.ini_options]
pytest_alembic_include_experimental = 'downgrade_leaves_no_trace'
# or setup.cfg/pytest.ini
[pytest]
pytest_alembic_include_experimental = downgrade_leaves_no_trace
Or you can manually import and execute the test somewhere in your own tests.
Using this mechanism, you would be able to circumvent the automatic detection
and provide the module/package directly.


.. code-block:: python
from pytest_alembic import tests
def test_downgrade_leaves_no_trace(alembic_runner):
tests.experimental.test_downgrade_leaves_no_trace(alembic_runner)
How downgrade_leaves_no_trace works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This test works by attempting to produce two autogenerated migrations.

1. The first is the comparison between the original state of the database before the
given migration's upgrade occurs, and the `MetaData` produced by having performed
the upgrade.

This should approximate the autogenerated migration that alembic
would have generated to produce your upgraded database state itself.

2. The 2nd is the comparison between the state of the database after having
performed the upgrade -> downgrade cycle for this revision, and the same
`MetaData` used in the first comparison.

This should approximate what alembic would have autogenerated if you
**actually** performed the downgrade on your database.

In the event these two autogenerations do not match, it implies that your
upgrade -> downgrade cycle produces a database state which is different
(enough for alembic to detect) from the state of the database without having
performed the migration at all.

.. note::

This isn't perfect! Alembic autogeneration will not detect many
kinds of changes! If you encounter some scenario in which this does not
detect a change you'd expect it to, alembic already has extensive ability
to customize and extend the autogeneration capabilities.
@@ -1,5 +1,5 @@
from alembic import op
import sqlalchemy as sa
from alembic import op

revision = "aaaaaaaaaaaa"
down_revision = None
@@ -9,11 +9,11 @@

def upgrade():
op.create_table(
"foo",
"ignore",
sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
sa.PrimaryKeyConstraint("id"),
)


def downgrade():
op.rename_table('foo', 'bar')
op.drop_table("ignore")
@@ -0,0 +1,19 @@
import sqlalchemy as sa
from alembic import op

revision = "bbbbbbbbbbbb"
down_revision = "aaaaaaaaaaaa"
branch_labels = None
depends_on = None


def upgrade():
op.create_table(
"foo",
sa.Column("id", sa.Integer(), autoincrement=True, nullable=False),
sa.PrimaryKeyConstraint("id"),
)


def downgrade():
op.rename_table("foo", "bar")
6 changes: 6 additions & 0 deletions examples/test_downgrade_leaves_no_trace_failure/models.py
@@ -5,6 +5,12 @@
Base = declarative_base()


class Ignore(Base):
__tablename__ = "ignore"

id = Column(types.Integer(), autoincrement=True, primary_key=True)


class CreatedAt(Base):
__tablename__ = "foo"

1 change: 1 addition & 0 deletions examples/test_downgrade_leaves_no_trace_failure/setup.cfg
@@ -1,2 +1,3 @@
[tool:pytest]
pytest_alembic_exclude = upgrade,single_head_revision,model_definitions_match_ddl,up_down_consistency
pytest_alembic_include_experimental = downgrade_leaves_no_trace
1 change: 1 addition & 0 deletions examples/test_empty_history/setup.cfg
@@ -1,2 +1,3 @@
[tool:pytest]
pytest_alembic_exclude = single_head_revision
pytest_alembic_include_experimental = downgrade_leaves_no_trace,all_models_register_on_metadata
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "pytest-alembic"
version = "0.5.1"
version = "0.6.0"
description = "A pytest plugin for verifying alembic migrations."
authors = [
"Dan Cardin <[email protected]>",
119 changes: 95 additions & 24 deletions src/pytest_alembic/tests/experimental/downgrade_leaves_no_trace.py
@@ -1,41 +1,112 @@
from typing import List, Optional, Set, Tuple

import alembic.migration
from alembic.autogenerate import produce_migrations, render_python_code
from sqlalchemy import MetaData

from pytest_alembic.plugin.error import AlembicTestFailure
from pytest_alembic.runner import MigrationContext

try:
from sqlalchemy.ext.declarative import DeclarativeMeta
except ImportError: # pragma: no cover
    from sqlalchemy.orm import DeclarativeMeta

def test_downgrade_leaves_no_trace(alembic_runner: MigrationContext):
"""Assert equal states of the MetaData before and after an upgrade/downgrade cycle.
This test works by attempting to produce two autogenerated migrations.
1. The first is the comparison between the original state of the database before the
given migration's upgrade occurs, and the `MetaData` produced by having performed
the upgrade.
This should approximate the autogenerated migration that alembic
would have generated to produce your upgraded database state itself.
2. The 2nd is the comparison between the state of the database after having
performed the upgrade -> downgrade cycle for this revision, and the same
`MetaData` used in the first comparison.
def test_downgrade_leaves_no_trace(alembic_runner: MigrationContext, alembic_engine):
"""Assert that all tables defined on your `MetaData`, are imported in the `env.py`.
This should approximate what alembic would have autogenerated if you
    **actually** performed the downgrade on your database.
In the event these two autogenerations do not match, it implies that your
upgrade -> downgrade cycle produces a database state which is different
(enough for alembic to detect) from the state of the database without having
performed the migration at all.
**note** this isn't perfect! Alembic autogeneration will not detect many
kinds of changes! If you encounter some scenario in which this does not
detect a change you'd expect it to, alembic already has extensive ability
to customize and extend the autogeneration capabilities.
"""
original_metadata = MetaData()
original_metadata.reflect(alembic_engine)
command_executor = alembic_runner.command_executor
engine = command_executor.connection

alembic_runner.migrate_up_one()
# Swap the original engine for a connection to enable us to rollback the transaction
# midway through.
connection = engine.connect()
command_executor.alembic_config.attributes["connection"] = connection

upgrade_metadata = MetaData()
upgrade_metadata.reflect(alembic_engine)
revisions = alembic_runner.history.revisions[:-1]
if len(revisions) == 1:
return

alembic_runner.migrate_down_one()
for revision in revisions:
# Leaves the database in its previous state, to avoid subtle upgrade -> downgrade issues.
check_revision_cycle(alembic_runner, connection, revision)

downgrade_metadata = MetaData()
downgrade_metadata.reflect(alembic_engine)
        # The check left the database at the previous revision, so advance by one.
alembic_runner.migrate_up_to(revision)

# old_tables = {k: v for k, v in old_metadata.tables.items()}
# new_tables = {k: v for k, v in new_metadata.tables.items() if k != 'alembic_version'}

import pdb; pdb.set_trace()
if new_tables:
tables = ', '.join(new_tables.keys())
raise AlembicTestFailure(
f"{new_tables}"
def check_revision_cycle(alembic_runner, connection, original_revision):
migration_context = alembic.migration.MigrationContext.configure(connection)

# We first need to produce a `MetaData` which represents the state of the database
# we're trying to get to.
with connection.begin() as trans:
alembic_runner.migrate_up_one()
upgrade_revision = alembic_runner.current

upgrade_metadata = MetaData()
upgrade_metadata.reflect(connection)

# Having procured the target `MetaData`, we need the database back in its original state.
trans.rollback()

with connection.begin() as trans:
# Produce a canonically autogenerated upgrade relative to the original.
autogenerated_upgrade = produce_migrations(migration_context, upgrade_metadata)
rendered_autogenerated_upgrade = render_python_code(autogenerated_upgrade.upgrade_ops)

# Now, we can perform the upgrade -> downgrade cycle!
alembic_runner.migrate_up_one()
alembic_runner.migrate_down_one()

downgrade_metadata = MetaData()
downgrade_metadata.reflect(connection)

# Produce a canonically autogenerated upgrade relative to the post-downgrade state.
autogenerated_post_downgrade = produce_migrations(migration_context, upgrade_metadata)
rendered_autogenerated_post_downgrade = render_python_code(
autogenerated_post_downgrade.upgrade_ops
)

raise
        # **This** rollback ensures we leave the database in its original state for the next revision.
trans.rollback()

if rendered_autogenerated_upgrade != rendered_autogenerated_post_downgrade:
raise AlembicTestFailure(
(
f"There is a difference between the pre-'{upgrade_revision}'-upgrade `MetaData`, "
f"and the post-'{upgrade_revision}'-downgrade `MetaData`. This implies that the "
"upgrade performs some set of DDL changes which the downgrade does not "
"precisely undo."
),
context=[
(
f"DDL diff for {original_revision} -> {upgrade_revision}",
rendered_autogenerated_upgrade,
),
(
f"DDL diff after performing the {upgrade_revision} -> {original_revision} downgrade",
rendered_autogenerated_post_downgrade,
),
],
)
18 changes: 12 additions & 6 deletions tests/test_runner.py
@@ -12,7 +12,11 @@ def run_pytest(pytester, *, success=True, passed=4, skipped=0, failed=0, test_al
pytester.copy_example()
result = pytester.inline_run(*args)

expected_return = pytest.ExitCode.OK if success else pytest.ExitCode.TESTS_FAILED
expected_return = (
(pytest.ExitCode.OK if passed or skipped or failed else pytest.ExitCode.NO_TESTS_COLLECTED)
if success
else pytest.ExitCode.TESTS_FAILED
)
assert result.ret == expected_return

result.assertoutcome(passed=passed, skipped=skipped, failed=failed)
@@ -37,7 +41,7 @@ def test_no_data(pytester):


def test_empty_history(pytester):
run_pytest(pytester, passed=3)
run_pytest(pytester, passed=5)


def test_alternative_script_location(pytester):
@@ -156,6 +160,7 @@ def test_consistency_doesnt_roundtrip(pytester):
result, test="test_up_down_consistency", content="after performing a roundtrip"
)


def test_downgrade_leaves_no_trace_success(pytester):
"""Assert the all-models-register test is collected when included through automatic test insertion.
@@ -167,9 +172,10 @@ def test_downgrade_leaves_no_trace_success(pytester):


def test_downgrade_leaves_no_trace_failure(pytester):
"""Assert the all-models-register test is collected when included through automatic test insertion.
"""
result = run_pytest(pytester, success=False, passed=0, failed=1, test_alembic=False)
"""Assert the all-models-register test is collected when included through automatic test insertion."""
result = run_pytest(pytester, success=False, passed=0, failed=1)
assert_failed_test_has_content(
result, test="test_downgrade_leaves_no_trace", content="after performing a roundtrip"
result,
test="test_downgrade_leaves_no_trace",
content="difference between the pre-'bbbbbbbbbbbb'-upgrade `MetaData`",
)
