Add destroy units #812

Merged
merged 2 commits on Mar 15, 2023
8 changes: 7 additions & 1 deletion juju/model.py
@@ -2014,7 +2014,13 @@ async def destroy_unit(self, unit_id, destroy_storage=False, dry_run=False, forc
            'max-wait': max_wait,
            'dry-run': dry_run,
        }])
    destroy_units = destroy_unit

    async def destroy_units(self, *unit_names, destroy_storage=False, dry_run=False, force=False, max_wait=None):
        """Destroy several units at once.

        """
        for u in unit_names:
            await self.destroy_unit(u, destroy_storage, dry_run, force, max_wait)
Comment on lines +2022 to +2023
@addyess (Contributor) commented on Mar 8, 2023:

you can do this or this:

Suggested change
        for u in unit_names:
            await self.destroy_unit(u, destroy_storage, dry_run, force, max_wait)
        await jasyncio.gather(*(
            self.destroy_unit(u, destroy_storage, dry_run, force, max_wait)
            for u in unit_names
        ))

While I believe your proposed solution here would get the job done, it may be quicker not to issue the Destroy calls sequentially. I don't know how long it actually takes to start one destroy_unit.

Contributor (PR author) replied:

Unless it makes things significantly slower, I'd prefer to depend on asyncio as little as possible, as we ultimately want to get rid of it. I didn't observe such a performance deficit when I was running the tests, but feel free to run a comparison if you're curious 👍

Contributor replied:

nope it's totally your call here! thanks for the PR. LGTM
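
For anyone who wants to run the comparison mentioned above, a minimal, standalone sketch along these lines would time the sequential loop against the gather variant. The fake_destroy_unit stub and its 0.2 s delay are hypothetical stand-ins for a real Model.destroy_unit call, not part of libjuju:

import asyncio
import time

async def fake_destroy_unit(unit_name, delay=0.2):
    # Hypothetical stand-in for Model.destroy_unit: pretend each call is
    # dominated by a single controller round-trip.
    await asyncio.sleep(delay)

async def sequential(unit_names):
    # The approach merged in this PR: one await per unit, in order.
    for u in unit_names:
        await fake_destroy_unit(u)

async def concurrent(unit_names):
    # The reviewer's suggestion: start all destroys, then wait for them together.
    await asyncio.gather(*(fake_destroy_unit(u) for u in unit_names))

async def main():
    units = [f"ubuntu/{i}" for i in range(3)]
    for label, fn in (("sequential", sequential), ("gather", concurrent)):
        start = time.monotonic()
        await fn(units)
        print(f"{label}: {time.monotonic() - start:.2f}s")

asyncio.run(main())

If each call really is round-trip-bound, the gather form finishes in roughly one delay instead of three; whether that difference is worth the extra reliance on asyncio is exactly the trade-off discussed above.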


    def download_backup(self, archive_id, target_filename=None):
        """Download a backup archive file.
17 changes: 17 additions & 0 deletions tests/integration/test_model.py
@@ -882,6 +882,23 @@ async def test_wait_for_idle_with_exact_units_scale_down_zero(event_loop):
    assert (end_time - start_time) > 0.001


@base.bootstrapped
@pytest.mark.asyncio
async def test_destroy_units(event_loop):
    async with base.CleanModel() as model:
        app = await model.deploy(
            'ubuntu',
            application_name='ubuntu',
            series='jammy',
            channel='stable',
            num_units=3,
        )
        await model.wait_for_idle(status='active')
        await model.destroy_units(*[u.name for u in app.units])
        await model.wait_for_idle(timeout=5 * 60, wait_for_exact_units=0)
        assert app.units == []


@base.bootstrapped
@pytest.mark.asyncio
async def test_watcher_reconnect(event_loop):