U02B400J44U: Hey folks! Is there a built-in way to clean up logs from old runs?
We've been running a few experimental jobs in Dagster and are already up to 28GB in compute_logs (currently stored on NFS) and 1GB in event_logs (in Postgres). Keeping these around for a few days or weeks to debug failed pipelines makes sense, but logs left to grow forever could become prohibitively expensive.
It seems like it would be pretty straightforward to supply our own `ComputeLogManager` and `EventLogStorage` implementations which would delegate to the native ones but also delete data for old runs periodically, but before I do this I want to see if a solution exists already.
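For the NFS-backed compute logs specifically, the periodic-deletion half of that idea can be sketched with the stdlib alone. This is a minimal sketch, assuming compute logs live in one subdirectory per run under some root (the layout and the retention window are assumptions, not Dagster API):

```python
import shutil
import time
from pathlib import Path


def prune_old_compute_logs(logs_root: Path, max_age_days: float) -> list:
    """Delete per-run subdirectories of `logs_root` whose mtime is older
    than `max_age_days`. Returns the names of the directories removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for run_dir in sorted(logs_root.iterdir()):
        if run_dir.is_dir() and run_dir.stat().st_mtime < cutoff:
            shutil.rmtree(run_dir)
            removed.append(run_dir.name)
    return removed
```

Something like this could run from cron, independent of any delegating `ComputeLogManager` wrapper.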
U02B400J44U: Actually just found `dagster assets wipe` which is maybe what I'm looking for? <https://docs.dagster.io/_apidocs/cli#dagster-asset-wipe>
UM49TQ8EB: `dagster assets wipe` will just clear the asset index… it won’t wipe the event logs for all other events, which is what I think you’re looking for
UM49TQ8EB: I think you could probably create a pipeline that runs on a schedule that deletes runs older than a certain date
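The scheduled-cleanup idea can be sketched generically: given run records and a delete callback (in a real Dagster job these would come from the instance, e.g. a `delete_run`-style method; the names and shapes here are assumptions, not Dagster's actual API), drop every run older than the cutoff:

```python
from datetime import datetime, timedelta
from typing import Callable, Iterable, Optional, Tuple


def wipe_runs_older_than(
    runs: Iterable[Tuple[str, datetime]],  # (run_id, created_at) pairs
    max_age: timedelta,
    delete_run: Callable[[str], None],  # hypothetical delete callback
    now: Optional[datetime] = None,
) -> int:
    """Invoke `delete_run` for every run created before `now - max_age`.

    Returns the number of runs deleted.
    """
    now = now or datetime.now()
    cutoff = now - max_age
    deleted = 0
    for run_id, created_at in runs:
        if created_at < cutoff:
            delete_run(run_id)
            deleted += 1
    return deleted
```

Wrapped in an op and attached to a daily schedule, this would keep the run and event-log storage bounded.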
U02B400J44U: Yeah maybe just deleting the runs would be sufficient
U02B400J44U: Maybe `dagster run wipe` could take an argument like `--older-than=2w`
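Parsing a duration flag like that is straightforward; here is a hedged sketch of what an `--older-than` value could accept (the supported suffixes are an assumption, since no such flag exists yet):

```python
import re
from datetime import timedelta

# Map a suffix to the matching timedelta keyword argument.
_UNITS = {"h": "hours", "d": "days", "w": "weeks"}


def parse_older_than(value: str) -> timedelta:
    """Parse a duration like '2w', '10d', or '36h' into a timedelta."""
    match = re.fullmatch(r"(\d+)([hdw])", value)
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    amount, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(amount)})
```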
UM49TQ8EB: <@U018K0G2Y85> issue add CLI option for wiping old runs
Message from the maintainers:
Are you looking for the same documentation content? Give it a 👍. We factor engagement into prioritization.
Issue from the Dagster Slack: add CLI option for wiping old runs
This issue was generated from the Slack conversation excerpted above: https://dagster.slack.com/archives/C01U954MEER/p1628715859344300?thread_ts=1628715859.344300&cid=C01U954MEER