
Investigate if matrix retention policy could replace purger script #80

Closed
andrevmatos opened this issue Jan 11, 2020 · 4 comments
Labels: enhancement (New feature or request), future


andrevmatos (Contributor) commented Jan 11, 2020

matrix-org/synapse#6358
matrix-org/synapse#5815

Of particular interest:

This is not a generic implementation, because it relies on server admins setting boundaries for what users can provide as values in a room's retention policy, and considers the rooms lacking a policy as following the server's. Changing that should be fairly trivial, though, and without consequences.
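
For illustration, here is a minimal sketch of how such a per-room policy could be set through the Matrix Client-Server API. The homeserver URL, room ID, and access token below are placeholders; the field names follow the 'm.room.retention' state event described in the Synapse docs, with lifetimes expressed in milliseconds.

import requests

# All values below are placeholder assumptions for illustration only.
HOMESERVER = "https://matrix.example.com"  # hypothetical homeserver URL
ROOM_ID = "!someroom:example.com"          # hypothetical room ID
ACCESS_TOKEN = "..."                       # token of a user allowed to send state events

# 'm.room.retention' content; lifetimes are expressed in milliseconds.
retention_policy = {
    "min_lifetime": 24 * 60 * 60 * 1000,      # 1 day
    "max_lifetime": 7 * 24 * 60 * 60 * 1000,  # 1 week
}

# Send the state event (empty state key) to the room.
response = requests.put(
    f"{HOMESERVER}/_matrix/client/r0/rooms/{ROOM_ID}/state/m.room.retention",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=retention_policy,
)
response.raise_for_status()
print(response.json())  # contains the event_id of the new state event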

andrevmatos added the enhancement and future labels on Jan 11, 2020
ulope (Collaborator) commented Feb 5, 2020

Interesting!

fredo self-assigned this on Mar 11, 2020
fredo (Contributor) commented Mar 11, 2020

The two most relevant pieces of information are the following:

https://github.com/matrix-org/synapse/blob/master/docs/message_retention_policies.md

and the sample from the config file:

# Message retention policy at the server level.
#
# Room admins and mods can define a retention period for their rooms using the
# 'm.room.retention' state event, and server admins can cap this period by setting
# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
#
# If this feature is enabled, Synapse will regularly look for and purge events
# which are older than the room's maximum retention period. Synapse will also
# filter events received over federation so that events that should have been
# purged are ignored and not stored again.
#
retention:
  # The message retention policies feature is disabled by default. Uncomment the
  # following line to enable it.
  #
  #enabled: true

  # Default retention policy. If set, Synapse will apply it to rooms that lack the
  # 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
  # matter much because Synapse doesn't take it into account yet.
  #
  #default_policy:
  #  min_lifetime: 1d
  #  max_lifetime: 1y

  # Retention policy limits. If set, a user won't be able to send a
  # 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
  # that's not within this range. This is especially useful in closed federations,
  # in which server admins can make sure every federating server applies the same
  # rules.
  #
  #allowed_lifetime_min: 1d
  #allowed_lifetime_max: 1y

  # Server admins can define the settings of the background jobs purging the
  # events whose lifetime has expired under the 'purge_jobs' section.
  #
  # If no configuration is provided, a single job will be set up to delete expired
  # events in every room daily.
  #
  # Each job's configuration defines which range of message lifetimes the job
  # takes care of. For example, if 'shortest_max_lifetime' is '2d' and
  # 'longest_max_lifetime' is '3d', the job will handle purging expired events in
  # rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
  # lower than or equal to 3 days. Both the minimum and the maximum value of a
  # range are optional, e.g. a job with no 'shortest_max_lifetime' and a
  # 'longest_max_lifetime' of '3d' will handle every room with a retention policy
  # whose 'max_lifetime' is lower than or equal to three days.
  #
  # The rationale for this per-job configuration is that some rooms might have a
  # retention policy with a low 'max_lifetime', where history needs to be purged
  # of outdated messages on a very frequent basis (e.g. every 5min), but not want
  # that purge to be performed by a job that's iterating over every room it knows,
  # which would be quite heavy on the server.
  #
  #purge_jobs:
  #  - shortest_max_lifetime: 1d
  #    longest_max_lifetime: 3d
  #    interval: 5m
  #  - shortest_max_lifetime: 3d
  #    longest_max_lifetime: 1y
  #    interval: 24h

fredo (Contributor) commented Mar 11, 2020

This leaves me with the question of how we can ensure that the purge job won't affect the server's performance too much. Any ideas?

Furthermore, it is mentioned that purging does not free disk space back to the OS; instead, the freed database space is reused for new data. I think this should be sufficient.
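
One rough way to keep an eye on this would be to track the size of the Synapse database and its event tables over time, e.g. with a small script like the sketch below. The database name, connection parameters, and table names are assumptions about a typical PostgreSQL-backed Synapse deployment.

import psycopg2

# Placeholder connection parameters for a typical Synapse/PostgreSQL setup.
conn = psycopg2.connect(dbname="synapse", user="synapse", host="localhost")

with conn, conn.cursor() as cur:
    # Total on-disk size of the database.
    cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()))")
    print("database size:", cur.fetchone()[0])

    # Sizes of the event tables, which the purge jobs should keep from growing without bound.
    for table in ("events", "event_json"):  # assumed Synapse table names
        cur.execute("SELECT pg_size_pretty(pg_total_relation_size(%s))", (table,))
        print(table, cur.fetchone()[0])

conn.close()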

fredo (Contributor) commented Mar 19, 2020

Merged in #94. It should still be monitored, though, to verify that it works properly.
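
One possible way to monitor it, sketched below under the assumption that Synapse's events table exposes room_id, type, and origin_server_ts (milliseconds since the epoch), is to check that the oldest stored message event per room does not fall outside the configured retention window. Only 'm.room.message' events are considered, since state events are not removed by the purge.

import time

import psycopg2

MAX_LIFETIME_MS = 7 * 24 * 60 * 60 * 1000  # assumed retention window, adjust to the deployed config

# Placeholder connection parameters.
conn = psycopg2.connect(dbname="synapse", user="synapse", host="localhost")

with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT room_id, MIN(origin_server_ts) FROM events "
        "WHERE type = 'm.room.message' GROUP BY room_id"
    )
    now_ms = int(time.time() * 1000)
    for room_id, oldest_ts in cur.fetchall():
        if oldest_ts is not None and now_ms - oldest_ts > MAX_LIFETIME_MS:
            age_days = (now_ms - oldest_ts) // (24 * 60 * 60 * 1000)
            print(f"purge may be lagging in {room_id}: oldest message is {age_days} days old")

conn.close()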

fredo closed this as completed on Mar 19, 2020