
Upgrading from QubesOS 4.x doesn't result in the same end user configuration (discards and tpool) #5643

Closed
tlaurion opened this issue Feb 11, 2020 · 13 comments
Labels

  • C: other

  • eol-4.0 Closed because Qubes 4.0 has reached end-of-life (EOL)

  • P: major Priority: major. Between "default" and "critical" in severity.

  • T: bug Type: bug report. A problem or defect resulting in unintended behavior in something that exists.

Comments

@tlaurion
Contributor

tlaurion commented Feb 11, 2020

Qubes OS version
Qubes 4.x. Behavior differs for users across the RCs and the actual 4.0.3 release.

Affected component(s) or functionality

  • LVM discards are set up differently, due to anaconda changes between the RCs and the 4.0.3 release

  • tpool metadata size is different; the latest release fixes a probable tpool fill-up

Brief summary
Upgrading a system installed from an earlier 4.x release or RC does not produce the same discard and thin pool (tpool) configuration as a fresh 4.0.3 installation; in particular, the tpool can fill up.

To Reproduce
Compare the tpool size deployed by anaconda/blivet defaults on an earlier 4.x installation with that of an actual 4.0.3 installation. The safe values are only deployed on a new installation; see the commands below.
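
A rough way to inspect the deployed values (a sketch only; it assumes the default volume group name qubes_dom0 and thin pool pool00, adjust if your layout differs):

```
# Thin pool data/metadata sizes and current usage
sudo lvs -a -o lv_name,lv_size,lv_metadata_size,data_percent,metadata_percent qubes_dom0

# Discard passdown mode of the thin pool itself
sudo lvs -o lv_name,discards qubes_dom0/pool00

# Discard-related lvm.conf settings (including defaults)
sudo lvmconfig --type full devices/issue_discards allocation/thin_pool_discards
```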

Expected behavior
Have the same behavior as stated in the upgrade guide, or add a warning to the upgrade guide saying that upgrading carries risks, documenting the anaconda changes and the actions needed.

Have the disk space widget prevent the tpool metadata fill-up situation; a monitoring sketch follows below.
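
For illustration only, a minimal sketch of the kind of check such a widget (or a cron job) could run, assuming the default qubes_dom0/pool00 naming and a hypothetical 90% threshold:

```
#!/bin/bash
# Hypothetical watchdog (not part of Qubes): warn when thin pool metadata
# usage crosses a threshold, to avoid the fill-up described above.
THRESHOLD=90
META_PCT=$(sudo lvs --noheadings -o metadata_percent qubes_dom0/pool00 | tr -d ' ' | cut -d. -f1)
if [ "${META_PCT:-0}" -ge "$THRESHOLD" ]; then
    echo "WARNING: pool00 metadata usage at ${META_PCT}% (threshold ${THRESHOLD}%)" >&2
fi
```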

The following statement is false:

> If you installed Qubes 4.0, 4.0.1, 4.0.2, or 4.0.3-rc1 and have fully updated, then your system is already equivalent to a Qubes 4.0.3 installation. No further action is required. SRC

The point being: further actions are required.

Actual behavior

  • tpool might fill up

  • discard configurations differ across RC revisions (e.g. root filesystem (dom0) discards), resulting in different user-facing problems depending on which 4.x version users installed from before upgrading to the latest version via packages; those configurations are not patched on upgrade (LVM discard, dom0 pool discards, tpool metadata size doubled).

Solutions you've tried
The workaround is to reduce the 8 GB swap volume and extend the tpool; see the sketch below.
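
A sketch of what that workaround could look like, assuming the default qubes_dom0 layout; volume names and sizes are illustrative, shrinking swap recreates it, and resizing should only be attempted with backups in place:

```
# Free space in the VG by shrinking the swap LV (its contents are disposable)
sudo swapoff /dev/qubes_dom0/swap
sudo lvreduce -L 4G qubes_dom0/swap
sudo mkswap /dev/qubes_dom0/swap
sudo swapon /dev/qubes_dom0/swap

# Use the freed extents to grow the thin pool and its metadata
sudo lvextend -L +3G qubes_dom0/pool00
sudo lvextend --poolmetadatasize +512M qubes_dom0/pool00
```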

Relevant documentation you've consulted

@tlaurion tlaurion added P: default Priority: default. Default priority for new issues, to be replaced given sufficient information. T: bug Type: bug report. A problem or defect resulting in unintended behavior in something that exists. labels Feb 11, 2020
@andrewdavidwong andrewdavidwong added C: other P: major Priority: major. Between "default" and "critical" in severity. and removed P: default Priority: default. Default priority for new issues, to be replaced given sufficient information. labels Feb 15, 2020
@andrewdavidwong andrewdavidwong added this to the Release 4.0 updates milestone Feb 15, 2020
@tlaurion
Contributor Author

Note: upcoming changes in the installer will separate the dom0 thin pool from the main pool.

@marmarek
Member

@fepitre created a tool for the R4.0 -> R4.1 upgrade, which also takes care of those differences: https://github.com/fepitre/qubes-migration

@tlaurion
Contributor Author

tlaurion commented Apr 17, 2020

@marmarek @fepitre Where does the thin pool metadata get doubled?

@brendanhoar

I don't see that in @fepitre 's code (yet?), @marmarek.

@marmarek
Member

Not yet, but it will be there: #5685

@tlaurion
Contributor Author

tlaurion commented Aug 19, 2021

@marmarek: last time I checked, discards were still not set in the lvm.conf file under 4.1, even though QubesOS/openqa-tests-qubesos@2f49b75 was pushed as the default?

(Linked to #3686)

@marmarek
Member

marmarek commented Aug 19, 2021

Discards in lvm.conf don't matter for thin volumes (they affect only non-thin volumes, which we don't use). I'm not sure if that is an LVM bug, or intended behavior (likely the latter). We do blkdiscard ourselves on a volume just before removing it.
EDIT: the above is about issue_discards option. thin_pool_discards does apply to thin volumes and is enabled by default.
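
To make the distinction concrete, a few read-only checks one could run (volume names are examples; the blkdiscard line only illustrates the equivalent of what Qubes does before removal):

```
# issue_discards only governs discards LVM issues when non-thin LV space is freed
sudo lvmconfig --type full devices/issue_discards

# thin_pool_discards is the setting that matters for thin pools, default "passdown"
sudo lvmconfig --type full allocation/thin_pool_discards
sudo lvs -o lv_name,discards qubes_dom0/pool00

# Equivalent of what Qubes does to a thin volume right before removing it
sudo blkdiscard /dev/qubes_dom0/vm-example-private
```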

@DemiMarie

> Discards in lvm.conf don't matter for thin volumes (they affect only non-thin volumes, which we don't use). I'm not sure if that is an LVM bug, or intended behavior (likely the latter). We do blkdiscard ourselves on a volume just before removing it.
> EDIT: the above is about issue_discards option. thin_pool_discards does apply to thin volumes and is enabled by default.

issue_discards being false is indeed intended, as it allows various operations on non-thin volumes to be undone if caught promptly.

@brendanhoar

> issue_discards being false is indeed intended, as it allows various operations on non-thin volumes to be undone if caught promptly.

Personally, I'd like the global state of whether discards are flowed down to the hardware/storage device to be exposed in Qubes Global Settings.

This would allow the end user to determine which is more important to them: a) recovery of data, or b) device performance and opportunistic anti-forensics.

Flowing discards down to the lowest layers requires both lvm.conf and the LUKS config to be correct; see the sketch below.

B
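
For illustration, checking both layers could look like this (mapping and device names are examples, not necessarily yours):

```
# LVM layer: the thin pool must pass discards down
sudo lvs -o lv_name,discards qubes_dom0/pool00

# LUKS layer: the dm-crypt mapping must allow discards to reach the disk;
# look for "discards" in the flags line of the output
sudo cryptsetup status luks-example

# Persistently enabling it typically means the "discard" option in /etc/crypttab
# or rd.luks.options=discard on the kernel command line, depending on the setup.
```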

@marmarek
Member

Let me repeat: issue_discards has nothing to do with the LVM thin volumes we use. There is no point in discussing (here) what value it should have, and no point in adding any kind of interface for changing it.

@brendanhoar

I will follow your guidance, @marmarek.

@andrewdavidwong andrewdavidwong added the eol-4.0 Closed because Qubes 4.0 has reached end-of-life (EOL) label Aug 5, 2023
@github-actions

github-actions bot commented Aug 5, 2023

This issue is being closed because:

  • Qubes 4.0 has reached end-of-life (EOL) (see the eol-4.0 label).

If anyone believes that this issue should be reopened and reassigned to an active milestone, please leave a brief comment.
(For example, if a bug still affects Qubes OS 4.1, then the comment "Affects 4.1" will suffice.)

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, or stale) Aug 5, 2023