Qubes Disk Widget - please inform about/alert on pool metadata space #5053
Comments
Added usage_details method to Pool class (returns a dictionary with detailed information on pool usage) and LVM implementation that returns metadata info. Needed for QubesOS/qubes-issues#5053
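For context, here is a minimal sketch of what that commit describes. The method name usage_details and the split between a base Pool method and an LVM override come from the commit message; the class shapes, the dictionary keys, and the exact lvs invocation below are illustrative assumptions, not the actual qubes-core-admin code.

```python
# Sketch only: usage_details comes from the commit message; class
# structure, dict keys, and the lvs call are assumptions.
import subprocess


class Pool:
    """Base storage pool (sketch)."""

    def usage_details(self):
        # Base implementation reports nothing; drivers override this
        # with whatever details they can provide.
        return {}


class ThinPool(Pool):
    """LVM thin-pool driver (sketch)."""

    def __init__(self, volume_group, thin_pool):
        self.volume_group = volume_group
        self.thin_pool = thin_pool

    def usage_details(self):
        # Ask LVM for the pool's data and metadata size/usage, in raw
        # bytes and percent, without headers or unit suffixes.
        out = subprocess.check_output(
            ['lvs', '--noheadings', '--units', 'b', '--nosuffix',
             '--options',
             'lv_size,data_percent,lv_metadata_size,metadata_percent',
             '{}/{}'.format(self.volume_group, self.thin_pool)],
            encoding='utf-8')
        size, data_pct, meta_size, meta_pct = out.split()
        return {
            'data_size': int(size),
            'data_usage': int(int(size) * float(data_pct) / 100),
            'metadata_size': int(meta_size),
            'metadata_usage': int(int(meta_size) * float(meta_pct) / 100),
        }
```

Returning a plain dictionary keeps the interface extensible: each driver reports whatever details it can, and the widget displays only the keys it understands.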
Thanks @marmarta! A couple of months back I tried several times to do all the API plumbing to get the data up to where it was needed in the widget, and failed. Looking forward to R4.1! Brendan
@andrewdavidwong: the Qubes OS upgrade notes state that a user who applies upgrades from Qubes 4 ends up with the same system as a fresh install. @marmarek This is false, since anaconda has been modified since then and now behaves differently for discards and metadata pool size; consequently, newer installations are fixed to mitigate future problems, while upgraded systems are not. (True?) @marmarta As stated here, the widget should let the user fix problems in that area, so that freshly installed and upgraded 4.x systems end up equally stable with respect to potential LVM discards/metadata pool problems.
Shouldn't this be reported as a separate issue? It sounds rather different from the topic of this issue ("inform about/alert on pool metadata space").
The issue here is still that once the tpool is maxed out, the user is told it is nearly full but not what to do to fix it. What would be the proposed action? That is why I tagged @marmarta in this issue; the rest is background information. Opened ticket #5643
I see that @brendanhoar raised this point in the original issue description, where he wrote:

> Add a popup indicating what steps the user can take to address when metadata is low.
> [...]
> On the other hand, resolving metadata space issues is not straightforward, and some steps a generally educated computer user might take can make the situation worse. In addition, it is generally easy to get the pool and file systems into a non-recoverable state.

It sounds like you're saying that this part of the issue has not yet been addressed, so you'd like for this issue to be reopened. Am I understanding you correctly?
Yes. The referenced workaround is unfortunately to reduce the size of swap and allocate that space to the tpool (https://groups.google.com/d/msg/qubes-users/3r9MuyQHTUs/qLzKmpG4AQAJ); see the sketch after this comment. Other advice is welcome.
It's an assured disaster waiting to happen, and even faster if playing around with wyng.
It hit me twice. I upgraded to 4.0.3 (reinstalled, redeployed backups) since then.
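For reference, a dry-run sketch of the swap-shrinking workaround from the linked thread. The volume names assume the default Qubes 4.0 LVM layout (qubes_dom0/swap, qubes_dom0/pool00) and the 1 GiB figure is a placeholder; neither is taken from this thread. It only prints the commands, since, as noted above, getting any step wrong can leave the pool unrecoverable.

```python
# Dry-run sketch of the swap-shrinking workaround (assumptions: default
# Qubes 4.0 layout qubes_dom0/swap and qubes_dom0/pool00, 1 GiB as a
# placeholder size). Prints the commands instead of running them.

steps = [
    ['swapoff', '/dev/qubes_dom0/swap'],           # stop using swap
    ['lvreduce', '-L', '-1G', 'qubes_dom0/swap'],  # shrink swap by 1 GiB
    ['mkswap', '/dev/qubes_dom0/swap'],            # recreate the swap signature
    ['swapon', '/dev/qubes_dom0/swap'],            # re-enable swap
    # hand the freed space to the thin pool's metadata volume
    ['lvextend', '--poolmetadatasize', '+1G', 'qubes_dom0/pool00'],
]

for cmd in steps:
    print(' '.join(cmd))
```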
Can this be backported to R4.0? This is a serious issue that has caused data loss.
The widget in R4.1 already reports metadata usage, and the default metadata size was also doubled to prevent this issue from happening.
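Until a backport exists, R4.0 users can check metadata usage by hand in dom0. A minimal sketch, assuming the default qubes_dom0/pool00 thin pool name (adjust it for a custom layout):

```python
# Manual metadata-usage check for systems whose widget does not yet
# report it. Assumes the default qubes_dom0/pool00 thin pool; lvs
# needs root, so it is invoked through sudo here.
import subprocess

pct = subprocess.check_output(
    ['sudo', 'lvs', '--noheadings', '--options', 'metadata_percent',
     'qubes_dom0/pool00'],
    encoding='utf-8').strip()
print('thin pool metadata used: {}%'.format(pct))
```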
The problem you're addressing (if any)
The new Qubes disk space widget is useful. However, some users of the default file system layout (ext4/etc. in an LVM thin-provisioned pool) are running out of metadata space well before running out of disk space.
Describe the solution you'd like
Add a popup indicating what steps the user can take to address when metadata is low. [...]
Where is the value to a user, and who might that user be?
Resolving disk space issues is rather straightforward: when one is approaching or has reached the limit, there are steps that a generally educated computer user can take to resolve them.
On the other hand, resolving metadata space issues is not straightforward, and some steps a generally educated computer user might take can make the situation worse. In addition, it is generally easy to get the pool and file systems into a non-recoverable state.
Describe alternatives you've considered
Additional context
https://groups.google.com/forum/#!topic/qubes-users/qq_ElNPdx-g
Relevant documentation you've consulted
I did not see anything addressing the issue on https://www.qubes-os.org
Related, non-duplicate issues
#5054