How to handle failing hard drive. #111
You can mount with options
Thank you for responding. Part of the problem is that in the folder I can mount (the backup), the damaged files are missing entirely: if there was a read error, no file exists at all. I'm simply left with paths that are broken. I guess I could recreate them as placeholder files, but would those mount and resolve with SecureFS? Second, beyond the explicit read errors, I worry that some files may have been silently corrupted. That's why I asked whether SecureFS does any verification on the blocks, as I'd like to run a check to verify all the data.
I've answered at least part of the second half of my question myself. Yes, SecureFS does keep track of chunks by checksum, so a corrupted chunk shows up as a failure to read the file and is also logged.

Of note, this also throws a read error to the program attempting the read. But it only triggers when actually reading the whole file, not just on listing it.
Because of that, with a basic script I can verify every file. This is not a perfect solution, and it won't help with recovery, but it does at least identify files that "successfully" copied yet contain bad data. I think a built-in way to verify data would be a good addition to SecureFS as a way to ensure consistency, though I know it would be no faster than my horrible script below.

In addition, I mounted my backup of the secure folder with trace logging on and enumerated every file, which created a log mapping for every file in the backup. However, I still have several known-bad filenames that I wasn't able to recover from that mapping, presumably because they failed to copy from the original source. Is there a way to identify these files from the encrypted filename alone, without having the original file? Thanks again.
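Something along these lines (a rough sketch: the mount point comes from the command line, and the 1 MiB read size is arbitrary):

```python
import os
import sys

# Walk the mounted SecureFS volume and read every file end to end.
# A chunk that fails its integrity check surfaces as an I/O error
# on read, which we catch and report.
mountpoint = sys.argv[1]
failed = []

for root, _dirs, files in os.walk(mountpoint):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "rb") as f:
                while f.read(1 << 20):  # read the whole file, 1 MiB at a time
                    pass
        except OSError as exc:
            failed.append(path)
            print(f"BAD: {path} ({exc})", file=sys.stderr)

print(f"{len(failed)} file(s) failed to read cleanly")
```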
I don't understand this sentence.
The simplest way is to create a repo with exactly the same filenames as your original but with all file contents empty, mount it, and enumerate it.
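Roughly like this, as a sketch rather than a supported tool. It assumes the lite format, where filenames are encrypted independently of file contents, and assumes the JSON key file is the only file in the repo that needs its real contents:

```python
import os
import shutil
import sys

# Mirror the directory tree of the damaged repo with zero-byte files
# bearing the same (encrypted) names. Only the JSON key file is copied
# with real contents, since it is needed to mount.
src = sys.argv[1]  # the damaged encrypted repo
dst = sys.argv[2]  # where to build the empty mirror

for root, _dirs, files in os.walk(src):
    target = os.path.join(dst, os.path.relpath(root, src))
    os.makedirs(target, exist_ok=True)
    for name in files:
        if name.endswith(".json"):  # assumed to be the key file
            shutil.copy2(os.path.join(root, name), os.path.join(target, name))
        else:
            open(os.path.join(target, name), "wb").close()
```

Mount the mirror with your key file and password, then list it: every plaintext path you see corresponds to one encrypted path in the mirror, including the files whose data you lost.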
My recommendation would be a command-line switch (perhaps --scrub, after filesystems like ZFS) that goes through and verifies, file by file, that all data is still good, and prints/logs the files it fails on. I know it may be excessive, but I still think it would be a useful tool.
@bryanlyon |
@AGenchev Are you suggesting that SecureFS should do ECC itself? As for compression, I won't do that. It leaks information and breaks security.
I am not sure whether I'd suggest it, as it might bring much higher complexity to the filesystem, for example when a file is updated and some blocks are rewritten. ECC on the media matters when you don't trust the media, assuming you've already taken the other measures, like ECC RAM and a non-overclocked system.
I have a SecureFS-secured folder on a drive that is failing. Nothing super critical is on the drive, but I want to make sure I get as much off as possible while also identifying any bad data. I have copied off as much data as was immediately feasible, but some files failed to copy with read errors. I also still have the JSON file with the encryption keys, and the password.
I can mount the backup of the data, but I don't want to use the original drive any more than necessary. However, I do have a list of files that failed to copy over to the backup. These filenames are encrypted, but I need to know the unencrypted filenames. Is there an easy way to recover the unencrypted filename from the encrypted path + filename without mounting the original folder?
In addition, I might be able to get more data off the drive with software like ddrescue, but I assume that unless the recovery is perfect, I'm going to lose chunks of files. Does SecureFS have a way to verify that decryption was correct (i.e., is a given block's or file's decryption verifiable)?
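By "verifiable" I mean the property that authenticated encryption gives you: flip one byte of ciphertext and decryption fails loudly instead of returning garbage. A generic illustration of that mechanism with AES-GCM (not SecureFS's actual code, just the behavior I'm asking about):

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)

ciphertext = aead.encrypt(nonce, b"block contents", None)

# Simulate a bad sector: corrupt a single byte of the ciphertext.
corrupted = bytearray(ciphertext)
corrupted[0] ^= 0x01

try:
    aead.decrypt(nonce, bytes(corrupted), None)
except InvalidTag:
    print("corruption detected: authentication tag mismatch")
```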
I know that this is a local problem and not the fault of SecureFS itself, but I would still like to know what can be done. Thanks.