.TH "SnapRAID Backup For Disk Arrays" 1
.SH NAME
snapraid \- SnapRAID Backup For Disk Arrays
.SH SYNOPSIS
snapraid [\-c, \-\-conf CONFIG]
.PD 0
.PP
.PD
[\-f, \-\-filter PATTERN] [\-d, \-\-filter\-disk NAME]
.PD 0
.PP
.PD
[\-m, \-\-filter\-missing] [\-e, \-\-filter\-error]
.PD 0
.PP
.PD
[\-a, \-\-audit\-only] [\-h, \-\-pre\-hash] [\-i, \-\-import DIR]
.PD 0
.PP
.PD
[\-p, \-\-plan PERC|bad|new|full]
.PD 0
.PP
.PD
[\-o, \-\-older\-than DAYS] [\-l, \-\-log FILE]
.PD 0
.PP
.PD
[\-Z, \-\-force\-zero] [\-E, \-\-force\-empty]
.PD 0
.PP
.PD
[\-U, \-\-force\-uuid] [\-D, \-\-force\-device]
.PD 0
.PP
.PD
[\-N, \-\-force\-nocopy] [\-F, \-\-force\-full]
.PD 0
.PP
.PD
[\-R, \-\-force\-realloc]
.PD 0
.PP
.PD
[\-S, \-\-start BLKSTART] [\-B, \-\-count BLKCOUNT]
.PD 0
.PP
.PD
[\-L, \-\-error\-limit NUMBER]
.PD 0
.PP
.PD
[\-v, \-\-verbose] [\-q, \-\-quiet]
.PD 0
.PP
.PD
status|smart|up|down|diff|sync|scrub|fix|check|list|dup
.PD 0
.PP
.PD
|pool|devices|touch|rehash
.PD 0
.PP
.PD
.PP
snapraid [\-V, \-\-version] [\-H, \-\-help] [\-C, \-\-gen\-conf CONTENT]
.PD 0
.PP
.PD
.SH DESCRIPTION
SnapRAID is a backup program designed for disk arrays, storing
parity information for data recovery in the event of up to six
disk failures.
.PP
Primarily intended for home media centers with large,
infrequently changing files, SnapRAID offers several features:
.PD 0
.IP \(bu
You can utilize disks already filled with files without the
need to reformat them, accessing them as usual.
.IP \(bu
All your data is hashed to ensure data integrity and prevent
silent corruption.
.IP \(bu
When the number of failed disks exceeds the parity count,
data loss is confined to the affected disks; data on
other disks remains accessible.
.IP \(bu
If you accidentally delete files on a disk, recovery is
possible.
.IP \(bu
Disks can have different sizes.
.IP \(bu
You can add disks at any time.
.IP \(bu
SnapRAID doesn\'t lock in your data; you can stop using it
anytime without reformatting or moving data.
.IP \(bu
To access a file, only a single disk needs to spin, saving
power and reducing noise.
.PD
.PP
For more information, please visit the official SnapRAID site:
.PP
.RS 4
http://www.snapraid.it/
.PD 0
.PP
.PD
.RE
.SH LIMITATIONS
SnapRAID is in between a RAID and a Backup program trying to get the best
benefits of them. Although it also has some limitations that you should
consider before using it.
.PP
The main one is that if a disk fails and you haven\'t recently synced,
you may be unable to perform a complete recovery.
More specifically, you may be unable to recover up to the size of
the files changed or deleted since the last sync operation.
This happens even if the changed or deleted files are not on the
failed disk. This is why SnapRAID is better suited for
data that rarely changes.
.PP
On the other hand, newly added files don\'t prevent recovering already
existing files. You may only lose the recently added files, if they are on
the failed disk.
.PP
Other SnapRAID limitations are:
.PD 0
.IP \(bu
With SnapRAID, you still have separate file\-systems for each disk.
With RAID you get a single large file\-system.
.IP \(bu
SnapRAID doesn\'t stripe data.
With RAID you get a speed boost with striping.
.IP \(bu
SnapRAID doesn\'t support real\-time recovery.
With RAID you do not have to stop working when a disk fails.
.IP \(bu
SnapRAID can recover damage only from a limited number of disks.
With a backup you can recover from a complete
failure of the whole disk array.
.IP \(bu
Only files, time\-stamps, symlinks and hardlinks are saved.
Permissions, ownership and extended attributes are not saved.
.PD
.SH GETTING STARTED
To use SnapRAID you first need to select one disk of your disk array
to dedicate to the \[dq]parity\[dq] information. With one disk for parity you
will be able to recover from a single disk failure, like RAID5.
.PP
If you want to be able to recover from more disk failures, like RAID6,
you must reserve additional disks for parity. Each additional parity
disk allows recovery from one more disk failure.
.PP
As parity disks, you have to pick the biggest disks in the array,
as the parity information may grow up to the size of the biggest data
disk in the array.
.PP
These disks are dedicated to storing the \[dq]parity\[dq] files.
You should not store your own data on them.
.PP
Then you have to define the \[dq]data\[dq] disks that you want to protect
with SnapRAID. The protection is more effective if these disks
contain data that rarely changes. For this reason it\'s better
NOT to include the Windows C:\\ disk, or the Unix /home, /var and /tmp
disks.
.PP
The list of files is saved in the \[dq]content\[dq] files, usually
stored on the data, parity or boot disks.
These files contain the details of your backup, with all the
check\-sums needed to verify its integrity.
The \[dq]content\[dq] file is stored in multiple copies, each one
on a different disk, to ensure that even in case of multiple
disk failures at least one copy is available.
.PP
For example, suppose that you are interested in only one level of
parity protection, and that your disks are mounted at:
.PP
.RS 4
/mnt/diskp <\- selected disk for parity
.PD 0
.PP
.PD
/mnt/disk1 <\- first disk to protect
.PD 0
.PP
.PD
/mnt/disk2 <\- second disk to protect
.PD 0
.PP
.PD
/mnt/disk3 <\- third disk to protect
.PD 0
.PP
.PD
.RE
.PP
you have to create the configuration file /etc/snapraid.conf with
the following options:
.PP
.RS 4
parity /mnt/diskp/snapraid.parity
.PD 0
.PP
.PD
content /var/snapraid/snapraid.content
.PD 0
.PP
.PD
content /mnt/disk1/snapraid.content
.PD 0
.PP
.PD
content /mnt/disk2/snapraid.content
.PD 0
.PP
.PD
data d1 /mnt/disk1/
.PD 0
.PP
.PD
data d2 /mnt/disk2/
.PD 0
.PP
.PD
data d3 /mnt/disk3/
.PD 0
.PP
.PD
.RE
.PP
If you are on Windows, you should use the Windows path format, with drive
letters and backslashes instead of slashes.
.PP
.RS 4
parity E:\\snapraid.parity
.PD 0
.PP
.PD
content C:\\snapraid\\snapraid.content
.PD 0
.PP
.PD
content F:\\array\\snapraid.content
.PD 0
.PP
.PD
content G:\\array\\snapraid.content
.PD 0
.PP
.PD
data d1 F:\\array\\
.PD 0
.PP
.PD
data d2 G:\\array\\
.PD 0
.PP
.PD
data d3 H:\\array\\
.PD 0
.PP
.PD
.RE
.PP
If you have many disks and you run out of drive letters, you can mount
disks directly in subfolders. See:
.PP
.RS 4
https://www.google.com/search?q=Windows+mount+point
.PD 0
.PP
.PD
.RE
.PP
At this point you are ready to start the \[dq]sync\[dq] command to build the
parity information.
.PP
.RS 4
snapraid sync
.PD 0
.PP
.PD
.RE
.PP
This process may take some hours the first time, depending on the amount
of data already present on the disks. If the disks are empty
the process is immediate.
.PP
You can stop it at any time by pressing Ctrl+C, and at the next run it
will resume where it was interrupted.
.PP
When this command completes, your data is SAFE.
.PP
Now you can start using your array as you like, and periodically
update the parity information by running the \[dq]sync\[dq] command.
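.PP
If you want to automate this, a common approach is to schedule a
nightly \[dq]sync\[dq] with cron. The following entry is only a sketch:
the 03:00 schedule and the /usr/bin/snapraid path are placeholders
to adapt to your system.
.PP
.RS 4
0 3 * * * /usr/bin/snapraid sync
.PD 0
.PP
.PD
.RE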
.SS Scrubbing
To periodically check the data and parity for errors, you can
run the \[dq]scrub\[dq] command.
.PP
.RS 4
snapraid scrub
.PD 0
.PP
.PD
.RE
.PP
This command verifies the data in your array, comparing it with
the hashes computed by the \[dq]sync\[dq] command.
.PP
Every run of the command checks about 8% of the array, excluding data
already scrubbed in the previous 10 days.
You can use the \-p, \-\-plan option to specify a different amount,
and the \-o, \-\-older\-than option to specify a different age in days.
For example, to check 5% of the array older than 20 days, use:
.PP
.RS 4
snapraid \-p 5 \-o 20 scrub
.PD 0
.PP
.PD
.RE
.PP
If silent or input/output errors are found during the process,
the corresponding blocks are marked as bad in the \[dq]content\[dq] file,
and listed by the \[dq]status\[dq] command.
.PP
.RS 4
snapraid status
.PD 0
.PP
.PD
.RE
.PP
To fix them, you can use the \[dq]fix\[dq] command, filtering for bad blocks with
the \-e, \-\-filter\-error option:
.PP
.RS 4
snapraid \-e fix
.PD 0
.PP
.PD
.RE
.PP
At the next \[dq]scrub\[dq] the errors will disappear from the \[dq]status\[dq] report
if they were really fixed. To make this fast, you can use \-p bad to scrub only
blocks marked as bad.
.PP
.RS 4
snapraid \-p bad scrub
.PD 0
.PP
.PD
.RE
.PP
Take care that running \[dq]scrub\[dq] on an unsynced array may result in
errors caused by removed or modified files. These errors are reported
in the \[dq]scrub\[dq] result, but the related blocks are not marked as bad.
.SS Pooling
To have all the files in your array shown in the same directory tree,
you can enable the \[dq]pooling\[dq] feature. It consists of creating a
read\-only virtual view of all the files in your array using symbolic
links.
.PP
You can configure the \[dq]pooling\[dq] directory in the configuration file with:
.PP
.RS 4
pool /pool
.PD 0
.PP
.PD
.RE
.PP
or, if you are on Windows, with:
.PP
.RS 4
pool C:\\pool
.PD 0
.PP
.PD
.RE
.PP
and then run the \[dq]pool\[dq] command to create or update the virtual view.
.PP
.RS 4
snapraid pool
.PD 0
.PP
.PD
.RE
.PP
If you are using a Unix platform and you want to share this directory
over the network with either Windows or Unix machines, you should add
the following options to your /etc/samba/smb.conf:
.PP
.RS 4
# In the global section of smb.conf
.PD 0
.PP
.PD
unix extensions = no
.PD 0
.PP
.PD
.RE
.PP
.RS 4
# In the share section of smb.conf
.PD 0
.PP
.PD
[pool]
.PD 0
.PP
.PD
comment = Pool
.PD 0
.PP
.PD
path = /pool
.PD 0
.PP
.PD
read only = yes
.PD 0
.PP
.PD
guest ok = yes
.PD 0
.PP
.PD
wide links = yes
.PD 0
.PP
.PD
follow symlinks = yes
.PD 0
.PP
.PD
.RE
.PP
On Windows the same sharing operation is not as straightforward,
because Windows shares the symbolic links as they are, and that
requires the network clients to resolve them remotely.
.PP
To make it work, besides sharing the pool directory over the network,
you must also share all the disks independently, using as share points
the disk names defined in the configuration file. You must also specify,
in the \[dq]share\[dq] option of the configuration file, the Windows UNC path that remote
clients need to use to access these shared disks.
.PP
For example, operating from a server named \[dq]darkstar\[dq], you can use
the options:
.PP
.RS 4
data d1 F:\\array\\
.PD 0
.PP
.PD
data d2 G:\\array\\
.PD 0
.PP
.PD
data d3 H:\\array\\
.PD 0
.PP
.PD
pool C:\\pool
.PD 0
.PP
.PD
share \\\\darkstar
.PD 0
.PP
.PD
.RE
.PP
and share the following directories over the network:
.PP
.RS 4
\\\\darkstar\\pool \-> C:\\pool
.PD 0
.PP
.PD
\\\\darkstar\\d1 \-> F:\\array
.PD 0
.PP
.PD
\\\\darkstar\\d2 \-> G:\\array
.PD 0
.PP
.PD
\\\\darkstar\\d3 \-> H:\\array
.PD 0
.PP
.PD
.RE
.PP
to allow remote clients to access all the files at \\\\darkstar\\pool.
.PP
You may also need to configure remote clients to enable access to remote
symlinks with the command:
.PP
.RS 4
fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1
.PD 0
.PP
.PD
.RE
.SS Undeleting
SnapRAID is more like a backup program than a RAID system, and it
can be used to restore or undelete files to their previous state using
the \-f, \-\-filter option :
.PP
.RS 4
snapraid fix \-f FILE
.PD 0
.PP
.PD
.RE
.PP
or for a directory:
.PP
.RS 4
snapraid fix \-f DIR/
.PD 0
.PP
.PD
.RE
.PP
You can also use it to recover only accidentally deleted files inside
a directory using the \-m, \-\-filter\-missing option, which restores
only missing files, leaving all the others untouched.
.PP
.RS 4
snapraid fix \-m \-f DIR/
.PD 0
.PP
.PD
.RE
.PP
Or to recover all the deleted files on all the drives with:
.PP
.RS 4
snapraid fix \-m
.PD 0
.PP
.PD
.RE
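.PP
You can also combine the filters. For example, to restore only the
files deleted from a single disk, named \[dq]d1\[dq] here as in the earlier
configuration example, you could run:
.PP
.RS 4
snapraid fix \-m \-d d1
.PD 0
.PP
.PD
.RE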
.SS Recovering
The worst happened, and you lost one or more disks!
.PP
DO NOT PANIC! You will be able to recover them!
.PP
The first thing you have to do is avoid further changes to your disk array.
Disable any remote connection to it and any scheduled process, including any
scheduled SnapRAID nightly sync or scrub.
.PP
Then proceed with the following steps.
.SS STEP 1 \-> Reconfigure
You need some space to recover into, ideally an additional spare disk,
but if needed, an external USB or remote disk is also enough.
.PP
Change the SnapRAID configuration file so that the \[dq]data\[dq] or \[dq]parity\[dq]
option of the failed disk points to a place where you have enough empty
space to recover the files.
.PP
For example, if the disk \[dq]d1\[dq] failed, you can change from:
.PP
.RS 4
data d1 /mnt/disk1/
.PD 0
.PP
.PD
.RE
.PP
to:
.PP
.RS 4
data d1 /mnt/new_spare_disk/
.PD 0
.PP
.PD
.RE
.PP
If the disk to recover is a parity disk, change the appropriate \[dq]parity\[dq]
option.
If you have multiple broken disks, change all their configuration options.
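.PP
For example, using the paths of the earlier example, if the parity disk
failed you could change from:
.PP
.RS 4
parity /mnt/diskp/snapraid.parity
.PD 0
.PP
.PD
.RE
.PP
to something like the following, where the new path is just a placeholder
for wherever you have enough free space:
.PP
.RS 4
parity /mnt/new_spare_disk/snapraid.parity
.PD 0
.PP
.PD
.RE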
.SS STEP 2 \-> Fix
Run the fix command, storing the log in an external file with:
.PP
.RS 4
snapraid \-d NAME \-l fix.log fix
.PD 0
.PP
.PD
.RE
.PP
Where NAME is the name of the disk, like \[dq]d1\[dq] in our previous example.
If the disk to recover is a parity disk, use the \[dq]parity\[dq] or \[dq]2\-parity\[dq]
name.
If you have multiple broken disks, use multiple \-d options to specify all
of them.
.PP
This command will take a long time.
.PP
Take care that you also need a few gigabytes of free space to store the
fix.log file. Run the command from a disk with some free space.
.PP
Now you have recovered everything recoverable. If a file is partially or totally
unrecoverable, it is renamed adding the \[dq].unrecoverable\[dq] extension.
.PP
You can get a detailed list of all the unrecoverable blocks in the fix.log file
by checking all the lines starting with \[dq]unrecoverable:\[dq]
.PP
If you are not satisfied with the recovery, you can retry it as many
times as you wish.
.PP
For example, if you removed files from the array after the last
\[dq]sync\[dq], this may result in some other files not being recovered.
In this case, you can retry the \[dq]fix\[dq] using the \-i, \-\-import option,
specifying where these files are now, to include them again in the
recovery process.
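.PP
For example, assuming the removed files are now available in
/mnt/usb/restored (an arbitrary path used here only for illustration),
you could run:
.PP
.RS 4
snapraid \-d d1 \-i /mnt/usb/restored \-l fix.log fix
.PD 0
.PP
.PD
.RE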
.PP
If you are satisfied with the recovery, you can now proceed further,
but take care that after syncing you cannot retry the \[dq]fix\[dq] command
anymore!
.SS STEP 3 \-> Check
As a paranoia check, you can now run a \[dq]check\[dq] command to ensure that
everything is OK on the recovered disk.
.PP
.RS 4
snapraid \-d NAME \-a check
.PD 0
.PP
.PD
.RE
.PP
Where NAME is the name of the disk, like \[dq]d1\[dq] as in our previous example.
.PP
The options \-d and \-a tell SnapRAID to check only the specified disk,
and ignore all the parity data.
.PP
This command will take a long time, but if you are not paranoid,
you can skip it.
.SS STEP 4 \-> Sync
Run the \[dq]sync\[dq] command to re\-synchronize the array with the new disk.
.PP
.RS 4
snapraid sync
.PD 0
.PP
.PD
.RE
.PP
If everything is recovered, this command is immediate.
.SH COMMANDS
SnapRAID provides a few simple commands that allow you to:
.PD 0
.IP \(bu
Print the status of the array \-> \[dq]status\[dq]
.IP \(bu
Control the disks \-> \[dq]smart\[dq], \[dq]up\[dq], \[dq]down\[dq]
.IP \(bu
Make a backup/snapshot \-> \[dq]sync\[dq]
.IP \(bu
Periodically check the data \-> \[dq]scrub\[dq]
.IP \(bu
Restore the last backup/snapshot \-> \[dq]fix\[dq]
.PD
.PP
Note that commands must be written in lower case.
.SS status
Prints a summary of the state of the disk array.
.PP
It includes information about parity fragmentation, how long blocks
have gone without being checked, and all the recorded silent
errors encountered while scrubbing.
.PP
Note that the information presented refers to the last time you
ran \[dq]sync\[dq]. Later modifications are not taken into account.
.PP
If bad blocks were detected, their block numbers are listed.
To fix them, you can use the \[dq]fix \-e\[dq] command.
.PP
It also shows a graph representing the last time each block
was scrubbed or synced. Scrubbed blocks are shown with \'*\',
blocks synced but not yet scrubbed with \'o\'.
.PP
Nothing is modified.
.SS smart
Prints a SMART report of all the disks of the array.
.PP
It includes an estimate of the probability of failure in the next
year, allowing you to plan maintenance replacements of the disks that show
suspicious attributes.
.PP
This probability estimate is obtained by correlating the SMART attributes
of the disks with the Backblaze data available at:
.PP
.RS 4
https://www.backblaze.com/hard\-drive\-test\-data.html
.PD 0
.PP
.PD
.RE
.PP
If SMART reports that a disk is failing, \[dq]FAIL\[dq] or \[dq]PREFAIL\[dq] is printed
for that disk, and SnapRAID returns with an error.
In this case an immediate replacement of the disk is highly recommended.
.PP
Other possible strings are:
.RS 4
.PD 0
.HP 4
.I logfail
In the past some attributes were lower than
the threshold.
.HP 4
.I logerr
The device error log contains errors.
.HP 4
.I selferr
The device self\-test log contains errors.
.PD
.RE
.PP
If the \-v, \-\-verbose option is specified, a deeper statistical analysis
is provided. This analysis can help you decide whether you need more
or less parity.
.PP
This command uses the \[dq]smartctl\[dq] tool, and is equivalent to running
\[dq]smartctl \-a\[dq] on all the devices.
.PP
If your devices are not auto\-detected correctly, you can configure
a custom command using the \[dq]smartctl\[dq] option in the configuration
file.
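.PP
For example, a configuration line like the following tells SnapRAID to
pass extra options to smartctl for the disk named \[dq]d1\[dq]. This is only
an illustration: the \-d sat option is a placeholder for whatever your
controller needs, %s stands for the device name, and you should check
the snapraid.conf documentation for the exact syntax supported by your
version.
.PP
.RS 4
smartctl d1 \-d sat %s
.PD 0
.PP
.PD
.RE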
.PP
Nothing is modified.
.SS up
Spins up all the disks of the array.
.PP
You can spin up only some specific disks using the \-d, \-\-filter\-disk option.
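.PP
For example, to spin up only the disks named \[dq]d1\[dq] and \[dq]d2\[dq],
using the names as defined in the configuration file:
.PP
.RS 4
snapraid \-d d1 \-d d2 up
.PD 0
.PP
.PD
.RE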
.PP
Take care that spinning up all the disks at the same time needs a lot of power.
Ensure that your power supply can sustain that.
.PP
Nothing is modified.
.SS down
Spins down all the disks of the array.
.PP
This command uses the \[dq]smartctl\[dq] tool, and is equivalent to running
\[dq]smartctl \-s standby,now\[dq] on all the devices.
.PP
You can spin down only some specific disks using the \-d, \-\-filter\-disk option.
.PP
Nothing is modified.
.SS diff
Lists all the files modified since the last \[dq]sync\[dq] that need to have
their parity data recomputed.
.PP
This command doesn\'t check the file data, but only the file time\-stamp,
size and inode.
.PP
At the end of the command, you\'ll get a summary of the file changes
grouped by:
.RS 4
.PD 0
.HP 4
.I equal
Files unchanged from before.
.HP 4
.I added
Files added that were not present before.
.HP 4
.I removed
Files removed.
.HP 4
.I updated
Files with a different size or time\-stamp, meaning that
they were modified.
.HP 4
.I moved
Files moved to a different directory of the same disk.
They are identified by having the same name, size, time\-stamp
and inode, but different directory.
.HP 4
.I copied
Files copied within the same disk or to a different disk. Note that if
they were actually moved to a different disk, you\'ll also have
them counted in \[dq]removed\[dq].
They are identified by having the same name, size, and
time\-stamp. But if the sub\-second time\-stamp is zero,
then the full path must match, not only the name.
.HP 4
.I restored
Files with a different inode but with name, size and time\-stamp
matching. These are usually files restored after being deleted.
.PD
.RE
.PP
If a \[dq]sync\[dq] is required, the process return code is 2 instead of the
default 0. A return code of 1 indicates a generic error condition.
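.PP
For example, a minimal wrapper script, shown only as a sketch to adapt
to your setup, can use this return code to run \[dq]sync\[dq] only when
something actually changed:
.PP
.RS 4
#!/bin/sh
.PD 0
.PP
.PD
# exit code 2 from \[dq]diff\[dq] means a sync is required
.PD 0
.PP
.PD
snapraid diff
.PD 0
.PP
.PD
if [ $? \-eq 2 ]; then
.PD 0
.PP
.PD
snapraid sync
.PD 0
.PP
.PD
fi
.PD 0
.PP
.PD
.RE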
.PP
Nothing is modified.
.SS sync
Updates the parity information. All the modified files
in the disk array are read, and the corresponding parity
data is updated.
.PP
You can stop this process at any time by pressing Ctrl+C,
without losing the work already done.
At the next run the \[dq]sync\[dq] process will resume where it was
interrupted.
.PP
If silent or input/output errors are found during the process,
the corresponding blocks are marked as bad.
.PP
Files are identified by path and/or inode and checked by
size and time\-stamp.
If the file size or time\-stamp are different, the parity data
is recomputed for the whole file.
If the file is moved or renamed in the same disk, keeping the
same inode, the parity is not recomputed.
If the file is moved to another disk, the parity is recomputed,
but the previously computed hash information is kept.
.PP
The \[dq]content\[dq] and \[dq]parity\[dq] files are modified if necessary.
The files in the array are NOT modified.
.SS scrub
Scrubs the array, checking for silent or input/output errors in data
and parity disks.
.PP
For each command invocation, about 8% of the array is checked, excluding
anything already scrubbed in the last 10 days.
This means that, scrubbing once a week, every bit of data is checked
at least once every three months.
.PP
You can define a different scrub plan or amount using the \-p, \-\-plan
option that takes as argument:
.RS 4
.PD 0
.HP 4
.I bad
Scrub blocks marked bad.
.HP 4
.I new
Scrub just synced blocks not yet scrubbed.
.HP 4
.I full
Scrub everything.
.HP 4
.I 0\-100
Scrub the exact percentage of blocks.
.PD
.RE
.PP
If you specify a percentage amount, you can also use the \-o, \-\-older\-than
option to define how old the blocks should be.
The oldest blocks are scrubbed first, ensuring an optimal check.
If instead you want to scrub the just\-synced blocks not yet scrubbed,
you should use the \[dq]\-p new\[dq] option.
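.PP
For example, to scrub only the just\-synced blocks right after a
\[dq]sync\[dq], you can run:
.PP
.RS 4
snapraid \-p new scrub
.PD 0
.PP
.PD
.RE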
.PP
To get the details of the scrub status use the \[dq]status\[dq] command.
.PP
For any silent or input/output error found the corresponding blocks
are marked as bad in the \[dq]content\[dq] file.
These bad blocks are listed in \[dq]status\[dq], and can be fixed with \[dq]fix \-e\[dq].
After the fix, at the next scrub they will be rechecked, and if found
corrected, the bad mark will be removed.
To scrub only the bad blocks, you can use the \[dq]scrub \-p bad\[dq] command.
.PP
It\'s recommended to run \[dq]scrub\[dq] only on a synced array, to avoid
reporting errors caused by unsynced data. Such errors are recognized
as not being silent errors, and the blocks are not marked as bad,
but the errors are still reported in the output of the command.
.PP
Files are identified only by path, and not by inode.
.PP
The \[dq]content\[dq] file is modified to update the time of the last check
of each block, and to mark bad blocks.
The \[dq]parity\[dq] files are NOT modified.
The files in the array are NOT modified.
.SS fix
Fixes all the files and the parity data.
.PP
All the files and the parity data are compared with the snapshot
state saved in the last \[dq]sync\[dq].
If a difference is found, it\'s reverted to the stored snapshot.
.PP
The \[dq]fix\[dq] command doesn\'t differentiate between errors and
intentional modifications. It unconditionally reverts the file state
to that of the last \[dq]sync\[dq].
.PP
If no other option is specified the full array is processed.
Use the filter options to select a subset of files or disks to operate on.
.PP
To fix only the blocks marked bad during \[dq]sync\[dq] and \[dq]scrub\[dq],
use the \-e, \-\-filter\-error option.
Unlike the other filter options, with this one the fixes are
applied only to files that have not been modified since the latest \[dq]sync\[dq].
.PP
All the files that cannot be fixed are renamed adding the
\[dq].unrecoverable\[dq] extension.
.PP
Before fixing, the full array is scanned to find any file moved
after the last \[dq]sync\[dq] operation.
These files are identified by their time\-stamp, ignoring their name
and directory, and are used in the recovery process if necessary.
If you moved some of them outside the array, you can use the \-i, \-\-import
option to specify additional directories to scan.
.PP
Files are identified only by path, and not by inode.
.PP
The \[dq]content\[dq] file is NOT modified.
The \[dq]parity\[dq] files are modified if necessary.
The files in the array are modified if necessary.
.SS check
Verifies all the files and the parity data.
.PP
It works like \[dq]fix\[dq], but it only simulates a recovery and no change
is written to the array.
.PP
This command is mostly intended for manual verification,
like after a recovery process or in other special conditions.
For periodic and scheduled checks, use \[dq]scrub\[dq].
.PP
If you use the \-a, \-\-audit\-only option, only the file
data is checked, and the parity data is ignored for a
faster run.
.PP
Files are identified only by path, and not by inode.
.PP
Nothing is modified.
.SS list
Lists all the files contained in the array at the time of the
last \[dq]sync\[dq].
.PP
Nothing is modified.
.SS dup
Lists all the duplicate files. Two files are assumed equal if their
hashes match. The file data is not read; only the
pre\-computed hashes are used.
.PP
Nothing is modified.
.SS pool
Creates or updates a virtual view of all the files of your disk array
in the \[dq]pooling\[dq] directory.
.PP
The files are not really copied here, but just linked using
symbolic links.
.PP
When updating, all the existing symbolic links and empty
sub\-directories are deleted and replaced with the new
view of the array. Any other regular file is left in place.
.PP
Nothing is modified outside the pool directory.
.SS devices
Prints the low level devices used by the array.
.PP
This command prints the device associations in use in the array,
and it\'s mainly intended as a script interface.
.PP
The first two columns are the low level device id and path.
The next two columns are the high level device id and path.
The last column is the disk name in the array.
.PP
In most cases you have one low level device for each disk in the
array, but in some more complex configurations, you may have multiple
low level devices used by a single disk in the array.
.PP
Nothing is modified.
.SS touch
Arbitrarily sets the sub\-second time\-stamp of all the files
that have it set to zero.
.PP
This improves SnapRAID\'s ability to recognize moved
and copied files, as it makes the time\-stamp almost unique,
removing possible duplicates.
.PP
More specifically, if the sub\-second time\-stamp is not zero,
a moved or copied file is identified as such if it matches
the name, size and time\-stamp. If instead the sub\-second time\-stamp
is zero, it\'s considered a copy only if it matches the full path,
size and time\-stamp.
.PP
Note that the second\-precision time\-stamp is not modified,
so all the dates and times of your files are maintained.
.SS rehash
Schedules a rehash of the whole array.
.PP
This command changes the kind of hash used, typically when upgrading
from a 32\-bit system to a 64\-bit one, to switch from
MurmurHash3 to the faster SpookyHash.
.PP
If you are already using the optimal hash, this command
does nothing and tells you that nothing has to be done.
.PP
The rehash isn\'t done immediately, but it takes place
progressively during \[dq]sync\[dq] and \[dq]scrub\[dq].
.PP
You can get the rehash state using \[dq]status\[dq].
.PP
During the rehash, SnapRAID maintains full functionality,
with the only exception that \[dq]dup\[dq] is not able to detect duplicated
files that use a different hash.
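.PP
For example, to schedule the rehash and then follow its progress, you
can run:
.PP
.RS 4
snapraid rehash
.PD 0
.PP
.PD
snapraid status
.PD 0
.PP
.PD
.RE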
.SH OPTIONS
SnapRAID provides the following options:
.TP
.B \-c, \-\-conf CONFIG
Selects the configuration file to use. If not specified in Unix