Panic on reading some file. #145

Closed

galaxy001 opened this issue Oct 17, 2017 · 8 comments

Comments

galaxy001 commented Oct 17, 2017

gocryptfs v1.4.1-19-g64e5906 without_openssl; go-fuse v20170619-16-gbd6c960; 2017-10-09 go1.6.2

Setup:

gocryptfs -reverse /volume1/dsG /tmp/dsg
cd /tmp/
gocryptfs -ro -fg -fusedebug ./dsg ./t

Commands:

# cat t/err.txt
bd736ef31f64aaf3c42544e8f3fb33e96c9f6f40  sync20160314/bt_tmp/[VCB-Studio] Log Horizon 2 [Ma10p_1080p]/SPs/[VCB-Studio] Log Horizon 2 [SP05][Ma10p_1080p][x265_flac].mkv
# cp -a  't/sync20160314/bt_tmp/[VCB-Studio] Log Horizon 2 [Ma10p_1080p]/SPs/[VCB-Studio] Log Horizon 2 [SP05][Ma10p_1080p][x265_flac].mkv' .
cp: error reading ‘t/sync20160314/bt_tmp/[VCB-Studio] Log Horizon 2 [Ma10p_1080p]/SPs/[VCB-Studio] Log Horizon 2 [SP05][Ma10p_1080p][x265_flac].mkv’: Transport endpoint is not connected
cp: failed to extend ‘./[VCB-Studio] Log Horizon 2 [SP05][Ma10p_1080p][x265_flac].mkv’: Transport endpoint is not connected
cp: failed to close ‘t/sync20160314/bt_tmp/[VCB-Studio] Log Horizon 2 [Ma10p_1080p]/SPs/[VCB-Studio] Log Horizon 2 [SP05][Ma10p_1080p][x265_flac].mkv’: Transport endpoint is not connected

Log:

Filesystem mounted and ready.
2017/10/17 15:10:24 Dispatch 2: LOOKUP, NodeId: 1. names: [err.txt] 8 bytes
2017/10/17 15:10:24 Serialize 2: LOOKUP code: OK value: {NodeId: 3 Generation=0 EntryValid=1.000 AttrValid=1.000 Attr={M0100644 SZ=169 L=1 0:0 B8*4096 i0:106004617 A 1508222424.062991072 M 1508222422.041082754 C 1508222422.041082754}}
2017/10/17 15:10:24 Dispatch 3: OPEN, NodeId: 3. data: {O_RDONLY,0x8000}
2017/10/17 15:10:24 Serialize 3: OPEN code: OK value: {Fh 2 }
2017/10/17 15:10:24 Dispatch 4: READ, NodeId: 3. data: {Fh 2 off 0 sz 4096  L 0 RDONLY,0x8000}
2017/10/17 15:10:24 Serialize 4: READ code: OK value:  169 bytes data

2017/10/17 15:10:24 Dispatch 5: GETATTR, NodeId: 3. data: {Fh 2}
2017/10/17 15:10:24 Serialize 5: GETATTR code: OK value: {A1.000000000 {M0100644 SZ=169 L=1 0:0 B8*4096 i0:106004617 A 1508222424.062991072 M 1508222422.041082754 C 1508222422.041082754}}
2017/10/17 15:10:24 Dispatch 6: FLUSH, NodeId: 3. data: {Fh 2}
2017/10/17 15:10:24 Serialize 6: FLUSH code: OK value:
2017/10/17 15:10:24 Dispatch 7: RELEASE, NodeId: 3. data: {Fh 2 0x8000  L0}
2017/10/17 15:10:24 Serialize 7: RELEASE code: OK value:
2017/10/17 15:10:29 Dispatch 8: LOOKUP, NodeId: 1. names: [sync20160314] 13 bytes
2017/10/17 15:10:29 Serialize 8: LOOKUP code: OK value: {NodeId: 4 Generation=0 EntryValid=1.000 AttrValid=1.000 Attr={M040755 SZ=4096 L=3 0:0 B8*4096 i0:9438014 A 1508223152.944000946 M 1462019437.860740971 C 1462019437.860740971}}
2017/10/17 15:10:29 Dispatch 9: LOOKUP, NodeId: 4. names: [bt_tmp] 7 bytes
2017/10/17 15:10:29 Serialize 9: LOOKUP code: OK value: {NodeId: 5 Generation=0 EntryValid=1.000 AttrValid=1.000 Attr={M040755 SZ=20480 L=207 500:100 B40*4096 i0:36183444 A 1508136618.869234753 M 1487772954.491800157 C 1487772954.491800157}}
2017/10/17 15:10:29 Dispatch 10: LOOKUP, NodeId: 5. names: [[VCB-Studio] Log Horizon 2 [Ma10p_1080p]] 41 bytes
2017/10/17 15:10:29 Serialize 10: LOOKUP code: OK value: {NodeId: 6 Generation=0 EntryValid=1.000 AttrValid=1.000 Attr={M040755 SZ=4096 L=6 500:100 B8*4096 i0:125829206 A 1508136643.945378779 M 1456058470.000000000 C 1457264362.099268846}}
2017/10/17 15:10:29 Dispatch 11: LOOKUP, NodeId: 6. names: [SPs] 4 bytes
2017/10/17 15:10:29 Serialize 11: LOOKUP code: OK value: {NodeId: 7 Generation=0 EntryValid=1.000 AttrValid=1.000 Attr={M040755 SZ=4096 L=2 500:100 B8*4096 i0:28246063 A 1508136644.154371644 M 1456058470.000000000 C 1457078708.583157033}}
2017/10/17 15:10:29 Dispatch 12: LOOKUP, NodeId: 7. names: [[VCB-Studio] Log Horizon 2 [SP05][Ma10p_1080p][x265_flac].mkv] 62 bytes
2017/10/17 15:10:29 Serialize 12: LOOKUP code: OK value: {NodeId: 8 Generation=0 EntryValid=1.000 AttrValid=1.000 Attr={M0100644 SZ=149948714 L=1 500:100 B292872*4096 i0:57933834 A 1508219623.484851758 M 1456075291.000000000 C 1457078708.583157033}}
2017/10/17 15:10:29 Dispatch 13: OPEN, NodeId: 8. data: {O_RDONLY,0x28000}
2017/10/17 15:10:29 Serialize 13: OPEN code: OK value: {Fh 2 }
2017/10/17 15:10:29 Dispatch 14: READ, NodeId: 8. data: {Fh 2 off 0 sz 393216  L 0 RDONLY,0x28000}
panic: runtime error: slice bounds out of range

goroutine 1 [running]:
panic(0x6b90e0, 0xc820010080)
	/usr/lib/go-1.6/src/runtime/panic.go:481 +0x3e6
github.com/rfjakob/gocryptfs/internal/fusefrontend.(*file).doRead(0xc82008c980, 0xc82014e000, 0x0, 0x60000, 0x0, 0x60000, 0x0, 0x0, 0x0, 0x0)
	/home/Galaxy/go/src/github.com/rfjakob/gocryptfs/internal/fusefrontend/file.go:176 +0x165f
github.com/rfjakob/gocryptfs/internal/fusefrontend.(*file).Read(0xc82008c980, 0xc82014e000, 0x60000, 0x60000, 0x0, 0x0, 0x0, 0xc800000000)
	/home/Galaxy/go/src/github.com/rfjakob/gocryptfs/internal/fusefrontend/file.go:239 +0x442
github.com/hanwen/go-fuse/fuse/pathfs.(*pathInode).Read(0xc8200f07b0, 0x7efc6fc23220, 0xc82008c980, 0xc82014e000, 0x60000, 0x60000, 0x0, 0xc8200743e0, 0x0, 0x0, ...)
	/home/Galaxy/go/src/github.com/hanwen/go-fuse/fuse/pathfs/pathfs.go:729 +0x6b
github.com/hanwen/go-fuse/fuse/nodefs.(*rawBridge).Read(0xc820012de0, 0xc8200743c8, 0xc82014e000, 0x60000, 0x60000, 0x0, 0x0, 0xc820146000)
	/home/Galaxy/go/src/github.com/hanwen/go-fuse/fuse/nodefs/fsops.go:455 +0x13d
github.com/hanwen/go-fuse/fuse.doRead(0xc820084000, 0xc820074240)
	/home/Galaxy/go/src/github.com/hanwen/go-fuse/fuse/opcode.go:321 +0xb5
github.com/hanwen/go-fuse/fuse.(*Server).handleRequest(0xc820084000, 0xc820074240, 0xc820074240)
	/home/Galaxy/go/src/github.com/hanwen/go-fuse/fuse/server.go:405 +0x6b0
github.com/hanwen/go-fuse/fuse.(*Server).loop(0xc820084000, 0x0)
	/home/Galaxy/go/src/github.com/hanwen/go-fuse/fuse/server.go:377 +0xde
github.com/hanwen/go-fuse/fuse.(*Server).Serve(0xc820084000)
	/home/Galaxy/go/src/github.com/hanwen/go-fuse/fuse/server.go:325 +0x54
main.doMount(0xc820072120, 0x0)
	/home/Galaxy/go/src/github.com/rfjakob/gocryptfs/mount.go:147 +0xd1a
main.main()
	/home/Galaxy/go/src/github.com/rfjakob/gocryptfs/main.go:261 +0x1312
rfjakob (Owner) commented Oct 17, 2017

It crashes on a READ request with size 393216 bytes = 384 kiB:

  • READ, NodeId: 8. data: {Fh 2 off 0 sz 393216 L 0 RDONLY,0x28000}

This is a buffer overrun: the Linux kernel normally limits the requests to 128 kiB, and that is the size of the buffer. Is this on macOS?
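
For reference, here is a minimal sketch of the failure mode (not the actual gocryptfs code; names and sizes are illustrative): a buffer sized for the usual 128 kiB kernel limit gets sliced to the length of the incoming request, and a 384 kiB request blows past the end of it.

```go
package main

import "fmt"

const maxKernelRead = 128 * 1024 // what the buffer was sized for

func fillReadBuffer(reqSize int) []byte {
	buf := make([]byte, maxKernelRead)
	// Panics with "slice bounds out of range" when reqSize > cap(buf),
	// e.g. for the "sz 393216" request seen in the log above.
	return buf[:reqSize]
}

func main() {
	fmt.Println(len(fillReadBuffer(4096)))   // fine
	fmt.Println(len(fillReadBuffer(393216))) // panic: runtime error: slice bounds out of range
}
```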

galaxy001 (Author) commented Oct 17, 2017

It is on a Synology NAS running Linux 3.10.102 #15152 SMP Fri Oct 6 18:13:48 CST 2017 x86_64 GNU/Linux synology_cedarview_1813+.

The hash mismatch described below is not fully verified yet; let's focus on the buffer overrun above first. I hope there can be a patch to detect this and provide some workaround.

I noticed that the SHA1 hash of the file changed from bd736ef31f64aaf3c42544e8f3fb33e96c9f6f40 to b386265ed53ec32aae38bbb228cdf8c3f2302507 after rsyncing it to another Linux server, which is why I went back to the NAS to check the same file. On that Linux server the FUSE mount works fine; only this file has a different hash value. I compared the first few hundred bytes and they are identical.

galaxy001 (Author) commented:

# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15704
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 15704
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Which one should I enlarge?

rfjakob (Owner) commented Oct 17, 2017

This is a kernel constant, FUSE_MAX_PAGES_PER_REQ. You can only change it in the kernel source code and compile the kernel yourself, which seems to be what Synology has done, probably to improve performance with NTFS (the ntfs-3g driver is FUSE-based).

I will fix this in gocryptfs. We will probably not take advantage of the larger request size, but we should not crash.
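
To make the arithmetic behind those numbers explicit, here is a small sketch; the 4 kiB page size and the 32-pages-per-request default are assumptions based on common kernel defaults, not taken from this thread:

```go
package main

import "fmt"

const (
	pageSize           = 4096   // common page size on x86_64
	fuseMaxPagesPerReq = 32     // historical default for FUSE_MAX_PAGES_PER_REQ
	defaultMaxRequest  = fuseMaxPagesPerReq * pageSize // 131072 bytes = 128 kiB
	observedRequest    = 393216 // "sz 393216" from the fusedebug log above
)

func main() {
	// The observed request spans 96 pages, so this kernel was built to
	// allow at least 96 pages per FUSE request instead of the usual 32.
	fmt.Println(defaultMaxRequest, observedRequest/pageSize)
}
```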

rfjakob added a commit that referenced this issue Oct 17, 2017
Our byte cache pools are sized acc. to MAX_KERNEL_WRITE, but the
running kernel may have a higher limit set. Clamp to what we can
handle.

Fixes a panic on a Synology NAS reported at
#145
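
A rough sketch of the clamping idea described in that commit message (not the actual patch; the constant and helper names are made up for illustration):

```go
package main

import "fmt"

// maxKernelWrite stands in for gocryptfs's MAX_KERNEL_WRITE constant.
const maxKernelWrite = 128 * 1024

// clampRequest limits an incoming request length to what the
// fixed-size byte pools can hold.
func clampRequest(reqSize int) int {
	if reqSize > maxKernelWrite {
		return maxKernelWrite
	}
	return reqSize
}

func main() {
	fmt.Println(clampRequest(393216)) // 131072: serve only what fits
}
```

As the later commits in this thread show, this clamping approach was eventually reverted in favour of lowering the kernel-side request size.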
rfjakob (Owner) commented Oct 17, 2017

Fix pushed as 3009ec9

rfjakob (Owner) commented Oct 18, 2017

Do you still get the crash?

If not, we can focus on the hash mismatch.

galaxy001 (Author) commented:

No crash now.

rfjakob (Owner) commented Oct 19, 2017

Ok good. Closing this.

rfjakob closed this as completed Oct 19, 2017
rfjakob added a commit that referenced this issue Oct 21, 2017
We cannot return less data than requested to the kernel!

From https://libfuse.github.io/doxygen/structfuse__operations.html:

  Read should return exactly the number of bytes
  requested except on EOF or error, otherwise the
  rest of the data will be substituted with
  zeroes.

Reverts commit 3009ec9 minus
the formatting improvements we want to keep.

Fixes #147
Reopens #145
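
A hedged illustration of the rule quoted in that commit message (not gocryptfs code): a short read in the middle of a file would be padded with zeroes by the kernel, so a read handler has to fill the whole buffer unless it really hits EOF. io.ReadFull expresses that behaviour:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	src := strings.NewReader("hello, fuse")
	buf := make([]byte, 8)

	// Full buffer away from EOF: exactly the requested number of bytes.
	n, err := io.ReadFull(src, buf)
	fmt.Println(n, err, string(buf[:n])) // 8 <nil> "hello, f"

	// Near EOF a short read is fine and expected.
	n, err = io.ReadFull(src, buf)
	fmt.Println(n, err, string(buf[:n])) // 3 unexpected EOF "use"
}
```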
rfjakob added a commit that referenced this issue Oct 21, 2017
We use fixed-size byte slice pools (sync.Pool) and cannot
handle larger requests. So ask the kernel to not send
bigger ones.

Fixes #145
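
A sketch of the two halves of that final approach (assumptions: the constant name, the pool, and the option values below are illustrative, not the actual gocryptfs patch; fuse.MountOptions and its MaxWrite and Options fields are go-fuse's knobs for the mount):

```go
package main

import (
	"fmt"
	"sync"

	"github.com/hanwen/go-fuse/fuse"
)

const maxKernelWrite = 128 * 1024

// Fixed-size byte slices; requests larger than maxKernelWrite cannot be
// served from this pool.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, maxKernelWrite) },
}

func mountOptions() *fuse.MountOptions {
	return &fuse.MountOptions{
		// Cap write requests via the FUSE INIT reply...
		MaxWrite: maxKernelWrite,
		// ...and cap read requests via the max_read mount option.
		Options: []string{fmt.Sprintf("max_read=%d", maxKernelWrite)},
	}
}

func main() {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf)
	fmt.Println("pool buffer:", len(buf), "MaxWrite:", mountOptions().MaxWrite)
}
```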