Slowdown copy files into gocrypt directory #160

Closed
Kekskruemel opened this issue Nov 21, 2017 · 17 comments

@Kekskruemel

Hi,

sadly I have slowdowns when copying large files into a gocryptfs directory.
RAM and CPU are barely used.

It slows down from ~100 MB/s to 2 MB/s, goes back up to full speed for some time, and then slows down again.

If I copy the files into a non-encrypted directory, everything is fine.
What is the best way to troubleshoot this?

Using unRAID 6.3.5 on an HP MicroServer G8.

Thx!

@Aikhjarto

@Kekskruemel What HDD are you using? This is the typical behavior of HDDs using SMR (shingled magnetic recording), like the Seagate Archive.
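
One way to check whether a drive is an SMR model (a sketch, assuming smartmontools is installed; sdX is a placeholder) is to print the drive identity and look up the reported model number against the vendor's SMR drive lists:

# Print model, serial and firmware of the suspect drive (placeholder device name)
smartctl -i /dev/sdX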

@rfjakob
Owner

rfjakob commented Nov 21, 2017

Good point, can you look at the "%util" column in

iostat -x 1 -d

while it is slow? If 100%, we are blocked by the disk.

Otherwise it may be this kernel bug: #11. What kernel version do you run?
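
For reference, a sketch of what to run while the copy is slow (assuming the sysstat version of iostat; the exact column names differ between sysstat versions):

# Extended per-device statistics in MB/s, refreshed every second.
# Watch %util and the write-throughput column while the copy stalls.
iostat -x -m -d 1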

@Kekskruemel
Author

Hi,

thx for the fast answer. The drive is a Samsung HD204UI.

%util says 100.00, so it's the drive, not the controller?

I am planning to buy new drives. Which feature is required here to fix the problem?

Thx!

@Aikhjarto

The Samsung HD204UI is known for its long access times.
Long access times significantly reduce throughput when you have lots of small files.

@rfjakob
Owner

rfjakob commented Nov 21, 2017

I wonder why it is fast without gocryptfs, though. How do you copy the files?

NFS -> gocryptfs -> ext4 -> Samsung HD?

@Kekskruemel
Author

Tested via a local copy to the same disk with rsync, and via an NFS share.

Both are fast if I copy into a non-encrypted folder, and both have slowdowns with the encrypted one.

I use

@Kekskruemel
Author

btrfs.

Sorry, writing on my phone xD

@rfjakob
Owner

rfjakob commented Nov 21, 2017

No problem :)

I have seen slowdowns with Btrfs before. Can you try mounting with

-noprealloc

This fixed the issue at #63

@Kekskruemel
Author

./gocryptfs -noprealloc cipher plain

Same problem :/ I can switch the filesystem on one disk for testing.

@rfjakob
Owner

rfjakob commented Nov 21, 2017

Hmm. gocryptfs uses 128 KiB writes (the kernel limit for FUSE). Can you compare with

dd if=/dev/zero of=foo bs=128k

?
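
A sketch of running the same test on both paths so the numbers are directly comparable (the mount points are placeholders; conv=fsync makes dd flush the file before it reports, so the page cache does not inflate the result):

# ~5 GiB write into the plain Btrfs directory (placeholder path)
dd if=/dev/zero of=/mnt/disk1/plain/ddtest bs=128k count=40000 conv=fsync
# The same write through the gocryptfs mount (placeholder path)
dd if=/dev/zero of=/mnt/disk1/crypt/ddtest bs=128k count=40000 conv=fsync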

@Kekskruemel
Author

dd if=/dev/zero of=foo bs=128k
^C48174+0 records in
48174+0 records out
6314262528 bytes (6.3 GB, 5.9 GiB) copied, 73.5896 s, 85.8 MB/s

@rfjakob
Owner

rfjakob commented Nov 21, 2017

Is it stable at 85 MB/s? You can see it in iostat; it has a write speed column. Maybe use "-m" to get MB/s instead of kB/s.

@Kekskruemel
Author

Kekskruemel commented Nov 21, 2017

Ok, something new.

If I use rsync --progress to copy the source file into the directory and watch the speed, there is something very interesting.

I can see high MB/s in iostat when rsync shows a low speed, and low MB/s in iostat when rsync shows a high speed. Is this caching in some way?

md1 also shows some high write speeds. Isn't this some kind of software RAID? (Sorry, Linux noob.)

Summary:

Local disk copy into a non-encrypted directory: iostat shows a stable 80 MB/s.
dd if=/dev/zero of=foo bs=128k: iostat shows a stable 75 MB/s, with a 1-second slowdown to 35, back up to 100, and then 75 again, which is fine.

Local copy into a gocryptfs directory: I get the situation explained in this post.

iostat shows 75 MB/s while rsync shows 200 kB/s; after some time iostat drops to 4 MB/s for 7 seconds while rsync displays 80 MB/s, and so on.

@rfjakob
Owner

rfjakob commented Nov 21, 2017

Yes, this seems to be writeback caching.

When you run the dd test on gocryptfs (let it run for a minute or two), what average speed do you get?
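
A sketch for making the writeback effect visible while the copy runs (purely illustrative; values in /proc/meminfo are reported in kB):

# Dirty = data waiting in RAM, Writeback = data currently being flushed to disk
watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'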

@Kekskruemel
Author

Hmm, unRAID 6.4.0-rc13 fixed the problem for me.

They changed something with the buffers in the betas. Maybe this fixed the problem.

Improved shfs/mover (-rc1)

The LimeTech user share file system (shfs) has been improved in two areas. First, we now make use of FUSE read_buf/write_buf methods. This should result in significant throughput increases. Second, the mover script/move program no longer uses rsync to move files/directories between the cache pool and the parity array. Instead the move program invokes a new shfs ioctl() call. This should result in complete preservation of all metadata including atime and mtime.

@rfjakob
Owner

rfjakob commented Nov 28, 2017

Good to hear that it's fixed, closing the ticket.

@rfjakob rfjakob closed this as completed Nov 28, 2017
@krim404

krim404 commented Feb 22, 2019

EDIT: created new issue from this one.

It seems like this issue was not fixed. He just stopped relying on rsync.
I have the same issue: rsync extremely slows down the whole machine.

See #369
