
rtorrent io read heavy #409

Open
colinhd8 opened this issue Mar 19, 2016 · 4 comments

Comments

@colinhd8

  1. While rtorrent shows an upload speed of 10MB/s, DSM shows 100MB/s. (I set up a Synology NAS and share the disk to rtorrent over NFS; I confirm it is MB/s, not Mb/s, and no other machine or process was reading the files shared by the NAS.)
    rtorrent's download speed is 8MB/s, while DSM shows the download as 20MB/s.
    It seems the rate at which files are read is about 10x the actual upload speed.
  2. I set the max memory usage to 2G, but rtorrent only uses about 700MB. Is there another way to cache the read and write operations?
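
For reference, this is roughly how the 2G limit is set in my .rtorrent.rc (a sketch from memory, so the exact line may differ slightly):

# cap rtorrent's piece cache at the 2G mentioned above
pieces.memory.max.set = 2048M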

My env:
OS: Fedora 22, 16G mem
rtorrent: 0.9.6/0.13.6, 10 torrents (200G)

DSM: 5.2-5644

@chros73
Contributor

chros73 commented Mar 21, 2016

I also noticed point 2. I'm not sure if it's a good thing or a bug :)

@zottelbeyer

I'm having a similar problem. My setup is a FreeNAS NFS share which is accessed by the rtorrent server.

When downloading, the amount written to the NAS seems reasonable.
When uploading, the amount read is about 10 times as high.

See the following nethogs screenshot taken on the server running rtorrent:
[image: Nethogs stats]

When looking at nfsstat (also on the client side, i.e. the machine running rtorrent), you can see a massive amount of statfs traffic:

 root@RUST ~ # nfsstat
Server rpc stats:
calls      badcalls   badclnt    badauth    xdrcall
0          0          0          0          0       

Client rpc stats:
calls      retrans    authrefrsh
971319875   56         971319889

Client nfs v4:
null         read         write        commit       open         open_conf    
0         0% 265620422 27% 6279114   0% 23805     0% 258774    0% 56        0% 
open_noat    open_dgrd    close        setattr      fsinfo       renew        
1541      0% 0         0% 258476    0% 444       0% 12        0% 1500      0% 
setclntid    confirm      lock         lockt        locku        access       
9         0% 9         0% 0         0% 0         0% 0         0% 11082927  1% 
getattr      lookup       lookup_root  remove       rename       link         
112458575 11% 27268834  2% 3         0% 2646      0% 30        0% 0         0% 
symlink      create       pathconf     statfs       readlink     readdir      
0         0% 15        0% 9         0% 548046392 56% 11        0% 16262     0% 
server_caps  delegreturn  getacl       setacl       fs_locations rel_lkowner  
21        0% 0         0% 0         0% 0         0% 0         0% 0         0% 
secinfo      exchange_id  create_ses   destroy_ses  sequence     get_lease_t  
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
reclaim_comp layoutget    getdevinfo   layoutcommit layoutreturn getdevlist   
0         0% 0         0% 0         0% 0         0% 0         0% 0         0% 
(null)       
0         0%

@rakshasa
Owner

If you are compiling for a 64-bit CPU arch, try setting the max memory much higher (past the 3/4GB barrier imposed by 32-bit address spaces). I haven't looked at the max memory default setting, or the autoconf scripts that decide it, in many years; they assume the binary will be compiled as 32-bit.
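
Something along these lines in .rtorrent.rc should do it (a sketch; the value is only an illustration for a 64-bit build):

# raise the piece cache limit well past the 32-bit-oriented default (example value)
pieces.memory.max.set = 6144M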

The read issues can also probably be alleviated by setting higher socket buffer sizes, e.g.:

network.receive_buffer.size.set = 4M
network.send_buffer.size.set    = 12M

@Ondjultomte

Yes, this is a "known" problem with excessive reads of roughly 10x the upload traffic. It really taxes the I/O subsystem at today's common high connection speeds, and it will only get worse as connection speeds increase.
