Very slow read benchmark results when direct is not enabled #495

Closed
jordeu opened this issue Sep 1, 2023 · 2 comments
Labels: bug (Something isn't working)

Comments

jordeu commented Sep 1, 2023

Mountpoint for Amazon S3 version

mount-s3 1.0.0

AWS Region

eu-west-1

Describe the running environment

I'm conducting these tests on an r6id.8xlarge EC2 instance using AMI ami-0c5cd894db560d66c; both the bucket and the instance are in the same region.

What happened?

While running the fio benchmarks from your repository, I observed a significant discrepancy between the results of seq_read_direct.fio and seq_read.fio. With direct mode enabled, I achieve a bandwidth of ~1300 MB/s; without direct mode, however, the bandwidth drops sharply to ~3 MB/s.
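For reference, the job parameters that fio echoes in the logs below (rw=read, bs=256k, ioengine=sync, iodepth=1, ~30 s runtime) suggest the two job files differ only in the direct flag. This is a sketch reconstructed from that output, not necessarily the repository's exact contents:

# seq_read_direct.fio: sketch reconstructed from the fio output below;
# the repository's actual job file may differ
[sequential_read_direct_io]
rw=read
bs=256k
ioengine=sync
iodepth=1
# direct=1 opens the file with O_DIRECT, bypassing the page cache
# and kernel readahead
direct=1
time_based
runtime=30s

# seq_read.fio is presumably identical except with direct=0 (buffered I/O)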

Furthermore, simple transfer benchmarks using tools like dd, cp, and cat also yield slow results, well below the expected ~1300 MB/s.
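For example, here is a minimal buffered vs. direct comparison with dd (assuming GNU coreutils dd; the file path matches the benchmark file used in the fio runs below, and dd reports the achieved throughput when it finishes):

# buffered read: goes through the page cache and kernel readahead
dd if=/mount-s3/workdir/mountpoint-benchmark/bench_file_name of=/dev/null bs=256k

# direct read: iflag=direct requests O_DIRECT and bypasses the page cache
dd if=/mount-s3/workdir/mountpoint-benchmark/bench_file_name of=/dev/null bs=256k iflag=direct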

To mount the bucket, I'm using the following command:

mount-s3 <BUCKET> <PATH> --allow-delete --allow-root --allow-other --dir-mode=0777

Is there a misstep in my approach? Do I need to specify any additional parameters? Or could this be a bug that only shows up when direct mode is not used?

Relevant log output

$ fio --filename=/mount-s3/workdir/mountpoint-benchmark/bench_file_name mountpoint-s3/scripts/fio/read/seq_read_direct.fio
sequential_read_direct_io: (g=0): rw=read, bs=256K-256K/256K-256K/256K-256K, ioengine=sync, iodepth=1
fio-2.14
Starting 1 process
Jobs: 1 (f=1): [R(1)] [100.0% done] [1479MB/0KB/0KB /s] [5917/0/0 iops] [eta 00m:00s]
sequential_read_direct_io: (groupid=0, jobs=1): err= 0: pid=19357: Fri Sep  1 08:08:36 2023
  read : io=38933MB, bw=1297.8MB/s, iops=5190, runt= 30001msec
    clat (usec): min=68, max=288157, avg=191.82, stdev=3151.40
     lat (usec): min=68, max=288157, avg=191.87, stdev=3151.40
    clat percentiles (usec):
     |  1.00th=[   88],  5.00th=[   94], 10.00th=[   97], 20.00th=[  100],
     | 30.00th=[  102], 40.00th=[  105], 50.00th=[  108], 60.00th=[  111],
     | 70.00th=[  114], 80.00th=[  118], 90.00th=[  124], 95.00th=[  131],
     | 99.00th=[  151], 99.50th=[  181], 99.90th=[16064], 99.95th=[58112],
     | 99.99th=[187392]
    lat (usec) : 100=18.74%, 250=81.03%, 500=0.02%, 750=0.01%, 1000=0.02%
    lat (msec) : 2=0.04%, 4=0.02%, 10=0.01%, 20=0.02%, 50=0.04%
    lat (msec) : 100=0.03%, 250=0.02%, 500=0.01%
  cpu          : usr=0.60%, sys=14.61%, ctx=311466, majf=0, minf=75
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=155730/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=38933MB, aggrb=1297.8MB/s, minb=1297.8MB/s, maxb=1297.8MB/s, mint=30001msec, maxt=30001msec


$ fio --filename=/mount-s3/workdir/mountpoint-benchmark/bench_file_name mountpoint-s3/scripts/fio/read/seq_read.fio
sequential_read: (g=0): rw=read, bs=256K-256K/256K-256K/256K-256K, ioengine=sync, iodepth=1
fio-2.14
Starting 1 process
Jobs: 1 (f=1): [R(1)] [100.0% done] [3584KB/0KB/0KB /s] [14/0/0 iops] [eta 00m:00s]
sequential_read: (groupid=0, jobs=1): err= 0: pid=20833: Fri Sep  1 08:12:31 2023
  read : io=98304KB, bw=3273.9KB/s, iops=12, runt= 30027msec
    clat (usec): min=55, max=362468, avg=78032.33, stdev=67842.46
     lat (usec): min=55, max=362468, avg=78032.47, stdev=67842.42
    clat percentiles (usec):
     |  1.00th=[   57],  5.00th=[   67], 10.00th=[   77], 20.00th=[   92],
     | 30.00th=[39168], 40.00th=[55552], 50.00th=[66048], 60.00th=[90624],
     | 70.00th=[116224], 80.00th=[132096], 90.00th=[158720], 95.00th=[199680],
     | 99.00th=[284672], 99.50th=[309248], 99.90th=[362496], 99.95th=[362496],
     | 99.99th=[362496]
    lat (usec) : 100=25.00%, 250=2.60%
    lat (msec) : 50=5.47%, 100=30.21%, 250=34.90%, 500=1.82%
  cpu          : usr=0.00%, sys=0.09%, ctx=504, majf=0, minf=74
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=98304KB, aggrb=3273KB/s, minb=3273KB/s, maxb=3273KB/s, mint=30027msec, maxt=30027msec
jordeu added the bug label Sep 1, 2023

jordeu (Author) commented Sep 1, 2023

Apologies, I've now observed that this is most likely related to the previously opened issue #488.

Please feel free to close this one in favor of that if you deem it appropriate.

monthonk (Contributor) commented Sep 1, 2023

Hey, thanks for reporting the issue.

I have the same suspicion: this happens only when kernel readahead is involved (non-direct I/O mode). Let's close this issue and track it in #488.
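For anyone digging further: on Linux, the kernel readahead window for a FUSE mount can be inspected through its backing device info in sysfs. A sketch, assuming the mount is at /mount-s3 as in the fio commands above; the 0:51 device id is hypothetical, so look up the real one for your mount:

# find the MAJ:MIN device id of the mountpoint
findmnt -n -o MAJ:MIN /mount-s3

# inspect the readahead window (in KiB) for that device, e.g. 0:51
cat /sys/class/bdi/0:51/read_ahead_kb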

monthonk closed this as completed Sep 1, 2023