
Fix the sample job file and execution commands #1651

Closed
agent/bench-scripts/pbench-fio.md: 21 changes (11 additions, 10 deletions)
@@ -199,16 +199,17 @@ Now format the filesystems for each cinder volume and mount them:
Then you construct a fio job file for your initial sequential tests, and this will also create the files for the subsequent tests. Certain parameters have to be specified with the string `$@` because pbench-fio wants to fill them in. It might look something like this:

[global]
-# size of each FIO file (in MB)
-size=$@
+# size of each FIO file (in MB).
+# size = ( size of /dev/vdb ) / numjobs
+size=1GiB
Member

This seems incorrect, given the comment on line 199.

Author

The size is not getting filled in either.
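
As a rough sanity check of the new sizing comment, the arithmetic can be confirmed on a guest. This is only a sketch, assuming a hypothetical 4 GiB /dev/vdb and the numjobs=4 used elsewhere in this PR:

    # Hypothetical check of: size = ( size of /dev/vdb ) / numjobs
    NUMJOBS=4
    BYTES=$(blockdev --getsize64 /dev/vdb)   # device size in bytes (needs root)
    echo "per-job size: $(( BYTES / NUMJOBS / 1024 / 1024 )) MiB"
    # e.g. a 4 GiB device split across 4 jobs gives 1024 MiB, matching size=1GiB above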

# do not use gettimeofday, much too expensive for KVM guests
clocksource=clock_gettime
# give fio workers some time to get launched on remote hosts
startdelay=5
# files accessed by fio are in this directory
-directory=$@
-# write fio latency logs in /var/tmp with "fio" prefix
-write_lat_log=/var/tmp/fio
+directory=/mnt/fio/files
Member

Not sure about this change either, see our basic job templates:

The fio-shared-fs.job template uses directory with the $target environment variable reference that allows us to pass that from the command line.

Author

With the command line I've used, this parameter is not getting filled in. Also, the pdsh commands above assume the directory path is fixed.
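
For readers following along, the pattern the reviewer describes looks roughly like the sketch below. This only illustrates fio's environment-variable expansion in job files; it is not the actual contents of fio-shared-fs.job, and how pbench-fio exports the target value is assumed from the comment above:

    # Illustrative job file that defers the directory to an environment variable.
    printf '%s\n' '[global]' 'directory=${target}' '' '[seqwrite]' 'rw=write' 'size=4m' \
        > /tmp/example.job

    # fio expands ${target} from the environment when it parses the job file;
    # --parse-only validates the job file without running any I/O.
    target=/tmp fio --parse-only /tmp/example.job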

+# write fio latency logs
+write_lat_log=fio
Member

Why do we need to change this file name and location without also changing the comment?

Author

With the absolute path provided, the /var/lib/pbench-agent/fio__2020.05.08T08.24.48/1-write-4KiB/sample1/result.json is not created.
Logs:
running fio job: /var/lib/pbench-agent/fio__2020.05.08T08.24.48/1-write-4096KiB/fio.job (sample1)
Warning: ALREADY_ENABLED: '8765:tcp' already in 'public'
Warning: ALREADY_ENABLED: '8765:tcp' already in 'public'
fio job complete
fio-postprocess: could not find any result files to process, exiting
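
For context on the filenames involved: fio derives its latency-log names from the write_lat_log prefix, so a bare prefix versus an absolute path decides where the logs land. A small sketch; whether pbench-fio's postprocessing expects to find them in the sample directory is the question this thread is working through:

    # With write_lat_log=fio, fio writes per-job logs into its working directory:
    #   fio_lat.1.log  fio_slat.1.log  fio_clat.1.log  (and so on per job number)
    # With write_lat_log=/var/tmp/fio they would land under /var/tmp instead.
    ls -l fio_*lat.*.log /var/tmp/fio_*lat.*.log 2>/dev/null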

# write a record to latency log once per second
log_avg_msec=1000
# write fio histogram logs in /var/tmp/ with "fio" prefix
@@ -231,13 +231,13 @@ Then you construct a fio job file for your initial sequential tests, …
# do not create just one file at a time
create_serialize=0

-And you write it to a file named fio-sequential.job, then run it with a command like this one, which launches fio on 1K guests with json output format.
+And you write it to a file named fio-sequential.job, then run it with a command like this one, which launches 4 fio jobs on each guest with json output format.

-/usr/local/bin/fio --client-file=vms.list --pre-iteration-script=drop-cache.sh \
-    --rw=write,read -b 4,128,1024 -d /mnt/fio/files --max-jobs=1024 \
-    --output-format=json fio-sequential.job
+pbench-fio --client-file=vms.list --pre-iteration-script=drop-cache.sh \
+    -t write,read -b 4,128,1024 -d /mnt/fio/files --numjobs=4 \
+    --job-file fio-sequential.job
Member

Why make all these changes when the comments at line 234 and 240 don't match the new changes?

If the original line was supposed to work with the base fio program itself, then we should fix it. If it is intended to be run with the pbench-fio wrapper, as this new change suggests, we should consider updating the text surrounding the command to reflect that intention.

Author

fio doesn't take the parameters --client-file, --pre-iteration-script.
And pbench-fio does not take the parameters --output-format=json, --max-jobs, or --rw. As such, this command will work with neither fio nor pbench-fio. I'll change the text around it.
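
One way to double-check which of these options each tool actually accepts is to ask the tools themselves. A sketch, assuming both binaries are on PATH and that pbench-fio answers --help (which may vary by pbench version):

    # Command-line options plain fio advertises (e.g. --client, --max-jobs, --output-format):
    fio --help 2>&1 | grep -E -- '--(client|max-jobs|output-format)' || true
    # Options the pbench-fio wrapper documents (e.g. --client-file, --job-file, --numjobs):
    pbench-fio --help 2>&1 | grep -E -- '(client-file|pre-iteration-script|job-file|numjobs)' || true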


-This will write the files in parallel to the mount point. The sequential read test that follows can use the same job file. The **--max-jobs** parameter should match the count of the number of records in the vms.list file (FIXME: is --max-jobs still needed?).
+This will write the files in parallel to the mount point. The sequential read test that follows can use the same job file.

Since we are using buffered I/O, we can usually get away with using a small transfer size, since the kernel will do prefetching, but there are exceptions, and you may need to vary the **bs** parameter.
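
If prefetching turns out not to keep up, one way to vary the transfer size is simply to re-run with a few larger block sizes. The sketch below mirrors the pbench-fio invocation above and assumes its -b list (values in KiB) is what ends up driving fio's bs:

    pbench-fio --client-file=vms.list --pre-iteration-script=drop-cache.sh \
        -t read -b 64,256,1024 -d /mnt/fio/files --numjobs=4 \
        --job-file fio-sequential.job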
