
IO does not get accounted to the process that caused it #313

Closed
Rudd-O opened this issue Jul 6, 2011 · 2 comments
Rudd-O (Contributor) commented Jul 6, 2011

1. Open atop
2. Change to the disk I/O view
3. Start a disk-intensive process that reads and writes from a ZFS filesystem
4. See that the process does not get accounted for its disk usage

This has ramifications: the I/O scheduler probably does not take process disk priorities into account because of this, I/O-starved processes probably do not get their requests serviced, et cetera. And, of course, it makes it hard to diagnose performance issues.
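For illustration (not part of the original report), a minimal repro sketch assuming a ZFS dataset mounted at the placeholder path `/tank/iotest`: it writes 64 MiB and then dumps this process's own `/proc/self/io` counters, where `write_bytes` should reflect the I/O but stayed near zero on affected systems.

```
/*
 * Repro sketch (illustrative only): write data to a file on a ZFS mount,
 * then print the kernel's per-task I/O accounting for this process.
 * /tank/iotest is a placeholder path; point it at any ZFS dataset.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tank/iotest";    /* placeholder ZFS path */
    static char buf[1 << 20];             /* 1 MiB buffer */
    memset(buf, 0xab, sizeof(buf));

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    for (int i = 0; i < 64; i++)          /* write 64 MiB total */
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            return 1;
        }
    fsync(fd);
    close(fd);

    /* Dump the per-task I/O counters; write_bytes should now be ~64 MiB. */
    FILE *io = fopen("/proc/self/io", "r");
    if (!io) { perror("fopen /proc/self/io"); return 1; }
    char line[128];
    while (fgets(line, sizeof(line), io))
        fputs(line, stdout);
    fclose(io);
    return 0;
}
```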

behlendorf (Contributor) commented:
That's not good. However, I would have expected atop to read these stats from /proc/[pid]/io which is updated properly on my machines for ZFS. Do you know what interface it's using to access these stats?
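For reference, a small sketch of that interface (an illustration, not atop's actual code): `/proc/<pid>/io` exposes the per-task counters, and `read_bytes`/`write_bytes` are the fields that were not being updated for ZFS-backed I/O.

```
/*
 * Sketch: read another process's per-task I/O counters from /proc/<pid>/io
 * and pick out read_bytes / write_bytes.  Usage: ./procio <pid>
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/io", argv[1]);

    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    char line[128];
    unsigned long long val;
    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "read_bytes: %llu", &val) == 1)
            printf("read_bytes  = %llu\n", val);
        else if (sscanf(line, "write_bytes: %llu", &val) == 1)
            printf("write_bytes = %llu\n", val);
    }
    fclose(f);
    return 0;
}
```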

Rudd-O (Contributor, Author) commented Jul 8, 2011

Afaik that one.

Sent from my Android phone with K-9 Mail. Please excuse my brevity.


behlendorf added a commit to behlendorf/zfs that referenced this issue Nov 15, 2013
Because ZFS bypasses the page cache we don't inherit per-task I/O
accounting for free.  However, the Linux kernel does provide helper
functions that allow us to perform our own accounting.  These are most
commonly used for direct I/O, which also bypasses the page cache, but
they can be used for the common read/write call paths as well.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#313
Issue openzfs#1275
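A minimal kernel-side sketch of that approach (an assumption based on the commit text, not the exact patch): the helpers in question are presumably task_io_account_read() and task_io_account_write() from <linux/task_io_accounting_ops.h>, called after each read or write completes so the bytes show up in /proc/<pid>/io and in tools like atop.

```
/*
 * Sketch (assumed approach): charge completed I/O to the current task so
 * per-task accounting works even though the page cache is bypassed.
 */
#include <linux/task_io_accounting_ops.h>

static void example_account_read(ssize_t bytes_done)
{
	if (bytes_done > 0)
		task_io_account_read(bytes_done);   /* adds to current->ioac.read_bytes */
}

static void example_account_write(ssize_t bytes_done)
{
	if (bytes_done > 0)
		task_io_account_write(bytes_done);  /* adds to current->ioac.write_bytes */
}
```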
unya pushed a commit to unya/zfs that referenced this issue Dec 13, 2013
Because ZFS bypasses the page cache we don't inherit per-task I/O
accounting for free.  However, the Linux kernel does provide helper
functions that allow us to perform our own accounting.  These are most
commonly used for direct I/O, which also bypasses the page cache, but
they can be used for the common read/write call paths as well.

Signed-off-by: Pavel Snajdr <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes openzfs#313
Closes openzfs#1275
mmaybee pushed a commit to mmaybee/openzfs that referenced this issue Apr 6, 2022
Reduce the output width of `zcache stats` by:
* Delimit columns with one space rather than two
* Remove `healed` column

Bonus change: Rename some column headers to be more consistent, e.g.
LOOKUPS / HITS / INSERTS

old: 202 columns
```
$ zcache stats -a
TIMESTAMP    CACHE-LOOKUP    --------INDEX-ACCESS--------     CACHE-HITS     CACHE-INSERT       INSERT-SOURCE        INSERT-DROPS   BUF-BYTES-USED       CACHE-OTHER          ALLOCATOR     ALLOCATOR-FREE
2022-03-24  count   bytes   pendch  entry$  chunk$   disk   count   ratio   count   bytes    read   write   spec-r  full-q  lkbusy  demand   spec   evicts  pending healed  alloc   avail   space   slabs
----------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------  ------
  06:54:05       0       0      0%      0%      0%      0%       0      0%       0       0      0%      0%      0%       0       0  43.5KB  3.50KB       0   33.0M       0  3.31TB   794GB   19.0%   19.0%
```

new: 171 columns
```
TIMESTAMP     LOOKUPS    --------INDEX-ACCESS-------  ----HITS---     INSERTS       INSERT-SOURCE     INSERT-DROPS   BUFFER-USED      OTHER       ALLOCATOR     AVAILABLE
2022-03-24  count  bytes pendch entry$ chunk$   disk  count  ratio  count  bytes   read  write spec-r full-q lkbusy demand   spec evicts pendch  alloc  avail  space  slabs
---------- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
  18:22:28    358  466KB   2.0%   0.3%     0%  97.8%    170  47.5%  18.2K 49.4MB   1.3%     0%  98.7%   126K      0 1.71MB  256MB      0  3.65M  305GB 26.0GB   7.9%   4.6%
```
mmaybee pushed a commit to mmaybee/openzfs that referenced this issue Apr 6, 2022
…penzfs#314)

If a zettacache insert fails due to block allocation failure (cache
full), it is counted as an insertion, and shows up in the `CACHE-INSERT`
columns of `zcache stats`.

This commit changes the accounting so that these failures show up as
`INSERT-DROPS alloc`.

Bonus changes:
* `INSERT-DROPS full-q` is renamed to `INSERT-DROPS buffer`, to match
the `BUF-BYTES-USED` column (soon to be `BUFFER-USED`, see openzfs#313).
* The same problem occurs if we hit the hard limit of pending
changes memory use.  Since this is expected to be very rare, rather
than adding another column of output, we account it with `INSERT-DROPS
alloc`.