
Significant performance issue on manually mounted NTFS partition #6985

Closed

suxpert opened this issue May 25, 2021 · 3 comments
suxpert commented May 25, 2021

Windows Build Number

Microsoft Windows [Version 10.0.21387.1]

WSL Version

  • [x] WSL 2
  • [ ] WSL 1

Kernel Version

5.10.16

Distro Version

Kali rolling 2021.1

Other Software

No response

Repro Steps

In #6984 I reported that mounting an NTFS partition using wsl --mount fails, but I can mount the partition manually in two steps:

  1. wsl --mount \\.\PHYSICALDRIVE0 --bare in PowerShell
  2. Open a WSL terminal and run sudo mount -t ntfs /dev/sdc2 /mount_point (how I identified the device is sketched below)
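
The device name /dev/sdc2 is simply what the partition shows up as on my machine; it may differ on yours. A quick way to find the right partition after the bare mount (commands only, output omitted):

$ lsblk -f                 # list attached block devices with their filesystem types
$ sudo blkid /dev/sdc2     # confirm the partition really is NTFS before mounting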

I did this because it has been reported that file IO between Windows and WSL (drvfs) is much slower than it should be, so I thought mounting the whole disk in WSL might improve performance.

But during my tests using dd, the performance of this manually mounted NTFS partition is far worse than drvfs.

Expected Behavior

I thought that mounting a physical disk directly in WSL should achieve at least the same performance as drvfs, if not better, because this way WSL can access the disk directly instead of going through 9p.

Actual Behavior

First I tried dd to shared memory; everything seemed OK:
$ dd if=/dev/zero of=/run/shm/largefile bs=1M count=5000

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 1.26603 s, 4.1 GB/s

Then the WSL filesystem (a vhdx file on an SSD); the first run was slower than the second run (which I did after deleting the file):
$ dd if=/dev/zero of=/tmp/largefile bs=1M count=5000

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 4.7826 s, 1.1 GB/s

$ dd if=/dev/zero of=$HOME/largefile bs=1M count=5000

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 2.11225 s, 2.5 GB/s

For mounted drives (drvfs), the performance was still acceptable on the SSD side:
$ dd if=/dev/zero of=/mnt/c/largefile bs=1M count=5000

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 5.07921 s, 1.0 GB/s

Where /mnt/c is auto-mounted in WSL:

drvfs on /mnt/c type 9p (rw,noatime,dirsync,aname=drvfs;path=C:;uid=1000;gid=1000;metadata;umask=22;fmask=11;symlinkroot=/mnt/,mmap,access=client,msize=262144,trans=virtio)

But for the HDD, the speed is much lower:
$ dd if=/dev/zero of=/mnt/d/largefile bs=1M count=5000

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 16.7523 s, 313 MB/s

Where /mnt/d is also auto-mounted in WSL:

drvfs on /mnt/d type 9p (rw,noatime,dirsync,aname=drvfs;path=D:;uid=1000;gid=1000;metadata;umask=22;fmask=11;symlinkroot=/mnt/,mmap,access=client,msize=262144,trans=virtio)
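
As an aside, these options come from WSL's automount. A drvfs mount can also be recreated by hand to experiment with options; a sketch (the option values here are just examples):

$ sudo umount /mnt/d
$ sudo mount -t drvfs D: /mnt/d -o metadata,uid=1000,gid=1000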

Last, the manually mounted NTFS partition (the same D:\ drive), mounted via:
C:\> wsl --mount \\.\PHYSICALDRIVE0 --bare
$ sudo mount -t ntfs /dev/sdc2 /tmp/disk
resulting in:

/dev/sdc2 on /tmp/disk type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)

Performance:
$ dd if=/dev/zero of=/tmp/disk/largefile bs=1M count=5000

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB, 4.9 GiB) copied, 95.7824 s, 54.7 MB/s
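
Note that the fuseblk type above means the partition is served by a FUSE driver (presumably ntfs-3g). I have not tried it, but ntfs-3g's big_writes mount option, which stops FUSE from splitting writes into 4 KB chunks, is often suggested for throughput problems like this and might be worth testing:

$ sudo mount -t ntfs -o big_writes /dev/sdc2 /tmp/disk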

Diagnostic Logs

No response

OneBlue (Collaborator) commented May 25, 2021

@suxpert: Thanks for sharing this.

Given that the wsl --mount drive is an HDD, it's expected to be slower than your SSD host drive (there's also an IO penalty since you're accessing it from a VM).

When writing to drvfs, a bunch of caches are in play on both the host and the guest, so if you're benchmarking I'd recommend writing much bigger files (> 10 GB).
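
Another way to keep the caches from flattering the numbers, whatever the file size, is to have dd itself flush before reporting. A sketch using the same paths as above; conv=fdatasync forces a flush to disk before dd prints its result, and oflag=direct bypasses the guest page cache where the filesystem supports O_DIRECT:

$ dd if=/dev/zero of=/mnt/d/largefile bs=1M count=5000 conv=fdatasync
$ dd if=/dev/zero of=/mnt/d/largefile bs=1M count=5000 oflag=direct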

suxpert (Author) commented May 26, 2021

@OneBlue I agree that an HDD should be slower than an SSD, but I don't think a write speed of 54 MB/s is acceptable for an HDD, even one mounted in a VM. So I disagree with closing this issue and saying it is by design.

My computer has 32 GB of memory, so I tried a much larger file (~50 GB) to rule out caching; here is the result:

$ dd if=/dev/zero of=/mnt/d/largefile bs=1M count=50000

50000+0 records in
50000+0 records out
52428800000 bytes (52 GB, 49 GiB) copied, 375.05 s, 140 MB/s

Yes, slower than the previous 5 GB test, but still around 3 times faster than the manual mount with -t ntfs!

I also noticed in Task Manager that the disk's active time stays around 90% almost all the time with drvfs, but keeps climbing from 0% to 60% and dropping back to 0% over and over when manually mounted, so this must be an issue, not behavior that is by design.

OneBlue (Collaborator) commented May 26, 2021

@suxpert There are multiple things causing these different numbers: the underlying media (HDD vs SSD), the cost of passing the disk through to WSL2 (via SCSI), and the NTFS driver you're using (most likely ntfs-3g).

I can't say for sure which of those is the most impactful, but it's not something we can change from a WSL perspective, since WSL2 accesses the disk using the Hyper-V disk passthrough driver.

I suspect that accessing a lot of small files might be faster with wsl --mount though.
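
One rough way to test that, assuming hypothetical test directories on each mount: time the creation of many small files on both and compare, since per-file metadata operations are where 9p round trips tend to dominate:

$ mkdir /mnt/d/smalltest /tmp/disk/smalltest
$ time for i in $(seq 1 2000); do echo data > /mnt/d/smalltest/f$i; done
$ time for i in $(seq 1 2000); do echo data > /tmp/disk/smalltest/f$i; done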
