Why is VHDX so slow?


josh7777777

I've been trying to set up some virtualization, and my original understanding was that VHDX carried only a slight performance hit.
However, once I got to actually testing, it's more like a 50% or greater hit to performance! And every time I add a virtualization layer (e.g. a VHDX inside a Hyper-V guest that itself lives on a VHDX), it cuts the performance by another 50%.
I've done quite a bit of reading, and I can't see any obvious reason why the performance should be so bad. Here are some screenshots of my tests on the bare NVMe drive versus the mounted VHDX (which is on the same drive). I'm running the tests one at a time, so they're not affecting each other.
Regardless of what I do, the first test (sequential read/write) is almost always the same. However, the next three tests (random read/write) are about 2x worse.
I'm wondering if the reading I've done so far hasn't considered how damn fast these NVMe drives are, and VHDX just can't keep up.
These are Samsung 4TB 990 Pro drives.
I've tried a lot of different combinations, and the results are all about the same: ReFS instead of NTFS, exFAT, 64K clusters instead of 4K, striped Storage Spaces, parity, mirrored. The first (sequential IO) test is usually pretty close, and the next three (random IO) are 50% worse or more.

Drive-Ntfs-4k-Nvme-Peak.png

Vhdx-Ntfs-4k-Nvme-Peak.png
 
Windows Build/Version
Windows Server Datacenter 2022

My Computer

System One

  • OS
    Windows 11
    Computer type
    PC/Desktop
Hang on - you are comparing apples and oranges.

A vhdx file can be used like a normal drive, and many of us install Insider versions and create a dual-boot entry for the vhdx.

Running an OS as a dual-boot option from a vhdx file accesses your physical drives directly, with only a slight reduction in performance.

When you create a virtual machine, however, the VM interfaces with physical hardware through the VM's device drivers.

For example, inside the VM it looks like the VM is connected to the internet by Ethernet, but outside the VM you may actually be using Wi-Fi.

The same is true of hard drives, i.e. you are interfacing between an emulated drive in the VM and a physical drive.

I never nest vhdx files inside a VM as they take up VM RAM space - better to add them via the Hyper-V settings.

So your speed issue relates to the VM interface drivers.

Bear in mind, drive performance is only one part of the VM performance as well.

People's experiences vary but I find other VMs fare even worse.
Look for posts of mine where I compared the vms.
 

My Computer

System One

  • OS
    Windows 10 Pro + others in VHDs
    Computer type
    Laptop
    Manufacturer/Model
    ASUS Vivobook 14
    CPU
    I7
    Motherboard
    Yep, Laptop has one.
    Memory
    16 GB
    Graphics Card(s)
    Integrated Intel Iris XE
    Sound Card
    Realtek built in
    Monitor(s) Displays
    N/A
    Screen Resolution
    1920x1080
    Hard Drives
    1 TB Optane NVME SSD, 1 TB NVME SSD
    PSU
    Yep, got one
    Case
    Yep, got one
    Cooling
    Stella Artois
    Keyboard
    Built in
    Mouse
    Bluetooth , wired
    Internet Speed
    72 Mb/s :-(
    Browser
    Edge mostly
    Antivirus
    Defender
    Other Info
    TPM 2.0
I'm using the VHDX as a normal drive. There aren't any VMs involved at all in the two test result screenshots. (I mentioned VMs in my post, but most of my testing is done outside of them.) I'm just adding the VHDX as another drive/mount point rather than booting from it. Does mounting as a drive versus booting from a VHDX make a difference in performance? I would expect both to behave the same.

I'm wondering if maybe the VHDX causes the NVMe read/write cache to be bypassed.
 

Hmmmmm... I've run OSes on my physical machine from a .vhdx. I have Windows 11 set up as a .vhdx to test out my hardware and such on 11, while still running the actual machine on Windows 10.

My performance felt just fine... but I've also not run benchmarks to see any actual numbers. I could take a look at that later when I'm at home.

Does your performance feel terrible when running from the .VHDX file?
 

My Computers

System One System Two

  • OS
    Windows 11 Pro
    Computer type
    PC/Desktop
    Manufacturer/Model
    Beelink SEI8
    CPU
    Intel Core i5-8279u
    Motherboard
    AZW SEI
    Memory
    32GB DDR4 2666Mhz
    Graphics Card(s)
    Intel Iris Plus 655
    Sound Card
    Intel SST
    Monitor(s) Displays
    Asus ProArt PA278QV
    Screen Resolution
    2560x1440
    Hard Drives
    512GB NVMe
    PSU
    NA
    Case
    NA
    Cooling
    NA
    Keyboard
    NA
    Mouse
    NA
    Internet Speed
    500/50
    Browser
    Edge
    Antivirus
    Defender
    Other Info
    Mini PC used for testing Windows 11.
  • Operating System
    Windows 10 Pro
    Computer type
    PC/Desktop
    Manufacturer/Model
    Custom
    CPU
    Ryzen 9 5900x
    Motherboard
    Asus Rog Strix X570-E Gaming
    Memory
    64GB DDR4-3600
    Graphics card(s)
    EVGA GeForce 3080 FT3 Ultra
    Sound Card
    Onboard
    Monitor(s) Displays
    ASUS TUF Gaming VG27AQ. ASUS ProArt Display PA278QV 27” WQHD
    Screen Resolution
    2560x1440
    Hard Drives
    2TB WD SN850 PCI-E Gen 4 NVMe
    2TB Sandisk Ultra 2.5" SATA SSD
    PSU
    Seasonic Focus 850
    Case
    Fractal Meshify S2 in White
    Cooling
    Dark Rock Pro CPU cooler, 3 x 140mm case fans
    Mouse
    Logitech G9 Laser Mouse
    Keyboard
    Corsiar K65 RGB Lux
    Internet Speed
    500/50
    Browser
    Chrome
    Antivirus
    Defender.
> Does your performance feel terrible when running from the .VHDX file?
I haven't done enough random IO work to be able to tell. I'm just trying to set up the environment at this point. I'm relying on the performance testing tool to tell me. :)

I just started playing around with the "diskspd.exe" performance measuring tool, and that's giving me results closer to what I expect. The VHDX is showing as roughly 10% slower... not 50%. I'm not sure if I'm using the tool correctly yet, though. (Left is the physical drive, right is the mounted VHDX.)

1709233746598.png
 

> I'm using the VHDX as a normal drive. There aren't any VMs involved at all with the two test result screenshots... I'm wondering if maybe the VHDX causes the NVMe read/write cache to be bypassed.
If you are using the vhdx on the host, I cannot see why you get such a performance reduction. I have found the loss of performance to be less than 5% at most.

One thing to be aware of, a vhdx file needs spare space like any drive to avoid fragmentation.

If your vhdx file is on an hdd rather than an ssd, that could lead to a reduction in performance but 50% is excessive.

A VHDX mounted as a host drive uses the NVMe drivers, so the only performance loss is the time it takes to translate between data in the vhdx file and the physical layout. This is normally very low.
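That translation step can be sketched with a toy model. This is my simplification purely for illustration: the real VHDX format has headers, sector bitmaps, logs, and more, and the 32 MiB figure is (as far as I know) the default block size for a dynamic VHDX.

```python
# Toy sketch of why a host-mounted dynamic vhdx still costs a little:
# the virtual disk is carved into fixed-size blocks, and a Block
# Allocation Table (BAT) maps each virtual block to an offset inside
# the .vhdx file. Every I/O does one extra lookup, and the first write
# to a new region has to allocate a block.

BLOCK = 32 * 1024 * 1024  # assumed default dynamic-VHDX block size

class ToyVhdx:
    def __init__(self):
        self.bat = {}        # virtual block index -> offset in the file
        self.next_free = 0   # where the next allocated block will land

    def translate(self, virtual_offset):
        index = virtual_offset // BLOCK
        if index not in self.bat:        # first touch allocates a block
            self.bat[index] = self.next_free
            self.next_free += BLOCK
        return self.bat[index] + virtual_offset % BLOCK

v = ToyVhdx()
print(v.translate(0))                  # first block lands at the start
print(v.translate(5 * BLOCK + 4096))   # far-away write: second allocated block + 4096
```

The lookup itself is cheap, which is consistent with the "normally very low" loss on a host mount; it's the layering and caching behavior around it that benchmarks tend to expose.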

Something else is wrong if performance on a host mounted vhdx file is causing a 50% reduction in speed.
 

There is indeed an expected performance drop, but it varies, and the small-block numbers may need some interpretation in practical cases.

My 2 cents after seeing the screenshots, which are very clear.
The big chunks (SEQ1M) are read/written at similar speeds. No significant drop there.
It's the smaller bits (RND4K) that show the drop in performance.

From the host OS's perspective, the VHDX is a file on the physical drive.
When you read/write inside that file you get extra overhead; there is also file fragmentation to consider, and it's also possible that antivirus software scans the VHDX file when it is read/written.
 

My Computers

System One System Two

  • OS
    Windows 11 Pro
    Computer type
    PC/Desktop
    Manufacturer/Model
    Intel NUC
    CPU
    i3 8109U
    Motherboard
    Intel
    Memory
    16GB DDR4 @2400
    Graphics Card(s)
    Intel Iris Plus Graphics 655
    Sound Card
    Intel / Realtek HD Audio
    Monitor(s) Displays
    LG-32ML600M
    Screen Resolution
    1920x1080
    Hard Drives
    Intel SSD 250GB + Samsung QVO SSD 1TB
    PSU
    Adapter
    Cooling
    The usual NUC airflow
    Keyboard
    Logitech Orion G610
    Mouse
    SteelSeries Rival 100 Red
    Internet Speed
    Good enough
    Browser
    Chromium, Edge, Firefox
    Antivirus
    Windows Defender
  • Operating System
    CentOS 9 Stream / Alma / Rocky / Fedora
    Computer type
    Laptop
    Manufacturer/Model
    TOSHIBA
    CPU
    Intel i7 4800MQ
    Motherboard
    TOSHIBA
    Memory
    32GB DDR3 @1600
    Graphics card(s)
    NVIDIA Quadro K2100M
    Sound Card
    Realtek HD Audio
    Monitor(s) Displays
    Built-in
    Screen Resolution
    1920x1080
Thanks for the input.
Fragmentation isn't an issue. I either choose fixed, or if it's dynamic, the drive the vhdx is on is nearly empty. And the test file is like 1 GB out of 4 TB. Maybe 2% fragmentation max.

I really think cache is a big part of it. I found out CrystalDiskMark actually uses diskspd in the background, and using diskspd directly you can disable caching in the tests. That brings the numbers closer together. I'd love to know why cache would work differently between the two.
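To illustrate the caching point, here's a rough cross-platform sketch (my own toy, not what CrystalDiskMark or diskspd do internally): the same stream of 4 KiB writes, timed once with the OS write cache in play and once forcing every write to stable storage.

```python
# Time 4 KiB writes two ways: buffered (OS write cache absorbs them)
# versus fsync'd after every write (each write must reach the disk).
import os
import tempfile
import time

def mb_per_s(path, sync, total_kib=4096, block=4096):
    """Write total_kib KiB in block-byte chunks; return MB/s."""
    buf = os.urandom(block)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_kib * 1024 // block):
            f.write(buf)
            if sync:                      # defeat the OS write cache
                f.flush()
                os.fsync(f.fileno())
    return (total_kib / 1024) / (time.perf_counter() - start)

with tempfile.TemporaryDirectory() as d:
    cached = mb_per_s(os.path.join(d, "cached.bin"), sync=False)
    synced = mb_per_s(os.path.join(d, "synced.bin"), sync=True)
    print(f"cached: {cached:.0f} MB/s   synced: {synced:.0f} MB/s")
```

On the actual machines you'd compare the physical drive against the mounted VHDX with diskspd itself; this just shows how much of a "fast" number can be the cache rather than the disk.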

Making the cluster size of the vhdx match that of the drive hosting it seems to help a little.

As a tangent: I just did some Storage Spaces tests, and when using Storage Spaces, making the vhdx cluster size a multiple of the Storage Spaces interleave (slab) size times the column count makes a BIG difference (in my Storage Spaces tests). E.g. if the interleave size is 16 KB, with 2 columns in a striped space, then format the vhdx to 32 KB.
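A quick sanity-check helper for that rule (my own naming, using the numbers from the example above):

```python
# True when the vhdx cluster size is a whole multiple of the stripe
# width (interleave x columns) of the underlying Storage Space.
KIB = 1024

def vhdx_cluster_aligned(vhdx_cluster, interleave, columns):
    return vhdx_cluster % (interleave * columns) == 0

print(vhdx_cluster_aligned(32 * KIB, 16 * KIB, 2))  # True: 16K x 2 = 32K
print(vhdx_cluster_aligned(4 * KIB, 16 * KIB, 2))   # False: smaller than one stripe
```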

I'll check the anti-virus idea. I hadn't thought it'd tax a physical drive less than a vhdx drive, but maybe. I know Windows 11 just added a special Dev Drive VHDX mode that has the antivirus scan the vhdx asynchronously to speed it up.
 

Good point on the cache.
> I'll check the anti-virus idea. I hadn't thought that'd tax a physical drive less than a vhdx drive, but maybe.
It was just an idea: if you use Defender, for instance, you can add an exclusion for the folder where you keep the VHDX files, or for the VHDX file itself. That way it won't have to scan the file; the worst case would be a scan kicking in during a benchmark or heavy usage.
 

Hi there
Not sure why your Hyper-V should run so slowly -- you must have got some Windows stuff hosed up, e.g. contention on paging / other system files, a poor RAID setup (if you are using one), too much load on the host while the VM is active, too much activity in the VM, etc.

In general, CPU power isn't an issue on modern machines, nor should RAM be either. Ever since PCs came into popular use many years ago, the biggest cause of poor performance has generally been the disk I/O subsystem, followed by poor / overloaded graphics. This was true even on older IBM mainframes running systems like MVS/SP: poor disk I/O and bad physical blocking could slow one of those machines to a crawl. Any sort of NVMe drive on a PC should be able to fly along -- so I suspect something is seriously wrong with your Windows installation. There's only a tiny (it shouldn't even be noticeable) overhead in using vhdx files.

Hyper-V beats the socks off VMware or VirtualBox if the guest(s) are set up properly.

Running Win 10, Win 11, and Linux guests under Hyper-V on a Win 11 Pro Canary-build host on any decently spec'd machine should cause no issues at all, even running all three concurrently.

Cheers
jimbo
 

My Computer

System One

  • OS
    Windows XP,7,10,11 Linux Arch Linux
    Computer type
    PC/Desktop
    CPU
    2 X Intel i7
Hi Jimbo,
Thanks for the response. This is a pretty decent machine, and I'm just designing the virtualization environment now, so there's barely anything running other than my tests and very basic fresh-install Windows stuff.

12 x 4tb Samsung 990 pro
512 GB ddr5 ram
RTX 4090
Xeon w9-3475x with a billion cores

HOWEVER
After my last comment, I recreated my entire environment on Windows Server Datacenter vNext (aka 2025) instead of 2022. My results are considerably different in this environment.

And I think I can probably retract the "VHDX is slow" comment, as long as all the numbers in all the layers are set up correctly. I'm actually seeing roughly 0% loss in most of my tests now.

Here are some of my unprofessional conclusions:

1. Don't use very small cluster (allocation unit) sizes. When you get down to 4K, that's when VHDX starts to show performance loss. Here's an example. The numbers match until the last test, which is 4K writes.

Physical disk using NTFS 4k:

1709820982944.png

VHDX using NTFS 4k on the physical 4k disk:
1709821111025.png
For the above, the cluster size on the VHDX doesn't seem to change things much. I tried as high as 256 KB NTFS clusters on the VHDX and it's close to the same.

However, then when I use higher sizes, I have better results:

Physical Disk using NTFS 128KB:

1709821510191.png

VHDX using NTFS 128k on the physical NTFS 128k disk:
1709821562312.png


The last test still has a slight drop, but everything else looks great.

However, the news is not quite as great once I try to use that VHDX inside a VM. It's still good, but I get the same drop on the 4K test.

Hyper-V drive using NTFS 128k which is actually a VHDX on the physical NTFS 128k disk:
1709822180042.png


I'm OK with this because I'm hoping that when you get down to 4K, the reads/writes are small enough that caching can help out. I know a big part of Storage Spaces Direct is adding a caching layer.

Just a guess, but I'm wondering if the Hyper-V VM with dynamic memory enabled doesn't have enough available memory to use for caching on that last test. It's down to a pretty low amount of RAM, even though I told it that it could go up to 1 TB and it has 16 GB on boot. It also says the RAM is compressed, which doesn't seem good:
1709822396756.png





2. Use the same cluster sizes all the way up/down the chain (e.g. make the physical disk, the vhdx, and possibly the MS Storage Spaces stripe size all the same).

I believe this means I'll make everything 128 KB or larger. This is because the smallest Microsoft Storage Spaces interleave size is 16 KB, and I want 8 disks in the space, so 16 KB x 8 disks = 128 KB. So if you had 3 disks in Storage Spaces and were using parity mode, you could use 16 KB x 2 = 32 KB cluster sizes everywhere (the parity disk doesn't count toward the cluster-size calculation).
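That arithmetic as a tiny helper (my own naming; it's just the multiplication described above, with parity disks excluded from the data-column count):

```python
# Cluster size = interleave x data columns; parity disks don't count.
KIB = 1024

def cluster_size_for_space(interleave, total_disks, parity_disks=0):
    data_columns = total_disks - parity_disks
    return interleave * data_columns

print(cluster_size_for_space(16 * KIB, 8) // KIB)                  # 128 (KB)
print(cluster_size_for_space(16 * KIB, 3, parity_disks=1) // KIB)  # 32 (KB)
```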


3. Don't use anything less than Windows Server 2025 / Windows 11 (which I think share the same base).

The screenshots explain themselves on this one, but I couldn't tell you why.
 

