Hyper-V guest slow disk access

  • Question

  • I have a Hyper-V cluster (2012) with a guest running Server 2012 acting as my file server.

    My 2 hosts are connected to a gigabit switch stack, which in turn is connected to my Dell MD3200i.

    My file server has a "data" disk that is a fixed VHD on the iSCSI storage, as is its OS disk (which is dynamic).

    I'm copying 200 GB right now and only seeing throughput of 8 MB/s.

    When I check performance on the file server, the disk queue is huge. But when I check the queue for C: on the host, which is where the cluster storage appears to sit, it has little to no queue.

    I also did a copy from the host and saw 58 MB/s, so I'm satisfied the network can carry the data.

    Switch CPU is at 5%

    Can anyone tell me what I'm doing wrong?

    Tuesday, August 6, 2013 8:53 PM

All replies

  • Does the same issue occur after a reboot?

    You could also boot the VM into Safe Mode to check whether the issue still exists.

    TechNet Subscriber Support in forum |If you have any feedback on our support, please contact tnmff@microsoft.com.

    Thursday, August 8, 2013 1:28 PM
  • Testing in Safe Mode can help confirm whether the issue is related to a third-party program or driver. If there is any progress, please let us know.

    Monday, August 26, 2013 2:41 AM
  • Hi!

    Have you found a solution? I have the same issue: the guest OS has very slow access to its fixed-size VHDX disks: about 25 MB/s read and 2 MB/s write (!!!).

    On the host, the speed is about 150-200 MB/s.

    Monday, August 26, 2013 12:52 PM
  • Hi,

    I am investigating some performance issues myself and have noticed that VMs with fixed-size VHDX files experience significant performance degradation. Using SQLIO as the test tool, 64K random blocks are written at 40-60 MB/s on fixed disks but 400-600 MB/s on dynamic disks. I can also reproduce this on demand by converting and compacting the fixed disks. I am trying to find out why we are experiencing this problem.

    I would also add that the behavior has been the same on an iSCSI CSV-based cluster and with SMB 3 share-hosted VMs. Any information on this would be great. Thanks.


    Monday, October 7, 2013 6:52 AM
  • Did you find an answer? I'm having the problem as well.

    Check out my CNC (and more) projects at http://www.backyardworkshop.com

    Friday, April 4, 2014 11:08 AM
  • No, I still have the issue, and it's bringing my network to a halt since it's my main file server that's affected.

    I do have a ticket open with Microsoft, though.

    Friday, April 4, 2014 11:11 AM
  • A Cluster Shared Volume is a mount point, not the same volume as C:. So checking the queue depth for C: didn't tell you anything about your iSCSI device. You'll have to find the physical disk that corresponds to the CSV, not to C:, if you want that data.

    I would suspect that your iSCSI disk isn't giving you the throughput you need.
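    To act on the advice above, the CSV's backing disk can be identified and watched directly instead of C:. A minimal sketch, assuming the Failover Clustering and performance counter cmdlets available on Server 2012; the counter paths are standard, but pick the PhysicalDisk instance matching your CSV's disk number:

    ```powershell
    # List CSVs and the mount path each one exposes under C:\ClusterStorage
    Get-ClusterSharedVolume |
        Select-Object Name, State,
            @{n='Path'; e={$_.SharedVolumeInfo.FriendlyVolumeName}}

    # Queue depth and latency per physical disk; the CSV's disk is the
    # instance whose number matches Disk Management, not the C: volume
    Get-Counter -Counter `
        '\PhysicalDisk(*)\Avg. Disk Queue Length',
        '\PhysicalDisk(*)\Avg. Disk sec/Transfer' `
        -SampleInterval 2 -MaxSamples 5
    ```

    If the queue is deep on the CSV's physical disk but not on C:, that points at the iSCSI path rather than the host's local storage.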

    Friday, April 4, 2014 8:35 PM
  • I found that changing my secondary hard drives on the VMs from IDE-attached VHDs to SCSI-attached VHDs made a big difference:

    IDE VHD: 5 MB/s

    SCSI fixed-size VHD: 15-20 MB/s

    SCSI dynamic VHD: 60-70 MB/s

    I have only done this on a few machines so far, but it's one for others to try.

    I'm working on trying it with more important servers next.
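    The IDE-to-SCSI move can be scripted with the Hyper-V cmdlets. A sketch only; the VM name "FS01", the controller locations, and the VHD path are examples to adapt, and the VM must be shut down first:

    ```powershell
    Stop-VM -Name 'FS01'

    # Detach the data disk from the IDE controller
    # (check the actual number/location with Get-VMHardDiskDrive first)
    Remove-VMHardDiskDrive -VMName 'FS01' -ControllerType IDE `
        -ControllerNumber 0 -ControllerLocation 1

    # Re-attach the same VHD on the virtual SCSI controller
    Add-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI `
        -Path 'C:\ClusterStorage\Volume1\FS01\data.vhdx'

    Start-VM -Name 'FS01'
    ```

    Note that on Generation 1 VMs the boot disk must stay on IDE; only data disks can move to SCSI.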

    Wednesday, April 16, 2014 1:49 PM
  • I have MD storage with SSD drives and am experiencing the same behavior.

    SCSI fixed-size VHDX: 500-1,000 IOPS
    SCSI dynamic VHDX: 40k IOPS

    It's pretty weird that dynamic disks perform better than fixed,
    and that fixed disks perform so poorly.

    Thursday, April 17, 2014 9:58 AM
  • I haven't checked the IOPS; can you tell me how to record that?
    Thursday, April 17, 2014 10:51 AM
  • I used SQLIO
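    For anyone repeating the test, a sketch of an SQLIO run matching the 64K random-write scenario mentioned earlier in the thread; the test file path, duration, and thread/queue counts are example values, so check them against the SQLIO readme before relying on the flags:

    ```powershell
    # -kW write test, -frandom random I/O, -b64 64K blocks,
    # -o8 eight outstanding I/Os, -t2 two threads, -s30 run 30 seconds,
    # -LS capture latency, -BN no buffering (hit the disk, not cache)
    .\sqlio.exe -kW -frandom -b64 -o8 -t2 -s30 -LS -BN D:\testfile.dat
    ```

    The output reports IOs/sec and MBs/sec directly. SQLIO has since been retired by Microsoft in favor of DiskSpd, which covers the same scenarios.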


    Thursday, April 17, 2014 11:34 AM
  • According to the Windows Server Performance Team blog, a fixed VHD performs better than a dynamic VHD in most scenarios, by roughly 10% to 15%, with the exception of 4K writes, where a fixed VHD performs significantly better.

    Hyper-V and VHD Performance - Dynamic vs. Fixed:


    That doesn't match your tests, so I think you missed something in your configuration. Read the following guide:

    Performance Tuning Guidelines for previous versions of Windows Server:



    If you found this post helpful, please give it a "Helpful" vote. If it answered your question, remember to mark it as an "Answer". This posting is provided "AS IS" with no warranties and confers no rights! Always test ANY suggestion in a test environment before implementing!

    Thursday, April 17, 2014 11:45 AM
  • Did you ever find a resolution to this issue? I'm having a similar issue: after I converted from VMware to Hyper-V, my disk performance severely decreased (with the same backend storage), and after switching from dynamic to fixed I get even worse performance.
    Friday, November 7, 2014 3:49 PM
  • I was having the same issue. I've installed an Intel S3610 SATA SSD in a dual-CPU Lenovo x3650 running Windows Server 2012 R2, and I created a new fixed SCSI VHDX on this drive for a Windows 10 Pro VM. My sustained reads and writes were perfectly fine (avg. 400 MB/s), but random 4K was really slow (10 MB/s read and 13 MB/s write). A dynamic VHDX did not help either, as expected. I also tried attaching the SSD directly to the VM as a pass-through disk, and performance was as bad as before, which made no sense. Running CrystalDiskMark on the SSD itself on the host gave me around 35 MB/s read and 70 MB/s write for random 4K.

    Then I figured I should check the power plan on the host, and it was set to Balanced. After changing it to High Performance (in Windows) I tested again in the virtual machine, and there it was: random reads went to 23 MB/s and writes to 51 MB/s.

    Hope this helps someone trying to optimise Hyper-V on a SSD!
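    The power-plan change above can be made from the command line. The GUID below is the built-in High Performance scheme on stock Windows installs; verify it on your host with `powercfg /list` first, since OEM images sometimes ship custom schemes:

    ```powershell
    # Switch the host to the High Performance power plan
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

    # Confirm the active scheme
    powercfg /getactivescheme
    ```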

    • Proposed as answer by Arjan M Thursday, October 5, 2017 4:41 PM
    Monday, March 27, 2017 11:38 AM