Windows 2012 R2, Hyper-V, iSCSI, Storage Performance.

    Question

  • Hey Team, 

So I have the following setup:

    1) 1x Windows 2012 R2 box with 4 HDDs (7.2k)

    2) RAID 10 configured

    3) No SSD caching solution at the moment; I'm working on fixing my Intel CAS setup.

    4) VMs boot off an iSCSI volume.

    5) About 8 VMs running, all Exchange, plus 1 Brightmail messaging gateway (Symantec).

    Here is the problem: 

Terrible VM performance. When I check my VMs, I see the following for disk response time:

    Read speed: 2.0 MB/s | Write speed: 2.3 MB/s | Disk active time: 100% | Response time: 5553 ms

    Problem: When I check the backend storage box, I see disk active time at 100%, but the response time peaks at about 75 ms.

    Any ideas on how I can troubleshoot this? I can't imagine I built this box that poorly.
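    In case it helps to compare apples to apples, here is roughly how I'm pulling the same numbers on both the VM/host side and the storage box. This is just a sketch using the standard PhysicalDisk counters, nothing specific to my setup:

    # Run this on the VM/host and on the storage box and compare.
    # The latency counters are reported in seconds, so 0.075 = 75 ms.
    $counters = '\PhysicalDisk(*)\Avg. Disk sec/Read',
                '\PhysicalDisk(*)\Avg. Disk sec/Write',
                '\PhysicalDisk(*)\% Disk Time',
                '\PhysicalDisk(*)\Current Disk Queue Length'

    Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 6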

    Thanks, 

    Robert 



    Tuesday, April 11, 2017 5:15 PM

All replies

  • Hi Sir,

>> VMs boot off an iSCSI volume.

    Generally, I would connect the iSCSI volumes to the Hyper-V host and then put the VM files on them.

    As for the performance troubleshooting, I'd first check the network bandwidth between the VM/host and the iSCSI target.

    I also suggest you power on one VM, shut down the other VMs, and then test the disk performance.

    I suspect the issue is related to the iSCSI connection.
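    For example, a quick sketch you could run on the Hyper-V host to verify the session state and the path to the target (the portal IP below is only a placeholder for your own iSCSI target):

    # Verify the iSCSI session and connection from the Hyper-V host
    Get-IscsiSession    | Select-Object TargetNodeAddress, IsConnected, NumberOfConnections
    Get-IscsiConnection | Select-Object TargetAddress, TargetPortNumber

    # Basic reachability check to the iSCSI target portal (replace with your target's IP)
    Test-NetConnection -ComputerName 192.168.1.10 -Port 3260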

     

    Best Regards,

    Elton


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Wednesday, April 12, 2017 10:14 AM
    Moderator
  • "About 8 VMs running, All Exchange, and 1 Brightmail messaging gateway"

You do not give us any information about how much memory is on the physical host and how close these 9 VMs come to using all of it. Are the VMs using dynamic memory? Exchange is an IO-intensive application. In the physical world, it is not uncommon to dedicate a physical host to the Exchange application.

    You speak of iSCSI for your storage, so that implies you have external storage for the VMs. Do you have all 8 of the Exchange VMs reading/writing to the same iSCSI LUN, or have you separated each Exchange data store onto its own LUN?
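    Something along these lines on each Hyper-V host would show both the memory picture and where each VM's virtual disks actually live. Just a sketch from an elevated PowerShell prompt, nothing specific to your environment:

    # Is dynamic memory on, and how close is demand to what is assigned?
    Get-VM | Select-Object Name, State, DynamicMemoryEnabled,
        @{n='AssignedGB'; e={[math]::Round($_.MemoryAssigned / 1GB, 1)}},
        @{n='DemandGB';   e={[math]::Round($_.MemoryDemand  / 1GB, 1)}}

    # Which volume/LUN does each VM's virtual disk sit on?
    Get-VM | Get-VMHardDiskDrive | Select-Object VMName, ControllerType, Path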


    tim

    Wednesday, April 12, 2017 1:24 PM
  • Hey,

Just to add: could you please give more information about your storage box? What network throughput do you have between your VM box and your storage box?

    Cheers,

    Alex Bykovskyi

    StarWind Software


    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

    Wednesday, April 12, 2017 2:00 PM
  • Hey Guys,

Thanks for the responses. So here is where we are:

    1) A single array of 4x 2 TB disks = 4 TB total space; all data is read from and written to these 4 disks.

    2) Each server is installed on its own CSV.

    3) 32 GB of RAM in each Hyper-V host (2 hosts).

    4) Windows 2012 R2 storage box; MPIO is not installed, no teaming.

    5) Exchange 2016 VMs: 8 GB of RAM each. Exchange 2013 VMs: 4-6 GB of RAM each. Exchange 2010 VMs: 4 GB of RAM each.

    6) 1GB Connection to the switch from the storage box and from the 2x Hyper-V Hosts.

I used to use a product called Intel CAS Software (http://www.intel.com/content/www/us/en/software/intel-cache-acceleration-software-performance.html), but my SSD drive overheated and I can't use it anymore.

    When I had the SSD caching (Intel CAS), the environment was way faster than it is now. I have CrystalDiskMark benchmarks I can share if needed. I currently have 5 of 18 VMs running.

    Robert




    Wednesday, April 12, 2017 2:54 PM
  • Here is one Exchange server (same server for both tests).

    No caching Enabled:

    -----------------------------------------------------------------------
    CrystalDiskMark 5.2.1 x64 (C) 2007-2017 hiyohiyo
                               Crystal Dew World : http://crystalmark.info/
    -----------------------------------------------------------------------
    * MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
    * KB = 1000 bytes, KiB = 1024 bytes
       Sequential Read (Q= 32,T= 1) :    80.949 MB/s
      Sequential Write (Q= 32,T= 1) :    36.044 MB/s
      Random Read 4KiB (Q= 32,T= 1) :     1.800 MB/s [   439.5 IOPS]
     Random Write 4KiB (Q= 32,T= 1) :     1.746 MB/s [   426.3 IOPS]
             Sequential Read (T= 1) :    30.410 MB/s
            Sequential Write (T= 1) :    27.682 MB/s
       Random Read 4KiB (Q= 1,T= 1) :     0.330 MB/s [    80.6 IOPS]
      Random Write 4KiB (Q= 1,T= 1) :     1.721 MB/s [   420.2 IOPS]

      Test : 1024 MiB [C: 52.2% (66.0/126.5 GiB)] (x5)  [Interval=5 sec]
      Date : 2017/04/12 8:05:11
        OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

    and with caching enabled:

    -----------------------------------------------------------------------
    CrystalDiskMark 5.2.1 x64 (C) 2007-2017 hiyohiyo
                               Crystal Dew World : http://crystalmark.info/
    -----------------------------------------------------------------------
    * MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
    * KB = 1000 bytes, KiB = 1024 bytes
       Sequential Read (Q= 32,T= 1) :   114.150 MB/s
      Sequential Write (Q= 32,T= 1) :   107.138 MB/s
      Random Read 4KiB (Q= 32,T= 1) :    75.976 MB/s [ 18548.8 IOPS]
     Random Write 4KiB (Q= 32,T= 1) :    91.835 MB/s [ 22420.7 IOPS]
             Sequential Read (T= 1) :    81.589 MB/s
            Sequential Write (T= 1) :    92.487 MB/s
       Random Read 4KiB (Q= 1,T= 1) :     4.359 MB/s [  1064.2 IOPS]
      Random Write 4KiB (Q= 1,T= 1) :     6.684 MB/s [  1631.8 IOPS]
      Test : 1024 MiB [C: 46.9% (59.3/126.5 GiB)] (x5)  [Interval=5 sec]
      Date : 2017/03/27 14:27:20
        OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)

    Thanks,

    Robert



    Wednesday, April 12, 2017 3:07 PM

  • Disk IO is always the first bottleneck, and you're running 8 Windows OSes off 4 slow 7.2K disks? Yeah, that will be slow. You've got a huge amount of disk IO contention there.

    With 3 chatty Exchange VMs, I'm surprised you're not getting disk timeouts.

    You want 18 running VMs total off a 4-disk RAID 10 with slow HDDs?

    You need a serious disk upgrade: SSDs or a bunch of 15K SAS drives. More spindles mean more speed too; stick with RAID 10 for HDDs, and consider RAID 5 only if you're using SSDs.
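    Back-of-the-envelope math, using rule-of-thumb figures (roughly 80 random IOPS per 7.2K SATA spindle and a RAID 10 write penalty of 2, so treat the results as estimates, not measurements):

    # Rough RAID 10 IOPS ceiling for 4x 7.2K SATA spindles
    $spindles     = 4
    $iopsPerDisk  = 80     # typical 7.2K SATA figure; varies by drive
    $writePenalty = 2      # RAID 10 writes land on two mirrored disks

    "Estimated random read IOPS : {0}" -f ($spindles * $iopsPerDisk)
    "Estimated random write IOPS: {0}" -f (($spindles * $iopsPerDisk) / $writePenalty)

    Split roughly 320 read / 160 write IOPS across 8 (let alone 18) Exchange VMs and the multi-second response times you're seeing aren't surprising.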

    Wednesday, April 12, 2017 4:58 PM
  • Agree with essjae. 

You should upgrade your underlying storage. My usual rule of thumb is that one 7.2k disk can handle about 2 VMs that are not performance hungry. In your case, Exchange can put a huge workload on your storage subsystem.
    I would recommend going with SSDs in RAID 5.

    Cheers,

    Alex Bykovskyi

    StarWind Software


    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

    Thursday, April 13, 2017 10:15 AM
  • "1GB Connection to the switch from the storage box and from the 2x Hyper-V Hosts. "

In addition to what the others said about an underconfigured storage subsystem, you are using a pretty slow connection to the storage for such IO-intensive workloads.

    Caveat: I am assuming you mean a 1 GbE connection (1 Gbps), because nobody measures network connections in GBps as you have written; they measure them in Gbps. If you actually have a 1 GBps connection, that would be closer to a 10 Gbps link, in which case you have a better connection.

    Furthermore, you state you are not using MPIO. That is something you should always configure, particularly when you have multiple processes accessing the storage.
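    If you do end up with more than one path to the target, getting MPIO going for iSCSI is only a couple of steps. A sketch, run on each Hyper-V host (note the feature install wants a reboot):

    # Add the MPIO feature (reboot afterwards)
    Install-WindowsFeature -Name Multipath-IO

    # Have the Microsoft DSM claim iSCSI devices, then verify
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    mpclaim -s -d    # lists MPIO-managed disks and their load-balance policy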


    tim

    Saturday, April 15, 2017 12:48 PM
  • This is what it looks like with the Intel CAS Software + Samsung 850 EVO SSD:

    -----------------------------------------------------------------------
    CrystalDiskMark 5.2.1 x64 (C) 2007-2017 hiyohiyo
                               Crystal Dew World : http://crystalmark.info/
    -----------------------------------------------------------------------
    * MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
    * KB = 1000 bytes, KiB = 1024 bytes

       Sequential Read (Q= 32,T= 1) :   114.850 MB/s
      Sequential Write (Q= 32,T= 1) :   112.037 MB/s
      Random Read 4KiB (Q= 32,T= 1) :    60.210 MB/s [ 14699.7 IOPS]
     Random Write 4KiB (Q= 32,T= 1) :    99.078 MB/s [ 24189.0 IOPS]
             Sequential Read (T= 1) :    89.545 MB/s
            Sequential Write (T= 1) :    96.898 MB/s
       Random Read 4KiB (Q= 1,T= 1) :     9.726 MB/s [  2374.5 IOPS]
      Random Write 4KiB (Q= 1,T= 1) :     9.406 MB/s [  2296.4 IOPS]

      Test : 1024 MiB [C: 42.2% (53.3/126.4 GiB)] (x5)  [Interval=5 sec]
      Date : 2017/04/13 11:34:31
        OS : Windows 10 Enterprise [10.0 Build 14393] (x64)

The above is from a Windows 10 VM, but I get similar results from my Exchange and SCVMM servers, etc. So I think the thread is correct: the issue is simply storage performance when only the 4x SATA drives are used.

    Robert



    Saturday, April 15, 2017 5:05 PM
  • "1GB Connection to the switch from the storage box and from the 2x Hyper-V Hosts. "

    In addition to what the others said about an underconfigured storage subsystem, you are using a pretty slow connection to the storage for such IO intensive workloads.  

    Caveat:  I am making the assumption that you mean a 1 GE connection (1 Gbps) because nobody measures network connections in GBps as you have stated, but they do measure them in Gbps.  If you actually have a 1 GB connection, that would be closer to a 10 Gbps connection.  In which case, you have a better connection.

    Furthermore, you state you are not using MPIO.  That is always something that you should configure, particularly when you have multiple processes accessing storage.


    tim

    Tim,

My understanding of MPIO is that it is used when there is more than one physical path to the storage. For instance, my setup:

    2x Hyper-V nodes (SSD boot drives only, 1 network card each)

    1x storage box: 4x 2 TB SATA drives, 1x 500 GB SSD drive (1 network card)

    A single path, single NIC, single cable, not teamed. Would I still have to use MPIO in my current setup, and if so, how would it help?

    Thanks,

    Robert



    Saturday, April 15, 2017 6:52 PM
  • Correct, MPIO should be configured for storage that has two or more paths.

    But I come back to the single 1 Gbps connection to the storage for all those IO-intensive VMs. If you have that many IO-intensive VMs trying to perform IO over a single 1 Gbps NIC, things are almost guaranteed to be slow.
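    A quick way to confirm that is to watch the storage-facing NIC while the VMs are busy. A sketch (the interface instance name will differ on your hardware), keeping in mind that a 1 Gbps link tops out around 120 MB/s in one direction:

    # Sample all NICs every 2 seconds for 20 seconds while the VMs are under load
    Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 2 -MaxSamples 10 |
        ForEach-Object { $_.CounterSamples | Select-Object InstanceName, CookedValue }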


    tim

    Monday, April 17, 2017 12:59 PM