Hyper-V with Simple Storage Space CSV

  • Question

  • I'm looking for ways to improve the performance of our virtual machines, mainly on the I/O side, because I believe CPU, RAM, and networking are already well taken care of.  Ever since Server 2008 R2 with Hyper-V we've had issues with disk latency.  Most of that was resolved by upgrading to 2012 R2 with the new CSV technology and SMB 3.0 backend.  We're still seeing issues, though, and I really need to track down what's causing them, because VM performance isn't anywhere near where it should be.

    We recently invested in a new SAN array (not just for virtualization) that uses quad 8Gb controllers and currently has several shelves of disks (important to note they are all 7.2k RPM, but striped heavily for increased read and write throughput).  I'm interested in testing this array to see how it compares, but I'm looking for suggestions on how to implement it properly.  My main idea is to present several disks to our host nodes (most likely 2TB each), place them into a simple storage space (with no redundancy), and then use that storage space as a CSV for the Hyper-V cluster.  I'm sure the SAN will allow the host to use all of the allocated space/IOPS for a specific LUN, but the host operating system will most likely not be able to channel as many IOPS as we need through one LUN, given FC overhead, processing power allocated per LUN, and so on.  My feeling is that this will help by allowing the host to balance the load across several disks, even if they are from the same array, hopefully letting it utilize more of the available IOPS.
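    The steps I have in mind would look roughly like this in PowerShell (pool, disk, and volume names are placeholders, and whether the pooled disks end up cluster-eligible will depend on how the LUNs are presented):

```powershell
# Gather the newly presented SAN LUNs (shown here as pool-eligible disks)
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool and a Simple (striped, no resiliency) space on it
New-StoragePool -FriendlyName "SanPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "SanPool" -FriendlyName "CsvDisk" `
    -ResiliencySettingName Simple -UseMaximumSize

# Initialize and format the resulting disk
Get-VirtualDisk -FriendlyName "CsvDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1"

# Hand the disk to the failover cluster, then promote it to a CSV
# ("Cluster Disk 1" is whatever name the cluster assigns)
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```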

    Any thoughts on this? Or even if it will work?

    Tuesday, May 24, 2016 3:15 PM

Answers

  • Hi Dascione,

    >> My main idea is to get several disks presented to our host nodes (most likely 2TB each), place them into a simple storage space (with no redundancy) and then use that storage space as a CSV for the Hyper-V cluster.

    It would work.

    You could implement MPIO if you have multiple Fibre Channel ports on the server.
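    If MPIO is not enabled yet, a rough setup sketch would be (the load-balance policy shown is just one example):

```powershell
# Install the MPIO feature (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO

# Claim all MPIO-capable devices (including FC LUNs) for the Microsoft DSM;
# -r allows an automatic reboot if one is needed
mpclaim.exe -r -i -a ""

# Example: set the default load-balance policy to Round Robin
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```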

    Here is the reference:

    https://technet.microsoft.com/en-us/library/ee619734(v=ws.10).aspx 

    Best Regards,

    Leo


    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact tnmff@microsoft.com.

    • Marked as answer by dascione Tuesday, May 31, 2016 11:53 AM
    Wednesday, May 25, 2016 7:35 AM

All replies

  • Leo,

    Thanks for the insight.  We're currently using MPIO to serve our existing disks; that said, performance is still not where we want it to be.  But if you're sure the storage space CSV will work, I think that's the direction I'm going to try this week.

    Thank you

    Tuesday, May 31, 2016 11:53 AM
  • I would say that you need to measure your latency and compare it with your expectations before moving forward and trying different approaches. Perfmon and storport tracing would be a pretty good start. Also make sure your system is running the latest updated drivers and components, such as: Storport.sys, MPIO, NTFS...
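    For example, the key latency counters and a storport trace can be collected like this (output path and sample interval are just illustrative; as a rough rule of thumb, sustained disk latency above ~15-25 ms is worth a closer look):

```powershell
# Sample per-disk latency with Perfmon counters (values are in seconds)
$counters = '\PhysicalDisk(*)\Avg. Disk sec/Read',
            '\PhysicalDisk(*)\Avg. Disk sec/Write'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12

# Capture a StorPort ETW trace to see latency below the file system
logman create trace "storport" -ow -o C:\perflogs\storport.etl `
    -p "Microsoft-Windows-StorPort" 0xffffffffffffffff 0xff -ets
# ...reproduce the slowness, then stop the trace:
logman stop "storport" -ets
```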
    Tuesday, May 31, 2016 1:18 PM