Using an iSCSI initiator within a VM vs. a pass-through disk vs. a VHDX

  • Question

  • With the advent of Hyper-V version 3 (2012), I think this question needs to be revisited.  I was told long ago (about a year and a half :) ) that it was frowned upon to use an iSCSI connection from within a child VM.  If the data was large (more than 2 TB), then using a pass-through disk was preferred. You lost HA, live migration, etc., but oh well. Now things have definitely changed, but the matrix for determining the right answer is too much for me.  So I ask the gods of all Microsoft knowledge: what is the current state of this question?  Can I (should I) use an iSCSI initiator in a VM to attach to iSCSI SAN storage?  Should I just convert the disk to a VHDX and gain all things good and holy that the MS gods want me to realize with a virtual environment?  Should I stick with pass-through because it still offers APPRECIABLY better performance (in my instance I'm talking about file server storage, but we can assume for everyone's benefit that we might be talking about more robust performance needs)?  What say you?

    Thanks,


    DLovitt

    Thursday, December 20, 2012 10:14 PM


All replies

  • Actually, I run across early adopters all the time who ask why folks still use pass-through disks at all.

    There is really no sound, solid reason to still use them beyond habit or policy.

    And now you even have LBFO teaming in the box, so you can make a bigger pipe for the VM and its traffic - or multi-home it.
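    A rough sketch of what that looks like in 2012 (the adapter, team and switch names here are just placeholders):

        # Team two physical NICs on the host (adapter names are examples only)
        New-NetLbfoTeam -Name "VM-Team" -TeamMembers "NIC1","NIC2" `
            -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

        # Bind a virtual switch to the team interface so the VMs get the bigger pipe
        New-VMSwitch -Name "Teamed-vSwitch" -NetAdapterName "VM-Team" -AllowManagementOS $false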

    Frankly, only community folks have recommended using pass-through disks.  Maybe back in the Hyper-V v1 days.  But I have not heard it at all since 2008 R2 SP1.

    I am sure that others will jump in with their opinions as well.


    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    • Marked as answer by Lawrence, Wednesday, January 2, 2013 7:00 AM
    Thursday, December 20, 2012 10:57 PM
  • Hi,

    > that it was frowned upon to use an iSCSI connection from within a child VM.

    First, to be clear: you can use an iSCSI connection in a guest VM, but it is not used very often. In my opinion it's usually not necessary, and the reason is easy to explain.

    Why would you want to use an iSCSI connection in the guest VM?

    High IOPS?

    Configuring iSCSI on the Hyper-V host and then using a pass-through disk can do the same, and even better with an HBA card.

    Larger disk space? More than 2 TB?

    Configuring iSCSI on the Hyper-V host and then using a pass-through disk can do the same (see the sketch below).

    Lost HA and live migration?

    You can use a pass-through disk in a Hyper-V cluster.

    The reason we don't usually use iSCSI in the guest VM is that virtual machines can only connect to iSCSI devices with a software iSCSI initiator (lower performance, and it costs VM CPU resources), whereas the management operating system can use hardware host bus adapters (HBAs). The main reason to use iSCSI in a guest VM is to build a guest VM cluster, since a cluster needs shared volumes.
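    For reference, the host-side approach is roughly the following (the portal address, IQN, disk number and VM name are placeholders, adjust for your environment):

        # On the Hyper-V host: connect to the iSCSI target (address/IQN are examples)
        New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
        Connect-IscsiTarget -NodeAddress "iqn.2010-01.com.example:target1" -IsPersistent $true

        # The new LUN must be offline on the host before it can be passed through
        Set-Disk -Number 3 -IsOffline $true

        # Attach the raw disk to the VM as a pass-through disk
        Add-VMHardDiskDrive -VMName "FileServer01" -ControllerType SCSI -DiskNumber 3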

    > things have definitely changed, but the matrix of determining the right answer is too much for me. 

    Yes, new features have been added or changed in Hyper-V in Windows Server 2012, such as the Virtual Fibre Channel feature, which makes it possible to virtualize workloads and applications that require direct access to Fibre Channel-based storage. But I don't see changes or new features for iSCSI in the guest VM, and I don't think they are really necessary.
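    If you do need direct SAN access from the VM and you have Fibre Channel, that new feature is configured roughly like this (the SAN and VM names are examples, and you may need to filter Get-InitiatorPort down to your FC HBA ports):

        # On the host: create a virtual SAN backed by a physical FC HBA port
        New-VMSan -Name "ProductionSAN" -HostBusAdapter (Get-InitiatorPort)

        # Add a synthetic FC adapter to the VM and connect it to that virtual SAN
        Add-VMFibreChannelHba -VMName "SQL01" -SanName "ProductionSAN"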

    For more information, please refer to the following MS articles:

    Hyper-V & iSCSI - in the parent or in the virtual machine
    http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/07/02/hyper-v-amp-iscsi-in-the-parent-or-in-the-virtual-machine.aspx
    What's New in Microsoft Hyper-V Server 2012
    http://technet.microsoft.com/en-us/library/hh833682.aspx
    Hyper-V: How to Add a Pass-Through Disk on a Failover Cluster
    http://social.technet.microsoft.com/wiki/contents/articles/440.hyper-v-how-to-add-a-pass-through-disk-on-a-failover-cluster-en-us.aspx

    Hope this helps!

    TechNet Subscriber Support

    If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.


    Lawrence

    TechNet Community Support



    • Edited by Lawrence, Friday, December 21, 2012 3:26 AM
    • Marked as answer by Lawrence, Wednesday, January 2, 2013 7:00 AM
    Friday, December 21, 2012 3:23 AM
  • Ben's blog post was talking about some possible reasons not to use iSCSI for guests.  It would be interesting to see if he wants to update that now that we have VHDX, plus improvements in hardware NICs that give VMs ready access to the NIC, bypassing the virtual switch, and SR-IOV for putting together whopping big pipes if the I/O is needed.
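    (SR-IOV, for example, has to be enabled when the virtual switch is created and is then weighted per VM NIC - the adapter and VM names below are just examples:)

        # Create an SR-IOV capable virtual switch (IOV cannot be enabled afterwards)
        New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "10GbE-1" -EnableIov $true

        # Give the VM's network adapter an IOV weight so it can use a virtual function
        Set-VMNetworkAdapter -VMName "FileServer01" -IovWeight 100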

    If you are after the absolute best performance, I would agree that pass-through disks will most likely give you marginally better performance.  But I would like to see some hard statistics to back up some of the points mentioned in Ben's article now that we have 2012, which is what the original poster was asking about.

    The management aspect of iSCSI over pass-through is also a consideration.  If people were interested only in performance and not ease of management, I don't think virtualization would be where it is today.  It is the management and the ability to share resources that has caused it to grow, more than its ability to perform as well as physical hardware.


    tim

    • Marked as answer by Lawrence, Wednesday, January 2, 2013 7:00 AM
    Friday, December 21, 2012 11:35 PM
  • Any idea on the performance difference between iSCSI within the VM vs. a virtual FC adapter, if everything else is equal?

    Sunday, December 23, 2012 8:38 PM
  • Any idea on the performance difference between iSCSI within the VM vs. a virtual FC adapter, if everything else is equal?

    If you run the same or comparable back end, they are identical.

    StarWind iSCSI SAN & NAS

    Monday, December 24, 2012 1:57 PM
  • Hi,

    As this thread has been quiet for a while, we assume that the issue has been resolved. At this time, we will mark it as 'Answered' as the previous steps should be helpful for many similar scenarios.

    If the issue still persists and you want to return to this question, please reply to this post directly so we will be notified and follow it up. You can also choose to unmark the answer as you wish.

    In addition, we'd love to hear your feedback about the solution. By sharing your experience you can help other community members facing similar problems.

    Thanks!


    Lawrence

    TechNet Community Support

    Wednesday, January 2, 2013 7:00 AM
  • Well, I don't agree that anything has been answered at this point.  So let me summarize what I see so far:

    There appear to be "best practices" that favor using pass-through disks.  I can only assume that those best practices are based solely on performance.  As Tim points out, best practices could, and probably should, also take flexibility and ease of management into account.  With those factors in play, iSCSI in the guest as an option, or even as a go-to solution, makes loads of sense.

    Nobody has addressed the use of VHDX files for large storage needs.  Why?  Is there some concern or "feeling" that people have about this?  I must admit, putting extremely large files "out there" and hoping they don't become corrupt or cause some unforeseen management, backup, portability or whatever problem has me slightly concerned.  Anyone have any experience with problems using something like this?

    Brian, just to clarify, you have people who use iSCSI in guests and it all works fine for them?  Works well for them?  They think others are a little ridiculous for using pass-through disks?  Can you characterize these admins and their systems and what they are doing and feeling about this decision to go against a "best practice" for me?  Are they competent, rock stars, smelly, living in mom's basement setting up labs with this config? :)

    Lawrence, you mention that HA/failover/clustering and migration work with pass-through disks.  For the life of me, I can't visualize how this can possibly work.  Can you elaborate?

    Part of my continued "problem" from my original post is that MS has put a lot into Hyper-V 3/2012, and it's at least muddying the waters for me.  It's almost as if they are saying, "Look at all we've done to improve your life and our hypervisor and associated tech. Use physical and virtual machines and physical and virtual storage any way you want and whatever works best for you - it all works."  But when pressed with something like my question, the backpedaling starts: "Well, there are 'best practices' involved and things are not necessarily going to work the way you might think."  As a matter of fact, I think we should stop using terms like "best practices" and start working with real-world scenarios, pros and cons, statistics, and performance measurements, especially when dealing with obfuscated technology like virtualization that has many layers where things can and do go wrong.  MS labs should be publishing "Hey, this is what we did, and these were the results.  Then, when we did it this way, we got this" - not "best practices".  Just my opinion.

    So, bump for any additional insights?

    Thank you for the responses,

    DML

     

    DLovitt

    Tuesday, January 29, 2013 9:01 PM
  • I have yet to see a definitive "best practice" when it comes to storage configuration.  I have seen lots of convention and conjecture, but little that I would consider best practice.  I have seen folks insist on configurations only because that is what they know, and they are unwilling to investigate anything different.

    In regards to VHDX - it is dynamic by nature, it fixes a large bag of problems that VHD has, and it is the underlying enabler above Storage Spaces (the latest iteration of Windows software RAID) - I think there is a big investment in it, and therefore in scenarios that involve very large virtual disks.  If you are buying into the MSFT Storage Server or using Storage Spaces, you are buying into very large virtual disks (at least in theory).
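    (Creating a big dynamic VHDX, or converting an old VHD, is a one-liner these days - the paths and size here are made up:)

        # VHDX raises the old 2 TB VHD ceiling to 64 TB
        New-VHD -Path "D:\VHDs\FileData.vhdx" -SizeBytes 10TB -Dynamic

        # Existing VHDs can be converted; the VM must be off or the disk detached first
        Convert-VHD -Path "D:\VHDs\OldData.vhd" -DestinationPath "D:\VHDs\OldData.vhdx"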

    In regards to using iSCSI from the VMs - that was the official MSFT story for almost two years: iSCSI from your VMs direct to the SAN if you want to cluster VMs.  This only changed when Clustering added the capability of using a file share as a witness disk.  But you still don't have many options when clustering VMs.

    http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/07/02/hyper-v-amp-iscsi-in-the-parent-or-in-the-virtual-machine.aspx
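    (Inside the guest it is the same software initiator you would use on physical hardware - something like this, with a placeholder portal address:)

        # Inside the guest VM: start the initiator service and connect to the SAN
        Set-Service MSiSCSI -StartupType Automatic
        Start-Service MSiSCSI

        New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
        Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true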

    In the end it comes down to folks implementing what they know, what they are comfortable with, and what risk they accept.  This is where we get convention.  Best practices are actual scenarios that are defined and tested, generally by the manufacturer.

    That said, many virtualization best practices are around to avoid historic potholes, or have grown out of convention ("I have always done it that way").

    Back in 2004, when I created my first virtual machine, pass-through disks were the only option for large volumes.  And it was all about performance.  That has continued to stick in spite of publications showing otherwise. This is old, but take a look: http://download.microsoft.com/download/0/7/7/0778C0BB-5281-4390-92CD-EC138A18F2F9/WS08_R2_VHD_Performance_WhitePaper.docx


    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Tuesday, January 29, 2013 9:53 PM
  • Came across this thread looking for information and best practices on this subject.  For what it's worth, I did some comparison testing of pass-through vs. guest-connected disks and was surprised by the results.

    This testing was done on a Nimble CS260 with (2) 500GB drives set to the SQL Server 2012 performance policy. I passed one disk through to a VM (the host is configured with MPIO on dual NICs set to least queue depth) and connected the other disk at the guest level with MPIO also set to least queue depth.

    I ran SQLIO on a 60GB test file on each disk doing random and sequential reads and writes. A total of 4 tests per disk.
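    For anyone wanting to reproduce something similar: the least-queue-depth default can be set through the MPIO module, and the SQLIO runs were roughly of this shape (the block sizes, thread and outstanding I/O counts shown here are illustrative, not my exact parameters):

        # Default the Microsoft DSM load-balance policy to Least Queue Depth
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD

        # Illustrative SQLIO runs against the 60GB test file (parameters are examples)
        .\sqlio.exe -kW -frandom     -b8  -t4 -o8 -s120 -LS E:\testfile.dat
        .\sqlio.exe -kR -frandom     -b8  -t4 -o8 -s120 -LS E:\testfile.dat
        .\sqlio.exe -kW -fsequential -b64 -t4 -o8 -s120 -LS E:\testfile.dat
        .\sqlio.exe -kR -fsequential -b64 -t4 -o8 -s120 -LS E:\testfile.dat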

    To summarize the results, I basically found that the disk connected at the guest level outperformed the pass-through disk on random writes, sequential writes and sequential reads by about 15%. The pass-through disk eked out the guest-level disk by a hair on random reads. In general, the pass-through disk had higher latency numbers than the disk connected at the guest level.

    I'm not really sure what this means, but it would be interesting to hear if anyone else has done similar tests with similar results.  At this point, it looks like we will be connecting our iSCSI drives at the guest.


    • Edited by danwheeler Monday, April 22, 2013 11:50 PM
    Monday, April 22, 2013 11:49 PM
  • Sounds like it's a SAN vendor-specific issue.

    StarWind iSCSI SAN & NAS

    Tuesday, April 23, 2013 8:49 AM
  • Any new insights on what Dan found?  I'm about to implement one of these strategies (for me, this whole conversation was about a server that needs to be retired, and it has come time to do it) and am still up in the air about the best solution.  Particularly, any new insights on VHDX and very large volumes?

    DLovitt

    Wednesday, May 22, 2013 12:26 AM