iSCSI vSwitch setup for Hyper-V VMs

    Question

  • Hi All,

    We have three Hyper-V servers, two of which are part of a cluster. Two physical adapters have been dedicated to iSCSI, and LUNs are presented to the hosts for shared storage. Disk failover is controlled by iSCSI MPIO.

    We have a requirement for VMs to have direct iSCSI connection(s) to access the data on the disks; I think this is called a raw connection (correct me if I'm wrong). To configure this, my thought was to create two vSwitches and connect each vSwitch to one of the physical iSCSI interfaces (all iSCSI vSwitches will have the same name across all hosts).

    With this in mind, a couple of questions arise:

    1. Is this the correct way to do this?
    2. How do the VMs deal with disk failover events if the host is controlling the disk pathing via MPIO? My thought was to enable the 'NIC Teaming' option by ticking 'Enable this network adapter to be part of a team in the guest operating system' under 'Advanced Features'. Would this allow the VMs to continue to work gracefully in the event of a failover when the host utilises MPIO for redundancy/load balancing?

    I look forward to your professional guidance.

    Wednesday, January 18, 2017 6:06 AM


All replies

  • 1. Yes, if you want the VMs to have direct access to iSCSI LUNs, the best way is to create two virtual NICs and configure them with MPIO to access the iSCSI LUNs.  However, what is the need for direct access to the iSCSI targets?  You are not likely to gain any performance benefit.  And virtual machines are designed to work with virtual hard disks.  By 'tying' the VM to the physical iSCSI environment, you are limiting what you can do with VMs.  And, make sure that your VMs are referencing completely different iSCSI LUNs than the physical host if you do this.  Otherwise you will have immediate data corruption.

    2.  It is recommended to use MPIO for storage access and NOT NIC teaming.  You would define MPIO within the VMs, just like you do on the physical hosts.  These are completely independent operating systems going over completely independent network links.
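
    Roughly, inside a guest that would look something like this minimal sketch (the portal address is a placeholder, and with two guest vNICs you would connect once per path):

    # Inside the guest OS (example values only - adjust the portal address and initiator IPs)
    Install-WindowsFeature -Name Multipath-IO                  # add the MPIO feature
    Enable-MSDSMAutomaticClaim -BusType iSCSI                  # let MPIO claim iSCSI devices
    Set-Service MSiSCSI -StartupType Automatic
    Start-Service MSiSCSI
    New-IscsiTargetPortal -TargetPortalAddress 192.168.115.10  # example portal on the iSCSI subnet
    # Connect once per path; repeat with -InitiatorPortalAddress for each guest iSCSI vNIC
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true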


    . : | : . : | : . tim


    • Edited by Tim Cerling MVP Wednesday, January 18, 2017 2:35 PM
    • Proposed as answer by Eric Siron MVP Wednesday, January 18, 2017 2:44 PM
    • Marked as answer by DaveDLUX Thursday, January 19, 2017 11:37 PM
    Wednesday, January 18, 2017 2:33 PM
  • Hi Tim,

    Thanks for reply.

    What is the need for direct access to the iSCSI targets? - I will V2V about 30 servers from VMware to Hyper-V, and these servers are configured with direct access. If we do not configure direct access, this will result in loss of service. It would take too long to migrate the data to a vDisk first, so really this is our only option to begin with.

    It is recommended to use MPIO for storage access and NOT NIC teaming.  You would define MPIO within the VMs, just like you do on the physical hosts. - As mentioned, the Hyper-V hosts have 2 physical adapters connected to the iSCSI network. The hosts have been configured to utilise MPIO; are you suggesting using MPIO on both the host and the guest OS? Won't this cause issues if the HOST MPIO changes the path of iSCSI traffic through one physical adapter and the guest MPIO uses another? Or will the guest VM's MPIO handle the traffic, with the HOST simply acting as a pass-through, so there is no load balancing etc.?

    These are completely independent operating systems going over completely independent network links. - The network connections will be vSwitches connected to the physical adapters on the host, e.g.


    • Edited by DaveDLUX Thursday, January 19, 2017 12:15 AM
    Thursday, January 19, 2017 12:14 AM
  • Hi DaveDLUX,

    >>The hosts have been configured to utilise MPIO, are you suggesting to use both MPIO on the host and guest OS?

    As Tim mentioned, they are completely independent operating systems going over completely independent network links. They would not affect each other.

    Once you have created an external virtual switch, the physical NIC acts like a physical switch for the virtual NICs on the VMs and the host. So, the network connections on the host and guest are independent.

    Best Regards,

    Leo



    Thursday, January 19, 2017 9:39 AM
    Moderator
  • Hi Leo,

    Thanks for the reply. When a vSwitch is created and assigned to a physical NIC, the physical NIC's connection 'items' are removed, and the vEthernet adapter then takes over the adapter's IP information.

    If the host is using the vEthernet to communicate iSCSI traffic, and the same vEthernet is used by the vSwitch for the guest VMs, wouldn't that mean that both the HOST and VMs are sharing the same network link?

    I am probably missing something so please educate me on this.

    Kind regards.

    Thursday, January 19, 2017 10:41 PM
  • "If the host is using the vEthernet to communicate iSCSI traffic, and the same vEthernet is used by the vSwitch for the guest VMs, wouldn't that mean that both the HOST and VMs are sharing the same network link? "

    Yes, when a physical NIC is converted into a virtual switch, and that switch is shared with the host OS, both physical and virtual are sharing the same NIC, but it is still two separate instances of the operating system talking through the switch. They are not using the same vEthernet.  They are using the same virtual switch, but each has its own virtual NIC.  Think of a physical environment.  You have a physical switch.  You have two different physical computers talking to the physical switch.  They are sharing the switch, but they each have their own data streams talking to the switch.  In the virtual case, this is going over a single wire from the NIC, but the host and the VM could be talking on completely different subnets and/or VLANs.

    You should NOT configure a NIC used for iSCSI traffic to also be a virtual switch. You should dedicate iSCSI NICs to that use and that use only. You would create additional virtual switches for the VMs to use for assigning virtual NICs to use for accessing iSCSI.
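
    As a rough sketch, that could look like the following (the adapter, switch, and VM names are examples only):

    # On the host: external switches on NICs set aside for VM iSCSI traffic, not shared with the host OS
    New-VMSwitch -Name 'vSwitch_iSCSI_A' -NetAdapterName 'iSCSI-VM-1' -AllowManagementOS $false
    New-VMSwitch -Name 'vSwitch_iSCSI_B' -NetAdapterName 'iSCSI-VM-2' -AllowManagementOS $false
    # Give each VM one vNIC per switch so the guest can run MPIO across two paths
    Add-VMNetworkAdapter -VMName 'VM01' -Name 'iSCSI_A' -SwitchName 'vSwitch_iSCSI_A'
    Add-VMNetworkAdapter -VMName 'VM01' -Name 'iSCSI_B' -SwitchName 'vSwitch_iSCSI_B'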

    Technically it might work (it's amazing what works even though the engineers may not have designed for it), but I doubt that anyone has tested such a scenario because it would not be recommended.  So you would be breaking new ground.


    . : | : . : | : . tim


    • Edited by Tim Cerling MVP Thursday, January 19, 2017 10:54 PM
    • Marked as answer by DaveDLUX Thursday, January 19, 2017 11:37 PM
    Thursday, January 19, 2017 10:53 PM
  • Hi Tim,

    We have two networks, 192.168.100.xxx and 192.168.115.xxx. The .115 network is used for iSCSI and the .100 network is for data.

    When creating the vNICs, how do I connect them and ensure the vNICs send their traffic out via the physical iSCSI adapters?

    Thanks

    Friday, February 3, 2017 4:24 AM
  • From your picture, it looks like you have created external virtual switches on your iSCSI NICs.  If that's the case, then for the VMs you simply use that virtual switch for the VM virtual NICs and assign an address on your .115 subnet.  Do not configure any routing or route paths, and all iSCSI access will be over the .115 network.
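
    Inside the guest, the vNIC on that switch just gets a static .115 address with no gateway, along these lines (the interface alias and address are examples):

    # Inside the guest: static address on the iSCSI subnet; deliberately no -DefaultGateway, so nothing gets routed
    New-NetIPAddress -InterfaceAlias 'Ethernet 2' -IPAddress 192.168.115.50 -PrefixLength 24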

    You have only .115 for your iSCSI network?  Except when using Dell storage, you should be configured with two separate subnets for your two NICs.  Dell has special instructions for configuring their iSCSI access.


    . : | : . : | : . tim

    Friday, February 3, 2017 2:03 PM
  • Tim,

    Thanks for the reply; ignoring the picture for now, I created two vNICs, viSCSI_1 and viSCSI_2, using the commands:

    # Create two host (management OS) vNICs on the iSCSI virtual switch
    Add-VMNetworkAdapter -ManagementOS -Name viSCSI_1 -SwitchName vSwitch_iSCSI
    Add-VMNetworkAdapter -ManagementOS -Name viSCSI_2 -SwitchName vSwitch_iSCSI
    # Remove the default host vNIC that was created with the same name as the switch
    Remove-VMNetworkAdapter -ManagementOS -Name 'vSwitch_iSCSI'

    So now that there are two vNICs and they are pointing to vSwitch_iSCSI, how do I ensure the vNICs are using the iSCSI physical adapters?

    I have assigned them .115.xxx addresses and configured the virtual adapters, but if DHCP is left enabled they are assigned .100.xxx addresses???

    Sunday, February 5, 2017 9:44 PM
  • how do I ensure the vNICs are using the iSCSI physical adapters?

    If the vNICs are on the external vSwitch defined on the iSCSI network, there is no other network they can use.

    I have assigned them .115.xxx addresses and configured the virtual adapters, but if DHCP is left enabled they are assigned .100.xxx addresses???

    You don't have the NICs isolated with VLANs.  No big deal.  You should be fine assigning static IP addresses.  I am still concerned about two NICs on the same subnet.  Other than for Dell storage, iSCSI NICs should be configured on different IP subnets, even if they are going through the same switch.  Although having everything go through a single NIC, as you have, somewhat defeats the purpose of having multiple paths.  Loss of that single NIC loses all access.
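
    For example, on the host (interface aliases and addresses are examples only):

    # Static .115 addresses on the host's iSCSI vNICs, with DHCP turned off so they never pick up .100 leases
    New-NetIPAddress -InterfaceAlias 'vEthernet (viSCSI_1)' -IPAddress 192.168.115.21 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias 'vEthernet (viSCSI_2)' -IPAddress 192.168.115.22 -PrefixLength 24
    Set-NetIPInterface -InterfaceAlias 'vEthernet (viSCSI_1)' -Dhcp Disabled
    Set-NetIPInterface -InterfaceAlias 'vEthernet (viSCSI_2)' -Dhcp Disabled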


    . : | : . : | : . tim

    Monday, February 6, 2017 2:32 PM
  • Hi Tim, 

    I am confused about one of your earlier posts:

    You should NOT configure a NIC used for iSCSI traffic to also be a virtual switch. You should dedicate iSCSI NICs to that use and that use only. You would create additional virtual switches for the VMs to use for assigning virtual NICs to use for accessing iSCSI.

    My understanding of this was that, since the two physical NICs are connected to the iSCSI network, I should not configure a vSwitch against either of the physical NICs; I should instead create two vNICs and connect a vSwitch to those vNICs.

    To accomplish this I created a vSwitch (vSwitch_iSCSI), configured as an internal switch. I then created the two vNICs and connected them to the vSwitch.

    I am guessing I have misunderstood something and taken the wrong steps; could you clarify and guide me to do this correctly, please?

    Also, sorry, I realised I forgot to answer your question about the .115.xxx. We are using Dell storage (EqualLogic); our previous VMware environment did not require separate subnets for iSCSI, so I figured it would work the same in Hyper-V. Let me know if I have assumed wrong.

    Monday, February 6, 2017 11:11 PM
  • "My understanding of this was that, since the two physical NICs are connected to the iSCSI network, I should not configure a vSwitch against either of the physical NICs; I should instead create two vNICs and connect a vSwitch to those vNICs."

    Recommended practice is to isolate physical NICs between host usage and virtual machine usage.  That was my reasoning behind the statement to not define virtual switches on the physical NICs that are currently defined for use by iSCSI.  Recommended practice would be to use another pair of physical NICs and create virtual switches on them to be used for iSCSI by the VMs.  If you do not have enough physical NICs, or you do not expect high volume, you could get by with defining external virtual switches on the host's physical NICs that are defined to be used by iSCSI and share those with the VMs.  By creating the iSCSI virtual switches as internal switches, you have to enable routing in order to get the traffic off the internal network and onto the external network.  That is more overhead.  Not recommended.
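
    If you would rather keep the existing vSwitch_iSCSI than recreate it, one option is to bind it to a physical NIC, which turns it into an external switch (the adapter name below is an example):

    # Convert the internal switch to an external one by attaching it to a physical NIC
    Set-VMSwitch -Name 'vSwitch_iSCSI' -NetAdapterName 'iSCSI-VM-1'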


    . : | : . : | : . tim

    Monday, February 6, 2017 11:51 PM
  • Thanks Tim,

    Last question: I have 2 adapters configured for iSCSI for the HOST @ 10 Gb and 2 spare adapters available @ 1 Gb.

    I would like to use these two for the VMs' directly attached iSCSI targets. Seeing as our iSCSI network scope is 115.xxx/24, how can I ensure the HOST does not use the 1 Gb adapters for iSCSI traffic?

    I have no issue with the VMs using the 1 Gb adapters via the vSwitches, but I don't want FCM to start directing iSCSI traffic through those ports.

    Thanks.

    Wednesday, February 8, 2017 4:22 AM
  • When you create your external virtual switches on those 1 GbE NICs, do not check the box to allow the host OS to share the switch.  Then the host will not even see them.  Only the VMs will be able to put traffic onto those physical NICs by going through the virtual switches that are visible only to the VMs.
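
    If you create the switches from PowerShell instead of the GUI, the equivalent of leaving that box unchecked is -AllowManagementOS $false on New-VMSwitch (as in the earlier sketch); you can then confirm the host has no vNIC on those switches:

    # Should list no management-OS vNIC bound to the VM-only iSCSI switches
    Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName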

    . : | : . : | : . tim

    Wednesday, February 8, 2017 1:27 PM