Deploying Hyper-V on Cisco UCS (converged networking in VMM or on hardware)

  • Question

  • Hello Everyone

    This is kind of a high-level question and I wanted to know people's opinions on how to set up networking. Currently we have a few clusters running on six 1 GbE links and use converged networking to team all NICs together, with VMM logical switches to split traffic into management, cluster, live migration, and VM guest networks, using VLAN tags to identify which VLAN they are on.
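
    For reference, this is roughly what that host-side converged setup looks like in PowerShell. It is a minimal sketch assuming a Windows Server 2012 R2 host; the adapter names, VLAN IDs, and weights are placeholders for illustration, not values from this thread.

    # Team the physical 1 GbE links (switch-independent teaming).
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4","NIC5","NIC6" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

    # Create a Hyper-V switch on the team with weight-based QoS.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add host vNICs for each traffic class and tag them with their VLANs.
    foreach ($net in @(
            @{Name="Management";    Vlan=10; Weight=10},
            @{Name="Cluster";       Vlan=20; Weight=10},
            @{Name="LiveMigration"; Vlan=30; Weight=30})) {
        Add-VMNetworkAdapter -ManagementOS -Name $net.Name -SwitchName "ConvergedSwitch"
        Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $net.Name -Access -VlanId $net.Vlan
        Set-VMNetworkAdapter -ManagementOS -Name $net.Name -MinimumBandwidthWeight $net.Weight
    }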

    With Cisco UCS we now have 10 GbE networking and the ability to split traffic up at the hardware level while still taking advantage of shared bandwidth. If I am understanding the documentation right, I can carve each NIC up into smaller vNICs and prioritize traffic, but still have the full bandwidth of the link available.

    Just wondering what the pros and cons are for each setup. The obvious one for moving converged networking into UCS is that it would take the network processing load off the host and into the FIs.

    Also, we are using iSCSI with Nimble SANs.

    Please let me know your thoughts and any Hyper-V best practices.

    Thanks

    Tuesday, June 16, 2015 3:56 AM

Answers

  • Hi,

    Maybe I can provide you with some information. I have designed and implemented a Secure Multi-Tenancy environment based on a Cisco FlexPod. This environment has been running for two years now and is still very stable. We use boot from SAN for stateless computing. We don't have FC or FCoE; we use iSCSI throughout the environment.

    There is a lot to tell about converged networking, a bit too much for one reply. But the quick answer is: yes, definitely use converged networking with Cisco UCS, by configuring multiple vNICs, one for each purpose, with the right policies. Of course there are pros and cons to converged networking on UCS vs. on Hyper-V; I will explain a few at the end.

    Because I had to design a Secure Multi-Tenancy environment, my design might be somewhat different from yours. But if you also use boot from SAN with iSCSI and don't require Secure Multi-Tenancy, this is the minimum set of vNICs you need:

    • iSCSI-A
    • iSCSI-B
    • Management (with Fabric Failover)
    • Cluster (with Fabric Failover)
    • Live Migration (with Fabric Failover)
    • VM-Ethernet (with Fabric Failover)


    If you need raw device mappings with iSCSI inside VMs, you can simply add another VM-iSCSI-A and VM-iSCSI-B. There are some rules you have to keep in mind:

    • Never use NIC teaming! Always use Fabric Failover or MPIO. As you know, Cisco UCS uses a different architecture with the Fabric Interconnects. If you configure NIC teaming, L2 traffic from host to host on the same blade chassis, or between rack servers, can end up flowing through your uplink switches. This is something you need to avoid; L2 traffic should only flow through the Fabric Interconnects.
    • Do not enable Fabric Failover on the iSCSI vNICs. Use MPIO only (see the sketch after this list).
    • For optimal performance, configure a VMQ Connection Policy for any vNIC that is going to carry a Hyper-V vSwitch.
    • For optimal performance, configure jumbo frames where it suits (e.g., your vNICs for iSCSI and Live Migration).
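
    A minimal sketch of the MPIO rule on the Windows side. The portal addresses, and the idea of one portal reachable per fabric, are assumptions for illustration, not values from this environment.

    # Make sure the iSCSI initiator service is running and install MPIO.
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI
    Install-WindowsFeature -Name Multipath-IO

    # Let the Microsoft DSM claim iSCSI devices automatically.
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Register a target portal per fabric and connect with multipath enabled.
    # (You can also pin each session to a specific vNIC with -InitiatorPortalAddress.)
    New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"   # reachable via iSCSI-A
    New-IscsiTargetPortal -TargetPortalAddress "192.168.20.50"   # reachable via iSCSI-B
    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

    # Verify that each LUN shows multiple paths.
    mpclaim.exe -s -d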


    As you know, the nice thing about UCS is that you have the option to configure as many vNICs as you want (depending on your hardware, up to 256 vNICs). Each vNIC on UCS can be configured with the right settings for its purpose. For example, you can configure jumbo frames (MTU 9000) on the Live Migration and iSCSI interfaces, and a VMQ policy on the vNICs used by VMs. And I can go on for hours...
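
    The UCS policies only cover the adapter side; Windows still needs matching settings. A rough sketch, with placeholder adapter names and addresses (the jumbo-frame registry keyword and value can vary per driver):

    # Set MTU 9000 on the interfaces that should use jumbo frames.
    foreach ($nic in "iSCSI-A","iSCSI-B","LiveMigration") {
        Set-NetAdapterAdvancedProperty -Name $nic -RegistryKeyword "*JumboPacket" -RegistryValue 9014
    }

    # Verify end to end with a do-not-fragment ping (8972 = 9000 minus IP/ICMP headers).
    ping.exe -f -l 8972 192.168.10.50

    # Confirm VMQ is active on the vNIC that carries the Hyper-V vSwitch.
    Get-NetAdapterVmq -Name "VM-Ethernet" | Format-Table Name, Enabled, NumberOfReceiveQueues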

    Of course, using converged networking within Hyper-V itself is also an option. If you decide to configure fewer vNICs in UCS and build converged networking (host vNICs) in Hyper-V instead, keep the following in mind:

    • Your vNICs (host) and vmNICs (guest) can never use more than roughly 3.2-4.3 Gbps, because they can't use RSS (Receive Side Scaling) and must use VMQ (Virtual Machine Queues). This is normal for VMs, but it is a downside for Management, Cluster, and Live Migration traffic (see the sketch after this list).
    • You cannot mix MTU 1500 and MTU 9000 (Jumbo Frames).
    • You cannot set different vNIC policies within UCS.
    • UCS will handle all the traffic equally, because it all matches the same QoS policy on UCS.
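
    To see this on a host, you can compare RSS and VMQ state per adapter. A small sketch with assumed adapter names; host vNICs behind a vSwitch rely on VMQ, while dedicated vNICs can keep RSS:

    Get-NetAdapterRss | Format-Table Name, Enabled, Profile, MaxProcessors
    Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues

    # Spreading VMQ queues across cores helps, but a single queue still pins a
    # host vNIC's receive traffic to one core, which is where the few-Gbps ceiling comes from.
    Set-NetAdapterVmq -Name "VM-Ethernet" -BaseProcessorNumber 2 -MaxProcessors 8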


    There is only one downside to mention with converged networking on UCS in combination with virtualization. Cisco offers QoS out of the box and it works perfectly. But Windows thinks it has 10 GbE for each interface while it actually doesn't; the bandwidth is shared. This is not a big deal for Windows, but if you mix QoS on UCS with QoS on Hyper-V (via SCVMM), things can get weird or complicated, and QoS is suddenly not what it seems to be.

    If you use a minimal number of vNICs with exactly the bandwidth available to your UCS server, QoS on Hyper-V would match perfectly. Do you see what I mean?
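
    If you do keep some QoS on the Hyper-V side, relative weights tend to stay meaningful even though the underlying UCS vNIC bandwidth is shared. A hedged sketch with illustrative names and weights (the switch, adapter, and VM names are placeholders):

    # Weight-based QoS on the vSwitch that sits on the VM-Ethernet vNIC.
    New-VMSwitch -Name "VMSwitch" -NetAdapterName "VM-Ethernet" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Default weight for ordinary VMs, a higher weight for a critical workload.
    Set-VMSwitch -Name "VMSwitch" -DefaultFlowMinimumBandwidthWeight 50
    Set-VMNetworkAdapter -VMName "SQL01" -MinimumBandwidthWeight 80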

    Why, then, should you prefer converged networking on UCS instead of on Hyper-V? These could be the reasons:

    • Cisco UCS presents the OS with a vNIC that is fine-tuned with the right settings and policy for each purpose, whether the vNIC is used for Management, Live Migration, or a Hyper-V vSwitch. This offers optimal performance!
    • NICs used for iSCSI, Management, Cluster, and Live Migration can use RSS and use the full bandwidth.
    • You don't have to worry about dependencies or misconfiguration of converged networking in Windows. You also avoid the chicken-and-egg situation when configuring converged networking with SCVMM, where, for example, your management interface may become unavailable when a NIC team is applied.
    • Your NIC configuration is much simpler.
    • You have VMQ available on a vNIC with the correct settings and plenty of queues.
    • You have granular QoS for each vNIC.


    ...and again, I could continue for hours. The bottom line is: why not let Cisco UCS provide the vNICs and keep it simple for Hyper-V and SCVMM?

    I hope this information makes sense to you. If you have any questions, let me know. And if you are interested, you can always read my blog series:

    My first FlexPod! (Part 1 – Introduction)
    http://www.boudewijnplomp.nl/2014/03/my-first-flexpod-part-1/


    Boudewijn Plomp | ITON Consultancy


    Sunday, June 21, 2015 9:48 PM
  • "With Cisco UCS we now have 10 GB networking "  Actually, if you are using the latest components and are properly configured, you have 20 Gbps per vNIC defined in your service profile.

    "We use boot from SAN for Stateless Computing. We don't have FC or FCoE, we use iSCSI all over the environment."  May want to consider this KB article that Microsoft just published.  https://support.microsoft.com/en-us/kb/2969306?wa=wsignin1.0  "The Hyper-V virtual switch is not supported when you start Windows Server 2012 or Windows Server 2012 R2 from an iSCSI boot disk. This is true even when the external network adapter is not involved in the iSCSI boot feature."

    Cisco has published Validated Designs for Hyper-V solutions going back to 2008, so they exist for 2008, 2008 R2, 2012, and 2012 R2: www.cisco.com/go/cvd. These provide full step-by-step instructions for configuring UCS, networking, and storage for a Microsoft IaaS solution.


    . : | : . : | : . tim

    Monday, June 22, 2015 9:55 PM

All replies

  • Hi Sir,

    >> If I am understanding the documentation right, I can carve each NIC up into smaller vNICs and prioritize traffic

    If possible, please share this documentation with us.

    Best Regards,

    Elton Ji



    Wednesday, June 17, 2015 7:16 AM
    Moderator
  • Wow

    Very, very helpful. I do have a question.

    When you said you cannot team NICs together, does that include the new Windows switch-independent teaming? If teaming is not present, how does a NIC fail over to the other fabric? For example, I would have vNIC A and vNIC B for management; I can't believe each NIC would get a unique IP. When you check the box to enable failover, does it then present them as a single NIC?

    Another question: I keep reading about FEX. Is that determined by which NIC you have in your servers, or is it some sort of external device? On another forum someone mentioned uplinking them as if they were an actual device.

    Thanks again

    Monday, June 22, 2015 3:59 PM
  • When you said you cannot team NICs together, does that include the new Windows switch-independent teaming? If teaming is not present, how does a NIC fail over to the other fabric? For example, I would have vNIC A and vNIC B for management; I can't believe each NIC would get a unique IP. When you check the box to enable failover, does it then present them as a single NIC?

    NIC teaming is supported as usual. You can use NIC teaming from Windows, and the Cisco VIC drivers even include a NIC teaming feature. But you have to think about how the network traffic flows. It is a bit off-topic from your post, but I'll try to explain...

    Fabric Failover (FF)

    The FIs (Fabric Interconnects) are connected with 2 x 1 GbE, but that is not a data plane, just a management plane, which means network traffic cannot flow between the FIs without an uplink switch. When you create a vNIC in UCS you have to connect it to Fabric A or B, which acts as the active link. Optionally, you can enable FF. Suppose, for example, you create two vNICs with FF enabled. FF is an active/passive solution: the operating system is presented with just a single NIC and has no knowledge of FF. With FF, if an FI or an uplink switch becomes unavailable, the vNIC automatically fails over to the other fabric (e.g., from Fabric A to B), just like a switch, and the operating system does not notice anything at all. So your vNIC is redundant and you don't have to configure anything in the operating system. This is not a downside; it is simply how the architecture is designed.

    UCS administrators use different setups for different scenarios. Some connect all network-related vNICs to Fabric A and storage-related vNICs to Fabric B, to use the available bandwidth optimally. And you can mix it as well, depending on your scenario. It's up to you how you want it.

    Now suppose you don't want to use FF, but NIC teaming instead. Then you have to create two vNICs, the first one connected to Fabric A and the second one connected to Fabric B, and you configure NIC teaming out of the box in Windows just the way you want. The thing is, with NIC teaming you don't know on which vNIC the outbound or inbound traffic flows. Suppose you communicate from UCS server A to UCS server B, both with NIC teaming enabled. You cannot be sure that they both use Fabric A to communicate. If server A goes outbound on Fabric A and tries to reach server B on Fabric B, the traffic needs an uplink switch. It will work, but the path is not optimal. I suggest you look at the following video:

    Cisco UCS Networking, Fabric Failover with Hyper-V and bare metal OS
    https://www.youtube.com/watch?v=OTp2HJaK09k

    To answer your question: yes, NIC teaming is supported. But you should forget about NIC teaming when the NICs are connected via the Fabric Interconnects. Think of FF instead: a single NIC presented to the operating system, with fault-tolerant capabilities.
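
    For completeness, this is roughly what the Windows switch-independent team you mention would look like if you built it from one vNIC per fabric. It is a sketch with placeholder names, shown as the alternative being discussed here rather than the recommended design on UCS:

    # Team one vNIC pinned to Fabric A with one pinned to Fabric B.
    New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Mgmt-FabricA","Mgmt-FabricB" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

    # The team surfaces as a single adapter, so it gets the single management IP.
    New-NetIPAddress -InterfaceAlias "MgmtTeam" -IPAddress 10.0.10.21 -PrefixLength 24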

    Another question: I keep reading about FEX. Is that determined by which NIC you have in your servers, or is it some sort of external device? On another forum someone mentioned uplinking them as if they were an actual device.

    Thanks again

    Hard to tell what you have read. FEX stands for Fabric Extender, but the term is used in many forms; a FEX is sometimes known as an I/O Module, and it can be hardware or software. Overall, think of it like this: if you need more ports on a switch, you add and interconnect another switch, right? That adds another device to manage. Some devices (e.g., switches or Fabric Interconnects) allow you to connect a FEX instead. In that case the FEX is connected externally and is nothing more than an extension for more ports, a device you don't have to manage at all.

    I do have a question. Do you have UCS already, or are you going to use it? If so, I recommend you read my blog series.


    Boudewijn Plomp | ITON Consultancy


    Monday, June 22, 2015 6:44 PM
  • "With Cisco UCS we now have 10 GB networking "  Actually, if you are using the latest components and are properly configured, you have 20 Gbps per vNIC defined in your service profile.

    "We use boot from SAN for Stateless Computing. We don't have FC or FCoE, we use iSCSI all over the environment."  May want to consider this KB article that Microsoft just published.  https://support.microsoft.com/en-us/kb/2969306?wa=wsignin1.0  "The Hyper-V virtual switch is not supported when you start Windows Server 2012 or Windows Server 2012 R2 from an iSCSI boot disk. This is true even when the external network adapter is not involved in the iSCSI boot feature."

    Cisco has published Validated Designs for Hyper-V solutions going back to 2008.  So they exist for 2008, 2008 R2, 2012, and 2012 R2.  www.cisco.com/go/cvd  Full step-by-step instructions for configuring UCS, networking, and storage for a Microsoft IaaS solution.


    . : | : . : | : . tim

    Monday, June 22, 2015 9:55 PM
  • Hi Sir,

    Is there any update?

    Best Regards,

    Elton Ji



    Sunday, June 28, 2015 3:18 PM
    Moderator
  • Hi guys,

    Does anyone have an update on the Microsoft article mentioned above by Tim?

    I can see a lot of posts saying that iSCSI boot has been configured with Hyper-V 2012, but I am just trying to figure out whether there is any workaround for the virtual switch issue.

    Kind regards,

    Shahin

    Friday, February 24, 2017 1:33 AM
  • Note that in the referenced article (updated in August 2016), Microsoft has fixed this issue for Windows Server 2016. It is still an issue with earlier versions.

    . : | : . : | : . tim

    Friday, February 24, 2017 1:44 PM