2012 R2 Live Migration - NIC teaming or SMB Multichannel?

    Question

  • I'm building a Hyper-V cluster with four nodes and have three 1 Gbps physical NIC ports available per node for live migration. How would you set it up? Team them with LACP within Windows, or use SMB Multichannel?

    The way I see it, LACP has the advantage of being old and proven, but SMB Multichannel is more resilient to switch failure since I can distribute connections over several switches. Both points are more nice-to-have than need-to-have. The important thing is really performance, and on that point I have no idea :-)

    Monday, May 26, 2014 12:57 PM

Answers

  • Hi,

    For live migration in 2012 R2 you have three options: the old style from 2012 (a plain TCP/IP session), Compression, and SMB.

    Using SMB will use all three NICs even for a single live migration. Multichannel is a feature for scaling within a 10 Gbps port and also across multiple NICs; SMB 3 without Multichannel cannot use the full speed of a 10 Gbps port.

    So if you configure the three ports with IPs in different subnets, you get the combined speed of all ports for a single live migration. Separate subnets are a must on a cluster!

    For 1 Gbps ports, Compression is also a valid option and the default after setup, but it needs CPU cycles, so you need to know your environment.

    So check what is better for your needs.
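
    As a rough illustration of how you switch between those modes, here is a minimal PowerShell sketch (run per Hyper-V host). The option values TCPIP, Compression and SMB are the built-in ones; the migration limit is just an example value:

        # Make sure live migration is enabled on this host.
        Enable-VMMigration

        # Option A: Compression (the 2012 R2 default) - trades CPU cycles for throughput on 1 Gbps links.
        Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

        # Option B: SMB - lets SMB Multichannel spread one migration across all three NICs,
        # provided each NIC sits in its own subnet as described above.
        Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

        # Allow a couple of simultaneous migrations when draining a node (example value).
        Set-VMHost -MaximumVirtualMachineMigrations 2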

    Does that help?

    Udo

    Monday, May 26, 2014 8:07 PM
  • So, Jorgen, you have all the choices:

    Use Compression with free CPU cycles, or SMB 3 with Multichannel, which is also a good option without RDMA cards. What matters to me is that you understand how the two options work and then test what is best for you.
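
    If you go the SMB route, one way to sanity-check that Multichannel really kicks in during a test migration is to watch the SMB connections while the migration runs; a small sketch using the inbox SmbShare cmdlets (nothing here is specific to your setup):

        # Multichannel is on by default in 2012 R2, but confirm on both ends.
        Get-SmbClientConfiguration | Select-Object EnableMultiChannel
        Get-SmbServerConfiguration | Select-Object EnableMultiChannel

        # While a test live migration is running, you should see the connection
        # spread across one entry per migration NIC.
        Get-SmbMultichannelConnection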

    But as Ryan said, it is always good to know the complete setup of the host/cluster system, the networking design, and the storage protocol used.

    If you use iSCSI or FC for storage, make sure you also understand the concept of Redirected Mode and when it is used; this feature also uses the SMB 3 protocol to access your VM storage through another node.

    So tell us all the details about your design.

    Udo

    Monday, May 26, 2014 9:34 PM

All replies

  • Use SMB Multichannel for two NICs and dedicate those to VM traffic, then dedicate one NIC to live migration traffic. Or, if your VM network needs aren't going to exceed 1 Gbps and you'd rather have faster live migration, reverse the above. Live migration can utilize SMB Multichannel: http://www.aidanfinn.com/?p=14907
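
    For reference, pinning live migration to dedicated networks is a per-host setting; a hedged sketch, where the subnets are placeholders for whatever you assign to the migration ports:

        # Use only the listed networks for live migration, in priority order.
        Set-VMHost -UseAnyNetworkForMigration $false
        Add-VMMigrationNetwork 10.0.1.0/24 -Priority 1
        Add-VMMigrationNetwork 10.0.2.0/24 -Priority 2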
    Monday, May 26, 2014 3:58 PM
  • Thanks for your reply, Matt. I guess I was a bit unclear about the network layout. All other needs are already covered; the three remaining NICs are all for live migration. My usage scenario for live migration is almost always "drain the host for some maintenance", so performance is good to have.
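
    For what it's worth, the drain step itself is a one-liner from the FailoverClusters module; a small sketch, with the node name as a placeholder - the open question is how fast the migrations behind it run:

        # Move all roles off this node before maintenance, then bring it back afterwards.
        Suspend-ClusterNode -Name "HV-NODE1" -Drain -Wait
        Resume-ClusterNode -Name "HV-NODE1" -Failback Immediate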

    I've actually read Aidan's post, but I found it a bit unclear. I always thought he referred to a single-NIC setup. Re-reading it now, I see it also covers multiple 1 Gbps NICs. Does this mean Hyper-V will transparently utilize several NICs on the same subnet without teaming? I thought you needed SMB 3 for that.

    Monday, May 26, 2014 6:38 PM
  • I do believe it requires SMB 3.0 to transparently utilize multiple NICs for a single migration.

    I don't believe LACP will give you more than 1 Gbps of throughput anyway. Isn't it designed to scale for multiple streams? (i.e. with two 1 Gbps NICs in LACP, a single live migration can only achieve 1 Gbps, but two simultaneous migrations can achieve 2 Gbps?)
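
    To make the comparison concrete, the teaming alternative looks roughly like this; a sketch only, with made-up team and member names, and with LACP plus address/port hashing a single migration stream still lands on one team member:

        # Switch-dependent LACP team over the three migration ports (names are placeholders).
        # One TCP stream hashes to one member, so a single migration tops out at ~1 Gbps.
        New-NetLbfoTeam -Name "LM-Team" -TeamMembers "LM1","LM2","LM3" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts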

    Re-reading Aidan's post though, it looks like he only recommends SMB 3.0 if you have multiple 10 Gbps connections, although if SMB 3.0 can utilize Multichannel for a single live migration, I don't understand why you wouldn't use it.

    I'd be curious to hear someone with more expertise on SMB 3.0 chime in, as now I am not sure.


    Monday, May 26, 2014 6:47 PM
  • It's only worth using SMB if you've got RDMA NICs for SMB Direct. Three NICs is a lot for live migration, and you don't mention the rest of your setup. Are you using a converged network infrastructure?

    If you've not got RDMA then you'll be best off using TCP compression. It's adaptive based on the number of failed transmissions, number of retransmits, etc., so it'll give you the best bang for your buck.
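
    Whether SMB Direct is even on the table is quick to check with the inbox cmdlets; a small sketch, assuming nothing beyond the NetAdapter and SmbShare modules that ship with 2012 R2:

        # Adapters that expose RDMA, and whether it is enabled.
        Get-NetAdapterRdma

        # SMB's own view of the client NICs: the RSS-capable and RDMA-capable columns
        # show what Multichannel / SMB Direct can actually use.
        Get-SmbClientNetworkInterface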

    Monday, May 26, 2014 9:11 PM
  • What about with a 2012 R2 Teamed 10GigE pair? Just the default TCP/IP?
    Monday, May 26, 2014 9:37 PM
  • Hi Guys!

    Thanks for all the answers. I'm not quite sure I've got the full understanding just yet, but I'm getting there. I never picked up on the difference between the SMB and TCP/IP session options; I always figured the old way was SMB and the new one was SMB Multichannel. Thanks to Udo for clearing that up.

    The way I see it, I can go a number of ways:

    • Three NICs on the same subnet is no good
    • Team the three NICs in switch-dependent address hash mode and then use the team NIC, with or without compression
    • Three NICs on three subnets with compression (default)
    • Three NICs on three subnets with SMB (the subnet plumbing is sketched after this list)
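
    For the two "three subnets" options, the per-host plumbing is just a static address on each port; a minimal sketch, with interface aliases and addressing invented for illustration:

        # One subnet per migration port (aliases and addresses are placeholders).
        New-NetIPAddress -InterfaceAlias "LM1" -IPAddress 10.0.1.11 -PrefixLength 24
        New-NetIPAddress -InterfaceAlias "LM2" -IPAddress 10.0.2.11 -PrefixLength 24
        New-NetIPAddress -InterfaceAlias "LM3" -IPAddress 10.0.3.11 -PrefixLength 24

        # Keep the migration NICs out of DNS so name resolution stays on the management network.
        Set-DnsClient -InterfaceAlias "LM1","LM2","LM3" -RegisterThisConnectionsAddress $false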


    I guess I should give a bit more detail on my setup. As I mentioned, it's a four-node cluster. I'm going to use Storage Spaces/Scale-Out File Server (two head nodes) as shared storage. My infrastructure domain is separate from my virtual machine domain, on a dedicated subnet.

    My Hyper-V nodes have ten 1 GbE NICs (Intel i350):
    1 management
    1 cluster traffic
    3 storage
    3 live migration
    2 virtual machine traffic

    I think all bases are covered, and I like the idea of draining a host quickly, as it's usually a manual process that involves waiting for the operation to complete. Since I have three NICs to spare, I thought I might as well use them all for live migration :-)

    In both my reading and this thread I get both yes and no on using SMB, but never really why. Anyway, if SMB is not an option, then what about the other ones? What to choose...

    Tuesday, May 27, 2014 9:29 AM
  • It got awfully quiet in here :-)

    I guess I'll mark Udo's contributions as answers, but feel free to contribute if you have any opinions :-)

    Thursday, May 29, 2014 2:39 PM