Multiple NICs same subnet

    Question

  • My host server has 8 NICs as follows:

    2 on subnet 172.16.0.0 - domain network

    3 on subnet 192.168.130.0 - iSCSI controller 1

    3 on subnet 192.168.131.0 - iSCSI controller 2

     

    Only the first 172.16 NIC has a default gateway and registers in DNS.

     

    The purpose of the above is to allow virtual machines on the server (Hyper-V) to have their own dedicated iSCSI interfaces, and to spread the bandwidth required by several VMs over more than one physical Ethernet connection.

     

    What worries me a bit with this setup is how the host machine and its TCP/IP stack will cope with having multiple interfaces on the same subnet. What prompted me to pose this question is that the Server 2008 Failover Cluster validation wizard gave a warning (not a failure) that multiple NICs were on the same network.

    Sunday, October 12, 2008 3:29 PM

Answers

  • Multiple interfaces on the same subnet is not a problem.

     

    It is no different from setting up a Windows server with multi-homing (which is what we used to call placing a server on two subnets without enabling packet forwarding or routing).
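To make the multi-homing point concrete, you can observe which interface the stack would pick for a given destination. The following is a minimal Python sketch (an editorial illustration, not from the original thread, and not Windows-specific): a UDP connect() performs the route lookup without sending any packets, so getsockname() reveals the local address the OS selected.

```python
import socket

def source_ip_for(dest_ip: str) -> str:
    """Ask the OS which local address it would use to reach dest_ip.

    connect() on a UDP socket triggers a route lookup but sends no
    traffic, so this shows which of several NICs the stack selects
    on a multi-homed host.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, 9))  # 9 = discard port; nothing is sent
        return s.getsockname()[0]
    finally:
        s.close()
```

Run against an address on each of your subnets to see which NIC's address the stack chooses when several NICs sit on the same network.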

     

    As long as your extra interfaces do not have gateway addresses, you are fine.

     

    Now, I am guessing that you want to segment traffic. However, if the VMs are making the iSCSI connection from within the VM operating system, then the Host does not require an address on this interface (since it would be an External virtual switch, the manager gives it an address by default, but you really don't need it).
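The segmentation described above ultimately depends on the iSCSI initiator binding its connection to a specific local address, so the OS cannot silently pick a different same-subnet NIC. As an illustration only (the addresses are placeholders and a real initiator, such as the Microsoft iSCSI Initiator, handles this for you), a bound TCP connection looks like this in Python:

```python
import socket

def open_iscsi_session(local_ip: str, target_ip: str, target_port: int = 3260):
    """Connect to an iSCSI portal, pinning the connection to one local NIC.

    Binding the socket to a specific local address before connect()
    forces the OS to send from that interface, instead of letting the
    routing table choose among several same-subnet NICs.
    3260 is the standard iSCSI port; the IPs are hypothetical.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))  # port 0 = any ephemeral source port
    s.connect((target_ip, target_port))
    return s
```

Because the socket is bound before connect(), traffic leaves the chosen interface regardless of which NIC a plain route lookup would have preferred.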

     

    Failover Clustering will place the heartbeat on whatever subnet the two (or more) hosts can communicate with each other on.  If you have DNS on your domain network, then that is where Failover Clustering will place your interface.

     

    Hyper-V does not support true NIC teaming yet.  That would be where you can attach more than one physical NIC to a single External Virtual Switch.  The traditional type of NIC teaming is touch and go with hardware vendor drivers (a long story I can't go into here).

     

    Your concept looks good.  As long as the VMs make the iSCSI connection, you can disable the Host virtual NICs on the iSCSI network (but use the physical NIC with a virtual switch to get the VMs on the iSCSI subnet).

     

    If the Host is connecting to the iSCSI SAN, mounting a LUN, and using it as shared VM virtual disk storage, then your current configuration should be fine.

     

    If your Host is presenting an iSCSI volume as a Passthrough Disk to your VM, then you have a totally different issue if you use Failover Clustering.

     

    Monday, October 13, 2008 2:50 PM
    Moderator

All replies

  • Hi Tim,

     

    I would not use multiple network interfaces on the same subnet without teaming to tie them into a single virtual interface; load balancing is not automatic, and the stack would most likely use a single interface anyway.  I'm assuming these are Gigabit Ethernet connections, is that correct?  Look to the NIC manufacturer for a driver/utility for teaming.  Once you have that, you can try teaming.  I am not sure how teaming would affect the cluster, or the storage network for that matter, so make sure you do some testing.

     

    -matt

     

    Monday, October 13, 2008 12:31 PM
  • Thanks for your help.

    I'll report back my findings once I've got all the VMs up and running. Most will use VHDs off the host, but some will directly access the SAN using their own dedicated iSCSI NICs.

     

    My aim is to have a handful of VMs made HA using Failover Clustering in WS2008. In addition, I'm going to try to have a pair of Exchange servers on each node using SCC. I haven't decided yet whether the Exchange servers will sit on the hosts directly or be VMs on each host; if they are VMs, they will access the SAN directly using their own dedicated iSCSI NICs.

     

    Monday, October 13, 2008 5:13 PM