Random traffic on wrong NIC


  • Hello,

    I've already posted my issue here, but it does not seem to be linked to the AD DS configuration.

    Scenario of my issue:

    We have a lot of servers, so I will give an example for one server; however, the issue concerns my whole platform.

    SERVER01 has 2 NICs:

    - Ethernet1: x.x.229.140

    - Ethernet2: x.x.227.140

    Default gateway is on Ethernet1: x.x.229.1

    SERVER01 is a member of the AD domain.

    DC01 has 1 NIC:

    - Ethernet1: x.x.223.39

    Default gateway is on Ethernet1: x.x.223.1

    Here is the route table of SERVER01:

    ##### ROUTE TABLE #####

    Active Routes:
    Network Destination        Netmask          Gateway       Interface  Metric
              0.0.0.0          0.0.0.0        x.x.229.1      x.x.229.140    261
            x.x.223.0    255.255.255.0        x.x.227.1      x.x.227.140      6
            x.x.227.0    255.255.255.0          On-link      x.x.227.140    261
          x.x.227.140  255.255.255.255          On-link      x.x.227.140    261
          x.x.227.255  255.255.255.255          On-link      x.x.227.140    261
            x.x.229.0    255.255.255.0          On-link      x.x.229.140    261
          x.x.229.140  255.255.255.255          On-link      x.x.229.140    261
          x.x.229.255  255.255.255.255          On-link      x.x.229.140    261
            127.0.0.0        255.0.0.0          On-link        127.0.0.1    306
            127.0.0.1  255.255.255.255          On-link        127.0.0.1    306
      127.255.255.255  255.255.255.255          On-link        127.0.0.1    306
            224.0.0.0        240.0.0.0          On-link        127.0.0.1    306
            224.0.0.0        240.0.0.0          On-link      x.x.227.140    261
            224.0.0.0        240.0.0.0          On-link      x.x.229.140    261
      255.255.255.255  255.255.255.255          On-link        127.0.0.1    306
      255.255.255.255  255.255.255.255          On-link      x.x.227.140    261
      255.255.255.255  255.255.255.255          On-link      x.x.229.140    261

    Persistent Routes:
      Network Address          Netmask  Gateway Address  Metric
            x.x.223.0    255.255.255.0        x.x.227.1       1
              0.0.0.0          0.0.0.0        x.x.229.1  Default

    So, according to this route table, SERVER01 must use the Ethernet2 NIC (x.x.227.140) to reach the x.x.223.0 network, and therefore DC01.

    But randomly, SERVER01 uses the Ethernet1 NIC (x.x.229.140) to reach DC01. My firewall rule drops packets on this LAN (not on 227.0, of course).

    So SERVER01 has some latency reaching DC01.
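    The expected behavior can be sketched as ordinary longest-prefix-match routing with the metric as tie-break. This is a simplified, hypothetical model (not a Windows API), and the 10.7.x addresses are illustrative stand-ins for the redacted x.x prefixes:

    ```python
    import ipaddress

    # Simplified model of SERVER01's route table.
    # (destination,     interface,      metric) -- illustrative addresses only.
    ROUTES = [
        ("0.0.0.0/0",     "10.7.229.140", 261),  # default route via Ethernet1
        ("10.7.223.0/24", "10.7.227.140", 6),    # persistent route via Ethernet2
        ("10.7.227.0/24", "10.7.227.140", 261),  # on-link
        ("10.7.229.0/24", "10.7.229.140", 261),  # on-link
    ]

    def select_route(dest_ip: str):
        """Pick the most specific matching prefix; lowest metric breaks ties."""
        addr = ipaddress.ip_address(dest_ip)
        matches = [r for r in ROUTES if addr in ipaddress.ip_network(r[0])]
        return min(matches,
                   key=lambda r: (-ipaddress.ip_network(r[0]).prefixlen, r[2]))

    # Traffic to DC01 should match the /24 route and leave via Ethernet2:
    print(select_route("10.7.223.39")[1])   # -> 10.7.227.140
    # Anything without a more specific route falls back to Ethernet1:
    print(select_route("8.8.8.8")[1])       # -> 10.7.229.140
    ```

    Under this model, packets for DC01 should always leave via Ethernet2, which is what makes the observed behavior surprising.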

    I don't know how I can force SERVER01 to use Ethernet2. It's a very strange issue.

    If you have any ideas, you're welcome :)

    Thank you.


    Wednesday, December 18, 2013 1:26 PM

All replies

  • That is an odd setup. Normally the DC would not be reachable from the "outer" NIC of the router, because the router would be running NAT.

    Is there any reason why you could not run NAT? Do you really need your DC to be reachable from other subnets?



    Wednesday, December 18, 2013 11:10 PM
  • We have spread "web servers" across the x.x.229.0/24 network (production LAN). Each of them has a second NIC on x.x.227.0/24 for admin traffic (and so Active Directory traffic).
    DB servers are in another LAN, with a similar configuration (2 NICs: one for production, one for administration).

    All servers for "services" (NTP, AD, LDAP, DNS, vCenter, etc.) are in the services LAN.

    Do you think there is an issue with the DC if we don't use NAT?

    Also, if I disable Automatic metric on the "Admin NIC" (x.x.227.140/24) and set the value to "1", do you think this change can solve my issue?
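    One hedged observation on the metric question (a sketch under the usual longest-prefix-first assumption, with illustrative 10.7.x addresses): the metric only breaks ties between routes of equal prefix length, so the /24 route to the DC subnet already beats the /0 default regardless of metric, and lowering the Admin NIC metric may not change which route is selected:

    ```python
    import ipaddress

    # The metric is only a tie-breaker between equally specific routes.
    # Illustrative addresses; not the real (redacted) x.x prefixes.
    def pick(routes, dest_ip):
        addr = ipaddress.ip_address(dest_ip)
        hits = [r for r in routes if addr in ipaddress.ip_network(r[0])]
        # Longest prefix first; only then does the lower metric win.
        return min(hits, key=lambda r: (-ipaddress.ip_network(r[0]).prefixlen, r[2]))

    routes = [
        ("0.0.0.0/0",     "Ethernet1", 1),    # default route, artificially "best" metric
        ("10.7.223.0/24", "Ethernet2", 999),  # specific route, artificially "worst" metric
    ]
    # The /24 still wins despite its far worse metric:
    print(pick(routes, "10.7.223.39")[1])      # -> Ethernet2

    # Metrics only decide between equally specific routes:
    routes_tie = [
        ("10.7.223.0/24", "Ethernet1", 261),
        ("10.7.223.0/24", "Ethernet2", 6),
    ]
    print(pick(routes_tie, "10.7.223.39")[1])  # -> Ethernet2 (lower metric)
    ```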

    @Vegan Fanatic: because my firewall drops some of these packets, I guess my platform could have some latency.
    Thursday, December 19, 2013 9:33 AM
  • Hi,

    Is the issue resolved?

    I noticed that you have a private IP in your route table. I guess the IP of the DC you provided is a public IP, and I wonder if it also has an internal private IP. If so, that means the DC is multihomed, and this is not recommended.

    I would appreciate it if you could also provide your physical network topology.

    Wednesday, December 25, 2013 3:35 AM
  • Hi,

    The issue is not solved :/

    The IP of the DC is a local IP (and all servers are in the 10.7.x.x range).

    NAT would only be a "patch" for this issue. Here, servers should use only one NIC to reach the DC: Ethernet2 (on x.x.227.0/24).

    Is there maybe an issue in the network stack of Windows Server? I don't have this behavior with Linux servers (our SUSE servers are all joined to Active Directory via Samba).

    Thanks for your help.

    Tuesday, December 31, 2013 9:13 AM