Best multicast settings for WDS Server 2012 R2 Multicast

    General discussion

  • I've read a hundred forum posts everywhere about slow multicast speeds with WDS, but none of them seem to fix my issue. I'm deploying an installation with WDS using multicast, and what I'm seeing is a nonzero number of NACK packets. I've also installed Wireshark and captured packets from the multicast session, and I'm seeing fragmented UDP packets.

    Referenced registry settings are located at:

    HKLM\System\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol

    I have an IGMP v3 router (querier) set up on my server, and I have apBlockSize set to 1472. I've also tried setting apBlockSize to 1385. Neither of these eliminates the NACK packets; however, 1472 seems to produce the best results. Both my host and client have Gigabit NICs, and both NICs are Intel 82579LM. The host is Server 2012 R2 Datacenter and the client is the boot.wim from the Windows 8.1 install ISO.
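For anyone following along, the registry values above can be inspected and changed from an elevated command prompt. This is a sketch, not an official procedure; it assumes the values are REG_DWORDs under the path quoted earlier and that the WDSServer service must be restarted for changes to take effect:

```shell
:: Show the current WDS multicast protocol settings
reg query "HKLM\System\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol"

:: Set apBlockSize to 1385 (REG_DWORD); substitute whatever value you are testing
reg add "HKLM\System\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol" /v ApBlockSize /t REG_DWORD /d 1385 /f

:: Restart WDS so the new value takes effect
net stop WDSServer
net start WDSServer
```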

    In Wireshark I'm seeing UDP packets of size 1579 bytes (8-byte UDP header and 1571 bytes of data). Additionally, the IPv4 header is 20 bytes, but this isn't factored into the UDP packet size of 1579. With that said, what should my apBlockSize be for best performance and minimal NACK packets?
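For reference, the arithmetic behind the fragmentation can be sketched as follows. The ~99 bytes of per-datagram overhead is inferred from the capture numbers above, not from any documented figure, so treat the suggested value as an estimate:

```python
# Standard Ethernet payload budget
MTU = 1500          # bytes available for the IP packet on the wire
IP_HEADER = 20      # IPv4 header, no options
UDP_HEADER = 8      # UDP header

# Largest UDP payload that fits in one frame without fragmentation
max_udp_payload = MTU - IP_HEADER - UDP_HEADER  # 1472

# Observed in the capture: 1571 bytes of UDP payload with apBlockSize = 1472,
# so the WDS multicast protocol appears to add this much overhead per datagram
ap_block_size = 1472
observed_payload = 1571
wds_overhead = observed_payload - ap_block_size  # 99

# A block size at or below this should keep each datagram in a single frame
suggested_block = max_udp_payload - wds_overhead
print(suggested_block)
```

In other words, with apBlockSize at 1472 each datagram is 1571 + 8 + 20 = 1599 bytes of IP traffic, which exceeds the 1500-byte MTU and forces fragmentation; a value near 1373 or below would avoid it (assuming the overhead stays constant).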

    One more thing: does anyone know what TpCacheSize affects? Should I change it from the default of 1190, or is it fine the way it is? What about TpMaxWindowSize and TpExpWindowSize?

    Thanks for the help.


    Thursday, October 24, 2013 1:46 AM

All replies

  • Hi

    I would check at the switch level; not all switches handle multicast correctly (for an example doc, see the Multicast Catalyst Switches Support Matrix).

    I've seen low-end switches that block a lot of multicast (the switch thinks it's a DoS attack, so it drops much of the traffic).

    Also, check that flow control is turned off at the NIC level on both endpoints.


    Regards, Philippe

    Thursday, October 24, 2013 2:14 AM
  • At this point I'm not using a switch; I have the host and client connected directly, just for testing. I have a Cisco switch already set up with IGMP snooping that will be used later on.
    Thursday, October 24, 2013 3:25 PM
  • No response to this? My deployment starts out all right, getting 28,000+ KBps. Then something happens in the middle: I start getting a high number of NACKs, and my transfer speed drops way down. Does anyone know what would cause this?
    Monday, November 04, 2013 9:05 PM