WDS on Server 2008 R2: multicast deployments slower than expected

    Question

  • WDS on Server 2008 R2: multicast deployments slower than expected

    I have recently been testing/evaluating the latest release of Windows Deployment Services, which ships with Windows Server 2008 R2. However, I have noticed that when multicasting an image, the image is only transmitted to the clients at approximately 20 Mbps (20% network utilization).

    We currently have a number of other WDS servers in our organisation that are running Windows Deployment Services on Server 2008 (not R2), and when multicasting an image to clients from these servers the image is deployed at around 80-90 Mbps (80-90% network utilisation).

    WDS on the new server has been configured to use a single stream for image transfer; I have also tried using multiple streams and am getting the same results (20% network utilisation). The server is connected to the network with a 1 Gbps connection and the clients are connected with a 100 Mbps connection.

    Can anyone shed any light on why I am not getting the expected multicast speeds with the new version of WDS?

    Please note: we are using the latest boot.wim from the Windows 7 installation media. However, I have also tried using the old Server 2008 boot.wim (from the non-R2 media) that is currently in use on our live production servers and has been proven to multicast images at around 80-90% network utilisation. But again, the multicast deployment still seems to run at about 20 Mbps (20% network utilisation).

    I have also noticed that in the WDS server properties, on the Network tab, the network profile radio buttons are greyed out and do not allow you to set any sort of speed/throttling for WDS. Is this a bug in WDS, or is it by design? I find it hard to believe that it is by design; why wouldn't the buttons simply have been removed if they are not meant to be there?

    The server hardware details are as follows:

    Intel DQ45CB chipset
    Processor: Intel Core 2 Quad Q9550 (2.83 GHz, 1333 MHz front-side bus)
    4 GB RAM
    Windows Server 2008 R2 Enterprise 64-bit
    1 Gbps network connection

    The hardware details of the client are as follows:

    Intel DQ35JOE chipset
    Processor: Intel Core 2 Duo E6320 @ 1.86 GHz
    2 GB RAM
    100 Mbps network connection

    Any help would be greatly appreciated; if any further information is required, please let me know.

    Kind Regards,
    Matthew

    Wednesday, February 03, 2010 9:03 AM


All replies

  • Hi Matthew,

    Maybe there are some slow clients on the network. I suggest the following test:

    Open the WDS server properties and, on the Multicast tab, select “Automatically disconnect clients below this speed (in KBps):”, type 50, and click OK.

    What’s the result?

    Related information:

    Q: My multicast transmissions are running very slowly.

    A: One typical cause of this occurs in environments that contain computers with different hardware configurations and architectures. In this case, some clients can run multicast transmissions faster than others. Because each transmission can run only as fast as the slowest client, the entire transmission will be slow if there is one slow client. To resolve this issue, first determine the client that is holding back the transmission (this is called the master client). To do this, view the output of the following command: WDSUTIL /Get-AllMulticastTransmissions /Show:Clients. Next, disconnect the master client using WDSUTIL /Disconnect-Client /ID:<ID>, where ID is the client ID (which you can get using the /get-transmission option).


    Disconnecting the master client will force it to run the transmission by using the Server Message Block (SMB) protocol, and the other clients' multicast performance should speed up. If they do not speed up, there is a problem with the client's hardware (for example, a slow hard drive) or a network problem.  
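
    For reference, a sketch of that command sequence as typed at an elevated command prompt on the WDS server (<ID> is a placeholder for the client ID reported by the first command):

        REM List all multicast transmissions and their connected clients
        WDSUTIL /Get-AllMulticastTransmissions /Show:Clients

        REM Disconnect the slow (master) client identified above
        WDSUTIL /Disconnect-Client /ID:<ID>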

    Hope it helps.

    Tim Quan - MSFT

    Thursday, February 04, 2010 7:57 AM
    Moderator
  • Can anyone shed any light on why I am not getting the expected multicast speeds with the new version of WDS?


    The performance should be as good as or better in 2008 R2 than in 2008. Are you attempting this in the same environment, with the same clients, where you saw 80-90% network utilization? What do the WDS multicast performance counters look like (a large number of NACKs, etc.)?

    I have also noticed that in the WDS server properties, on the Network tab, the network profile radio buttons are greyed out and do not allow you to set any sort of speed/throttling for WDS. Is this a bug in WDS, or is it by design? I find it hard to believe that it is by design; why wouldn't the buttons simply have been removed if they are not meant to be there?

    These buttons are greyed out in Server 2008 R2 unless you're remotely managing a Server 2008 WDS server. For R2, we dynamically determine the settings that were previously set by these radio buttons. If you need to limit the amount of bandwidth that the server will use, open HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol, set TpMaxBandwidth to a value less than 100 (percent), and restart the WDS service.
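
    A minimal sketch of that change from an elevated command prompt (assuming, per the description above, that TpMaxBandwidth is a DWORD holding a percentage; the value 80 here is purely illustrative):

        REM Cap the WDS multicast server at 80% of available bandwidth (illustrative value)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol" /v TpMaxBandwidth /t REG_DWORD /d 80 /f

        REM Restart the WDS service so the new value is picked up
        net stop WDSServer
        net start WDSServer
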
    Friday, February 05, 2010 12:49 AM
  • Hi Tim & Aaron,

    Sorry for not getting back to you sooner; I have been away from the office.

    I have done some further testing with regard to the issue outlined above. I can confirm that the performance is NOT as good or better in 2008 R2 as it was with Server 2008. I can confirm that I am doing the testing in the same environment, with the same clients, where I was seeing the 80-90% network utilization. I have even tried a multicast deployment with a single client: with 2008 R2 I am getting a transfer rate of around 30 Mbps, but if I switch to the Server 2008 WDS server I am getting a transfer rate of around 80-90 Mbps.

    I have also done some testing by connecting the server and the client to the network at different speeds to see if this has an effect on the transfer rate, using the same-spec server and client in the same environment in which I was getting 80-90 Mbps transfer rates with the previous version of WDS. The results are as follows:

    Test #1
    Server connected at 1 Gbps
    Client connected at 100 Mbps
    Multicast transfer rate = 20 Mbps (20% of maximum available)

    Test #2
    Server connected at 100 Mbps
    Client connected at 100 Mbps
    Multicast transfer rate = 50 Mbps (50% of maximum available)

    Test #3
    Server connected at 1 Gbps
    Client connected at 1 Gbps
    Multicast transfer rate = 300 Mbps (30% of maximum available)

    It seems strange that varying the server and client connection speeds gives such different utilization of the available bandwidth.

    Test #1 is the configuration that we will need to use, as we do not have the ability to connect all our clients to the network with 1 Gbps connections. You will probably agree that Test #1 is approximately 1/5 of the transfer rate we are currently getting with the previous version of WDS, which means that, as things stand, we would not want to upgrade all our live WDS servers to the new version, as deployments would take five times as long.

    Test #2: It also seems strange that I am getting a faster transfer rate (50 Mbps) when both the client and server have a 100 Mbps connection to the network, as opposed to Test #1 where the server had a 1 Gbps connection; however, this is still not the 80-90 Mbps that I am seeing with the previous version of WDS.

    Test #3 proves that the client, the server, and the environment are all capable of delivering data at a higher speed (300 Mbps); however, this is still only 30% utilization of the maximum available.

    Why am I not seeing multicast transfer rates of around 80-90 Mbps (80-90% of the maximum available), as I was with the previous version of WDS, using the same hardware in the same environment? Test #3 has already proven that they are all capable of these higher transfer rates.


    Your help is greatly appreciated.

    Kind Regards,

    Matthew

    Wednesday, February 10, 2010 9:19 AM
  • On Server 2008, in the same setup as Test #1 and with the network profile set to 1 Gbps, do you still get 80-90 Mbps?
    Wednesday, February 10, 2010 10:28 PM
  • Hi Aaron, I will test this for you as soon as I get the chance.

    Cheers, Matt
    Friday, February 12, 2010 8:54 AM
  • Hi Aaron,

    As requested, I have carried out the same test as Test #1 above on Server 2008 (not R2), and the result is as follows:

    Server connected at 1 Gbps
    Client connected at 100 Mbps
    Network profile in WDS = 1 Gbps
    Multicast transfer rate = 95 Mbps

    I also tested with the network profile in WDS set to 100 Mbps (our current setup) and the transfer rate was 80-82 Mbps.

    Hope this helps,

    Matt
    Friday, February 26, 2010 10:17 AM
  • Hi,

    Sorry to bump this; it's marked as answered by Tim, but I notice that the marked answer doesn't actually address your specific problem at all. We are having the same issue. I was wondering if you managed to resolve it in any way?

    Thanks.
    Friday, February 26, 2010 4:01 PM
  • Diagnosing these kinds of problems is hard over the forums, so if you have a support agreement and an issue opened on this, feel free to ask the support engineer to contact me.

    Do you notice anything different or interesting when you compare the performance counters for the WDS multicast server between the slow case and the fast case?

    Friday, February 26, 2010 9:40 PM
  • Hi MrBeatnik,

    Thank you for the information you have provided in your thread; it has enabled me to also resolve the issue of slow multicast speeds using WDS on Server 2008 R2.

    I also set ApBlockSize to 1482 and can confirm that after doing this, with the server connected at 1 Gbps and the client at 100 Mbps, I was getting multicast deployment speeds of around 80 Mbps.

    I have, however, since set the value to 1024, as advised here (it is recommended to set this value in 4-KB increments, for example 1 KB (1024 bytes), 4 KB (4096), 8 KB (8192), 16 KB (16384)).
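
    For reference, a sketch of that registry change from an elevated command prompt (assuming ApBlockSize is a DWORD under the same WDSMC\Protocol key Aaron pointed to earlier; verify the value name in your own registry):

        REM Set the multicast block size to 1024 bytes, then restart WDS as described earlier
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol" /v ApBlockSize /t REG_DWORD /d 1024 /f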

    I can now confirm that I am getting multicast deployment speeds of around 95-100 Mbps (100% network utilization on the clients), which I assume is the maximum speed I will ever get.

    Thank you for your assistance, and I hope Microsoft acknowledges this as an issue and fixes it.

    Cheers, Matt

    Wednesday, March 24, 2010 1:34 PM
  • If you are only interested in the settings I have found to work well, they are tpExpWindowSize = 2, tpMaxWindowSize = 8, and apBlockSize at its default of 2251 (hex).
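
    A minimal sketch of applying the two window-size values from an elevated command prompt (assuming they are DWORDs under the same WDSMC\Protocol registry key discussed earlier in this thread; verify the value names in your own registry before changing anything):

        REM Shrink the multicast window sizes, then restart WDS as described earlier
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol" /v TpExpWindowSize /t REG_DWORD /d 2 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol" /v TpMaxWindowSize /t REG_DWORD /d 8 /f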


    If you want to know how I came up with these numbers, read on.

    OK, like many others, I have had a lot of issues with multicast from 1 Gbps servers to 100 Mbps clients with WDS on Server 2008 R2. After a lot of testing and research, I have identified an alternative to what many other people have suggested. All of the solutions involve modifying a combination of the following settings: tpExpWindowSize, tpMaxWindowSize, and apBlockSize.

    Generally, people have recommended reducing apBlockSize to something smaller than the MTU of the network equipment. We found that the suggested value of 1024 generally works well when multicasting from a 1 Gbps server to a 100 Mbps client, but performance suffered greatly when multicasting 1 Gbps to 1 Gbps. This seemed to be caused by high CPU load on the server. We also noticed that with an apBlockSize of 1123 we got no NACKs while multicasting, but with a setting of 1124 we did. Since the MTU of Ethernet would indicate that a block size of 1385 should work with no fragmentation, this seemed strange. The answer lies in the window size in use during multicast.

    This prompted me to look at the other settings (tpExpWindowSize and tpMaxWindowSize). Since Microsoft really doesn't provide any documentation about the large number of settings, I made the assumption that Exp indicated the minimum window size and Max the maximum. The defaults for these are 8 and 64 respectively. In other words, with the default settings, a minimum of 8 packets, each of which gets split into 6 fragments, would have to be reassembled by the clients before deciding whether a NACK should be sent. With a 1 Gbps server and a 100 Mbps client, the server is going to be able to flood the network with packets much faster than the client can pick them up, which results in a lot of errors. Lowering these settings resolves the issue without having to change the apBlockSize, and performance is good for both 100 Mbps clients and 1 Gbps clients.
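
    The "6 fragments" figure can be sanity-checked with a little arithmetic (assumptions: the default apBlockSize of 2251 hex is 8785 bytes, and one Ethernet frame carries roughly 1472 bytes of UDP payload, i.e. a 1500-byte MTU minus 20 bytes of IP header and 8 bytes of UDP header):

        REM Back-of-the-envelope check of the fragment count per block
        REM (assumes ~1472 bytes of usable UDP payload per frame)
        set /a blockSize=0x2251
        set /a payload=1500-20-8
        set /a fragments=(blockSize+payload-1)/payload
        echo A %blockSize%-byte block splits into %fragments% fragments (expect 6)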

    • Proposed as answer by MrBeatnik Wednesday, August 03, 2011 9:35 AM
    Saturday, March 26, 2011 10:04 PM
  • I am experimenting with these settings as well. I am finding that setting tpMaxWindowSize to a lower value, currently 32, increases the multicast transfer speed from 200-300 KBps to 1500-1900 KBps. I still think this is extremely slow for a 1 Gbps server to a 1 Gbps client, but I'm not sure what to expect with multicasting.

    I've seen speeds of up to 2300 KBps, but it bounces around so much that there just isn't any consistency in transfer rates.

    I'll post back if I find anything more than what is in this post.

    Wednesday, May 04, 2011 2:36 AM
  • I have gone through a lot of this in the past month, setting up a new deployment area for the school system I work for. The issue you are having sounds like it could be related to IGMP not being set up correctly on the switch. I could be wrong, but I experienced a similar issue using ProCurve switches, and I had to enable IGMP and the IGMP querier on the switch we used for imaging before it would function properly.

    It is also helpful to know, if you are using HP switches, that they for some reason exclude some multicast IP ranges, so instead of using the default WDS IP range of 239.0.0.1-239.0.0.254 you might try using 239.0.1.1-239.0.1.254. The ranges of IP addresses they exclude are in the documentation for their switches, but it took me a while to find this, so for a while it was slowing down my transmissions.


    Monday, June 27, 2011 2:30 AM
    I did find a resolution to my issue, at least. I moved everything to a different server and speeds increased to anywhere from 13000 KBps to 36000 KBps without the need to adjust any registry settings. I also ran into an issue where my multicast transmissions would slow down after several weeks of use. It turns out that removing the WDS role completely and then re-adding it fixed the speed issue. I don't know if it was because I had created a number of boot images and imported/deleted many of them as I was updating/adding new features, applications, and drivers to my MDT deployment share, but that seemed to do the trick.
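
    If it helps anyone, a rough sketch of removing and re-adding the role on Server 2008 R2 from an elevated command prompt (ServerManagerCmd is deprecated on R2 but still present; "WDS" is assumed to be the feature ID, so confirm it first with ServerManagerCmd -query):

        REM Remove and re-add the Windows Deployment Services role
        REM (feature ID assumed; check ServerManagerCmd -query first)
        ServerManagerCmd -remove WDS
        ServerManagerCmd -install WDS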

    Just an FYI.

    Mike

    Wednesday, July 06, 2011 7:55 PM