My organization is preparing a standardized MDT implementation to replace our ad-hoc Ghost imaging, and I'm trying to decide on the OS to use for the central MDT server. We've had a test lab set up for the last couple of weeks, and I've had good success designing the solution around Server 2008 R2 / WADK (8.0) / MDT 2012 U1 running on an HP i5-based desktop. We've gotten the green light to go ahead with implementation and have ordered a new ProLiant server to host MDT.
I'm at the stage of ordering the OS licensing now, and the quote we got back from our vendor indicated that we would be purchasing a Server 2012 license. Rather than just use the downgrade rights (we have no 2012 in our environment), I decided to check and see if there were any role improvements we could benefit from.
http://technet.microsoft.com/en-us/library/hh974416.aspx mentions performance improvements for multicasting and PXE booting, and the native ability to run without AD authorization. All of these would be useful for our environment, so I decided to rebuild the test server with Server 2012 and see how it worked. I mirrored the configuration of the 2008 R2 test box, but have run into performance issues while multicasting to test clients.
For reference, the previous server was set up to run DHCP, DNS, SQL Express, WDS, RRAS (or whatever 2008 R2 called it) as LAN routing only (in order to multihome; we required 2 NICs attached to separate networks), and Hyper-V. MDT/WDS/DHCP/DNS/RRAS/SQL Express ran on the bare-metal install; Hyper-V hosted the virtual machine that serves as the "base image," with the intent that DHCP and DNS would eventually move to a VM once we implemented on server hardware. Virtual NICs were used for each physical NIC. With that setup and a Cisco gigabit managed switch, we would get around 25-35 MB/s multicast throughput, using 2-stream autocast (Slow/Fast). I figured this should be faster, but chalked it up to the fact that we were running it on desktop hardware. These numbers were after messing around with the apblocksize and max window values in the registry.
I did a clean install of Server 2012 today and replicated the setup, reimported the deployment share and database, and tried some deployments. On the exact same hardware, with the same network configuration, throughput dropped to a total of 20-22 MB/s for a single "Fast" client and 10 MB/s per stream when using 2-stream autocast. It also did not skip caching the WIM as was outlined in the WDS improvements article. I tried disabling the virtual NIC for the interface facing the imaging LAN, but there was no noticeable change in speeds (maybe an MB or two).
So, does anyone know if there is a way to enable the direct application of the WIM file as outlined in the WDS 2012 doc, and are there any suggestions for ways to improve performance specific to Server 2012? I'll get in and take a look at the apblocksize and related settings tomorrow, but I don't remember the out-of-the-box performance on Server 2008 R2 being this low.
As a note, I saw the reports of issues with SMBv3. We are deploying Windows 7, so I am not sure if that is the problem; I guess it's a possibility since the WIM transfer occurs within WinPE 4.0, which is based on Windows 8, but I am not sure whether that uses SMBv3 or not.
Thanks for any help or suggestions that you may have.
- Edited by gjezekiel Friday, September 20, 2013 12:05 PM
As an update to this, I tried with SMBv2 and SMBv3 disabled on the server, and there was no change.
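For anyone running the same test, this is roughly how SMBv2/3 can be toggled server-side; I'm going from memory of Microsoft's KB2696547 here (where the single SMB2 registry value is described as controlling both v2 and v3), so double-check before relying on it:

```shell
:: Rough sketch, per my reading of KB2696547 -- run elevated, reboot after.
:: Setting SMB2 to 0 disables both SMBv2 and SMBv3 on the server.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
    /v SMB2 /t REG_DWORD /d 0 /f

:: Set it back to 1 (and reboot) to re-enable when the test is done.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
    /v SMB2 /t REG_DWORD /d 1 /f
```

Remember to undo this when you're finished; leaving SMBv2/3 off forces everything onto SMBv1 and hurts normal file-share performance.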
Next, I configured the registry entries for apblocksize, tpexpwindowsize, and tpmaxwindowsize to match what I was using on the 2008 box (values per a post in this thread: http://social.technet.microsoft.com/Forums/windowsserver/en-US/a9e5291d-4665-4b33-9376-4fcd697f4975/wds-multicast-extremely-slow). Once that was done, multicast jumped back to the previous levels, which I think are as good as I am going to get with my test hardware.
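For reference, the edits look something like the following. The registry path is the one I believe WDS uses for its multicast protocol settings, and the data values here are placeholders for illustration only (the actual values came from the linked thread), so treat both as assumptions to verify on your own server:

```shell
:: Illustrative only: path from memory, data values are placeholders,
:: not the values actually used. Verify against your own WDS install.
set PROTO=HKLM\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSMC\Protocol
reg add "%PROTO%" /v ApBlockSize /t REG_DWORD /d 16384 /f
reg add "%PROTO%" /v TpExpWindowSize /t REG_DWORD /d 16 /f
reg add "%PROTO%" /v TpMaxWindowSize /t REG_DWORD /d 64 /f

:: Restart WDS so the new values take effect.
net stop wdsserver && net start wdsserver
```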
I also took a look at the direct image application feature in WDS 2012 and found that wdsmcast.exe has been updated with what I think is a new function, "/apply-customimage". Checking LTIApply.wsf, I can see that MDT is using "transfer-file". So, I guess work is required in LTIApply.wsf to allow it to use the new function for testing. Before I start trudging through the code, has anyone else worked on doing this?
I'll update this thread in case anyone else is searching for the same information I was.
So, I tried a manual multicast with /apply-customimage today, using the syntax outlined by the Server 2012 version of wdsmcast. I booted WinPE, mapped Z: to my deployment share, and ran the command. The usual application method was 3-4 minutes to multicast a 4.5 GB image, then 4-7 minutes to apply it. Using /apply-customimage, it was 4.5 minutes total. The multicast ran at the usual speed, and then a process ran at the end for about a minute and a half, but it didn't indicate what it was; I guess it could have been an integrity check of some sort.
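In case it helps anyone comparing the two approaches, here's roughly the shape of the existing transfer-file call (the one LTIApply.wsf issues today). The server name, namespace, and credentials below are made-up placeholders, and I'm deliberately not guessing at the /apply-customimage switches; get those from the tool's own help on a 2012-generation WinPE:

```shell
:: What MDT's LTIApply.wsf effectively does today: multicast the WIM to
:: local disk with transfer-file, then apply it in a separate step.
:: Server, namespace, and account names are placeholders.
wdsmcast.exe /transfer-file /server:MDTSRV01 /namespace:"DeployNS" ^
    /username:CONTOSO\deploy /password:* ^
    /sourcefile:Win7.wim /destinationfile:D:\Win7.wim

:: For the 2012 direct-apply path, check the exact switch list with:
wdsmcast.exe /apply-customimage /?
```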
The image didn't boot afterwards, because the BCD entries and other assorted wizardry that LTIApply performs weren't done, but I could verify the file contents on the hard drive. Seems promising; I guess I'll work on roughing out a version of LTIApply that uses this method instead and see how it works in other scenarios.
Thanks for sharing.
Multicast is very cool, and MDT can leverage it quite well.
When I worked for Microsoft's own IT Department last year, we played around with several Multicast settings trying to find what worked best for us.
What we found is that there was *always* some slow client out there that made the otherwise fast Multicast transfer run slow. We tried to separate the transmissions into "Slow-Fast-Medium" but just could not get it working in an optimal mode.
Finally, we settled on "Automatically disconnect clients below this speed" with the speed set rather high (I don't recall what it was, but I think we tried 2048 KBps, 1024 KBps, and 768 KBps). Note that if an MDT client can't perform multicast, it will drop back down to a unicast file download. Typically, most clients with a Gigabit network connection would work just fine, and most clients on Fast Ethernet would work great too. If a client had problems, it would be pushed off and recover with unicast.
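That GUI setting can also be scripted; the form below is my recollection of the wdsutil transport-server options, and the server name and threshold are placeholders, so confirm the exact switch names with `wdsutil /Set-TransportServer /?` on your box:

```shell
:: Sketch of the slow-client auto-disconnect setting described above.
:: Server name and threshold (KBps) are example values, not what we used.
wdsutil /Set-TransportServer /Server:MDTSRV01 ^
    /SlowClientHandling:AutoDisconnect /AutoDisconnectThreshold:1024
```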
Keith Garner - keithga.wordpress.com
- Proposed as answer by Keith Garner, MVP, Moderator Tuesday, September 24, 2013 12:20 AM