Server 2016 RDS connections maxing out and crashing dwm.exe?

  • Question

  • We attempted a stress load on our server and found users unable to join. The RDS session would black-screen and drop, which happened after 8 users had joined. Performance also dropped as each connection stacked up, and afterwards we saw the Event Viewer had 450+ critical Error 1000 entries showing dwm.exe crashing in dwmcore.dll.

    HP Dl380 Gen9

    2x Xeon E5-2697 v3

    192GB Ram

    Nvidia Quadro M6000 24GB (Current Driver) RemoteFX enabled

    Windows Server 2016

    Bare-Metal RD Terminal Sessions

    We currently have a similar environment on 2012 R2 without a problem.

    Wednesday, November 30, 2016 6:12 PM

Answers

  • 385.08 Quadro for winserv2016 has been released - installing now

    Victory! We have no crashes with 11 users so far.

    nVidia M6000 24GB


    Friday, July 28, 2017 7:34 PM

All replies

  • To add - we disabled all RemoteFX and were still capped at 8 users. Set the Group Policy connection limit to a max of 9999 users and still capped at 8.

    Re-imaging the server and will run some new tests. NVIDIA says it's not an issue with their driver, and it looks like an OS issue.
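
    For anyone wanting to script that connection-limit change: the "Limit number of connections" RDSH policy is backed by a registry value, so a rough sketch (assuming the standard Terminal Services policy key; verify against gpedit before relying on it) looks like this:

    # Sketch: set the RDSH "Limit number of connections" policy value to 9999
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name 'MaxInstanceCount' -Value 9999 -Type DWord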

    Wednesday, November 30, 2016 10:20 PM
  • Well, after a complete wipe and re-image (except for the NVIDIA driver) we were able to get more than 8 logged in. However, after reinstalling the driver we saw the same issues again.

    We will follow up with nVidia.

    Thursday, December 1, 2016 7:21 PM
  • Hi,

    Sorry for the delayed reply.

    Based on my research, the problem may be caused by a compatibility issue between Windows Server 2016 and NVIDIA. NVIDIA may need to publish a Windows Server 2016 compatible driver.

    https://social.technet.microsoft.com/Forums/office/en-US/1f2c1e8d-9ed7-43fa-b3f6-70b49b819262/application-error-event-id-1000-faulting-application-name-dwmexe?forum=win10itprogeneral

    Best Regards,

    Jay


    Thursday, December 8, 2016 3:07 PM
  • Thanks for looking. We have been working with nVidia, but from everything they are seeing, their drivers are not crashing.

    We swapped the M6000 out for the K4200 and were able to log in 8 users.

    We disabled and removed the driver for the on-board Matrox graphics and were able to get 9 users on, but on the 10th user the desktop starts to build, then crashes.

    Same dwm.exe Error 1000.

    Module is dwmcore.dll (Jay your link pointed to an nvidia module crashing)

    We are going to try the K4200 and see if we cap at any number of users.

    There is something there regarding the graphics card, but the drivers are not crashing the Desktop Window Manager.

    "A user-mode composition engine (dwmcore.dll) that is hosted in the Desktop Window Manager (DWM) process (dwm.exe), and performs the actual desktop composition."

    Friday, December 9, 2016 2:16 PM
  • New development - the K4200 did get maxed out, at 13 connections. We never tried that many before.

    Testing now with just the on-board Matrox; this looks like an OS issue.

    We were able to get 22 users with no loss of performance with only the onboard graphics installed. 

    One thought is that the onboard graphics driver is WDDM 2.0, not WDDM 2.1, which all of NVIDIA's are.
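
    One quick way to check an adapter's WDDM version (a sketch, for anyone wanting to verify): dxdiag's text dump reports each display adapter's driver model ("Driver Model: WDDM 2.0" / "WDDM 2.1").

    # Sketch: dump dxdiag to a text file and pull the WDDM driver-model lines
    Start-Process dxdiag -ArgumentList "/t $env:TEMP\dxdiag.txt" -Wait
    Select-String -Path "$env:TEMP\dxdiag.txt" -Pattern 'Driver Model'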

    Friday, December 9, 2016 7:15 PM
  • @Jay Gu - NVIDIA has since released Windows Server 2016 drivers...those are the ones we used.

    Tuesday, December 13, 2016 12:16 AM
  • Hi,

    Try the workaround described in the thread below.

    Desktop Window Manager has stopped working after 10/12/10 MS updates loaded

    https://answers.microsoft.com/en-us/windows/forum/windows_7-update/desktop-window-manager-has-stopped-working-after/0951b902-3574-4db4-90a1-6af231c04d7d

    Note: please make a backup before performing the workaround.

    Best Regards,

    Jay


    Thursday, December 15, 2016 2:56 AM
  • @Jay Gu - the issue is with dwmcore.dll, not an nVidia DLL. If I disable dwmcore.dll, I'm pretty sure it will kill my machine.

    Friday, December 16, 2016 8:18 PM
  • Hi,

    Would you post the detailed information for event 1000? I want to check the faulting module.
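
    A quick way to pull those entries with PowerShell (a sketch; the log and provider names match the event posted further down this thread):

    # Sketch: list recent Application Error (ID 1000) events involving dwm.exe
    Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Application Error'; Id = 1000 } -MaxEvents 50 |
        Where-Object { $_.Message -match 'dwm\.exe' } |
        Format-List TimeCreated, Message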

    Best Regards,

    Jay


    Monday, December 26, 2016 10:57 AM
  • This is reproducible...the same error each time it hits the wall. Oddly, the more resources the card has, the fewer users it allows??

    Log Name:      Application
    Source:        Application Error
    Date:          1/4/2017 1:26:14 PM
    Event ID:      1000
    Task Category: (100)
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      <<REMOVED>>
    Description:
    Faulting application name: dwm.exe, version: 10.0.14393.0, time stamp: 0x578999ab
    Faulting module name: dwmcore.dll, version: 10.0.14393.479, time stamp: 0x5825897b
    Exception code: 0xc00001ad
    Fault offset: 0x00000000000c3bc0
    Faulting process id: 0x4350
    Faulting application start time: 0x01d266c06a26f3b8
    Faulting application path: C:\Windows\system32\dwm.exe
    Faulting module path: C:\Windows\system32\dwmcore.dll
    Report Id: bfd5e09c-251b-42be-99df-24eaa3241e51
    Faulting package full name:
    Faulting package-relative application ID:
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Application Error" />
        <EventID Qualifiers="0">1000</EventID>
        <Level>2</Level>
        <Task>100</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2017-01-04T19:26:14.886123300Z" />
        <EventRecordID>4917</EventRecordID>
        <Channel>Application</Channel>
        <Computer><<REMOVED>></Computer>
        <Security />
      </System>
      <EventData>
        <Data>dwm.exe</Data>
        <Data>10.0.14393.0</Data>
        <Data>578999ab</Data>
        <Data>dwmcore.dll</Data>
        <Data>10.0.14393.479</Data>
        <Data>5825897b</Data>
        <Data>c00001ad</Data>
        <Data>00000000000c3bc0</Data>
        <Data>4350</Data>
        <Data>01d266c06a26f3b8</Data>
        <Data>C:\Windows\system32\dwm.exe</Data>
        <Data>C:\Windows\system32\dwmcore.dll</Data>
        <Data>bfd5e09c-251b-42be-99df-24eaa3241e51</Data>
        <Data>
        </Data>
        <Data>
        </Data>
      </EventData>
    </Event>

    Thursday, January 5, 2017 4:05 PM
  • We have since opened a support case with Microsoft; needless to say, it's slow going.

    They have now seen exactly what we have seen and think there is an issue in the kernel. Will update when I know more.

    Thursday, January 5, 2017 4:09 PM
  • We have the same errors here on multiple 2016 deployments with the nvidia K2200; after 10 sessions, the new sessions crash.

    They show a grey screen and then log off automatically, and we see the same events in the Event Viewer.

    Did Microsoft give you an update on this ticket? We really love this feature; it is the reason we embraced RDS 2016 very quickly, and now we have to disappoint our customers.

    Please fix this

    Friday, January 27, 2017 10:42 AM
  • Sorry feemanarend - Microsoft is now aware of this issue, but we have not been given any timetable... hope you have some 2012 R2 lying around.
    Wednesday, February 22, 2017 8:06 PM
  • We have some 2012 R2, but it can't do what 2016 can with GPU acceleration. We really like this option and promised it to our customers, but they still can't use it.

    Wednesday, March 15, 2017 2:44 PM
  • Does anyone have any more information about this issue? I am experiencing this problem and it is obviously a known issue. Is there a solution?
    Monday, March 20, 2017 8:47 AM
  • Does anyone have any more information about this issue? I am experiencing this problem and it is obviously a known issue. Is there a solution?

    To add, our situation is as follows -

    HP Dl380 Gen9

    2x Xeon E5-2697 v3

    128GB Ram

    2 x Nvidia Quadro K4200 4GB (Current Driver) RemoteFX enabled

    Windows Server 2016

    Bare-Metal RD Terminal Sessions

    We have had a 2012R2 solution on the same hardware without a problem


    Monday, March 20, 2017 8:58 AM
  • Well, the last time a moderator looked at this was December.
    Monday, April 3, 2017 2:01 PM
  • We have the same problem... Any news?
    Wednesday, April 19, 2017 9:03 AM
  • We have the exact same problem, and it has been driving me insane. We're using zero clients on a MultiPoint server, so I really thought it was the zero client driver; turns out it's not.

    DL380 G9, SSD, 64GB w/ Quadro K2200 latest NVIDIA 2016 drivers. 

    We max out at 13 RDP connections; anything more than that results in the dwm.exe crash everyone else is seeing.

    We've had an endless amount of problems with 2016 thus far. Feature-wise it's one of the most impressive releases to date; stability-wise, not so much.

    Monday, May 1, 2017 4:37 PM
  • Fix:

    We have been able to confirm that the 384 or later 2016 driver (released 28/7/2017) fixes the issue. We have been able to get 20 users on.

    http://www.nvidia.com/download/driverResults.aspx/121384/en-us

    Previous notes (added fix information above so users don't need to scroll to the bottom to find the fix):

    Same issue with Server 2016 in RDS with Quadro cards: dwm.exe crashing with more than 7 users on.

    Support case raised with Microsoft, sitting with the engineers atm.

    Tested in a DL380 with 768GB RAM:

    • Nvidia Quadro M6000 12GB (has the issue)
    • Nvidia Quadro M6000 24GB (has the issue)
    • Nvidia Quadro P5000 (has the issue)
    • Nvidia Quadro M2000 (has the issue)

    Also tested this on desktop hardware and got the exact same result with 2016 in RDS.

    As a test, replaced the card with an NVidia GTX 1070 and had 20 users on (the GTX drivers do not support OpenGL etc. under RDS).

    Funny thing is, I swapped the card back to the M2000 without installing the Quadro drivers (using the leftover 1070 drivers, which seem to partly support the M2000 - enough for Device Manager to be happy) and got 14 users on, no problem (but no OpenGL). Installed the Quadro drivers and the issue came back immediately.

    I have narrowed it down to the Quadro driver that supports OpenGL under RDS.

    Any card that uses this driver or previous versions will likely have the issue:

    http://www.nvidia.com/download/driverResults.aspx/116707/en-us

    Quadro Series:

    Quadro GP100, Quadro P6000, Quadro P5000, Quadro P4000, Quadro P2000, Quadro P1000, Quadro P600, Quadro P400, Quadro M6000 24GB, Quadro M6000, Quadro M5000, Quadro M4000, Quadro M2000, Quadro K6000, Quadro K5200, Quadro K5000, Quadro K4000, Quadro K4200, Quadro K2200, Quadro K2000, Quadro K2000D, Quadro K1200, Quadro K620, Quadro K600, Quadro K420, Quadro 6000, Quadro 5000, Quadro 4000, Quadro 2000, Quadro 2000D, Quadro 600, Quadro 410

    Quadro Blade/Embedded Series :

    Quadro M5000 SE, Quadro M3000 SE, Quadro K3100M, Quadro 500M, Quadro 1000M, Quadro 3000M, Quadro 4000M

    Quadro NVS Series:

    NVS 510, NVS 315, NVS 310

    NVS Series:

    NVS 810, NVS 510, NVS 315, NVS 310

    I have raised a support case with Microsoft.

    We ran a debug and found there seems to be an allocation of memory for the graphics component of RDS. Each user uses a similarly sized chunk of it. It seems to be around 6-7 users at a screen resolution of 1080p; then dwm.exe crashes whenever a new user attempts to start a session once the allocation is used up. It has nothing to do with the total amount of physical RAM the box has, or video RAM. I would say there is a hard-coded allocation of video RAM for the total pool of RDS sessions; once it is used, DWM starts crashing.
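
    A back-of-envelope illustration of that theory (the per-session surface count and pool size below are assumptions for illustration, not measured values):

    # Sketch: a 32-bit 1080p surface is ~7.9 MB; if DWM kept, say, 4 surfaces
    # per session against a hypothetical 256 MB pool, the pool would exhaust
    # at roughly 8 sessions - close to the observed 6-7 user wall
    $surfaceMB = (1920 * 1080 * 4) / 1MB     # ~7.9 MB per full-screen surface
    $perSession = $surfaceMB * 4             # assumed surfaces kept per session
    [math]::Floor(256 / $perSession)         # -> 8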

    I also played around with color depth in the registry, max resolution, and a few other settings that would likely reduce the overall memory usage per user. This did result in more users - managed to get 11 on - but the user experience was not so good. This also confirms there must be a hard-coded allocation: reducing each user's footprint resulted in more users.
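
    For anyone wanting to try the same color-depth change, the "Limit maximum color depth" RDS policy maps to a registry value - a sketch, assuming this is the setting adjusted above (3 = 16-bit, 4 = 24-bit, 5 = 32-bit):

    # Sketch: cap RDP sessions at 16-bit color to shrink the per-user footprint
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name 'ColorDepth' -Value 3 -Type DWord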

    Microsoft are purchasing one of these cards so they can replicate the issue, will keep you posted.

    I have just rolled our boxes back to 2012 R2 (which does not have the issue) until we get a fix.

    Thursday, May 4, 2017 6:25 AM
  • Appreciate you leading the charge and reporting back with the detailed updates.

    Saves us all a lot of time and frustration going through the same song and dance.

    Thursday, May 4, 2017 5:02 PM
  • Does anyone have a FirePro to test whether it's an nVidia interaction?

    Monday, May 8, 2017 1:39 PM
  • Can anyone with an open MS case share the case #? I'd like to reference it when I open mine.
    Thursday, May 11, 2017 8:36 PM
  • Just an FYI, but the latest NVIDIA drivers (377.35) released May 9th don't do anything to help the problem. 
    Wednesday, May 17, 2017 3:49 PM
  • Plus one with the same issue here.  Our setup is as follows:

    Dell R730 Rack Servers - 2x Intel 12-Core CPU - 128GB RAM

    2x NVIDIA Quadro M4000

    Server 2016 (Hyper-V Host)

    DDA GPU assigned to guest Server 2016 VMs (RDSH)

    Faulting application name: dwm.exe, version: 10.0.14393.0

    Drivers crash and are shut down at 10+ user sessions - once the GPU is disabled in Device Manager, more users can begin new sessions (20+).
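
    The Device Manager workaround above can also be scripted (a sketch; the PnpDevice cmdlets ship with Server 2016, and the '*NVIDIA*' name filter is an assumption about how the adapter is listed):

    # Sketch: disable the NVIDIA display adapter so new sessions fall back to software
    Get-PnpDevice -Class Display |
        Where-Object { $_.FriendlyName -like '*NVIDIA*' } |
        Disable-PnpDevice -Confirm:$false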

    Please provide an update as soon as possible.  It's been over 6 months since this issue was originally reported.

    Tuesday, June 6, 2017 9:15 PM
  • Have you received any sort of update from MS?

    It's been another month; hopefully they've made some progress.

    Monday, June 12, 2017 6:20 PM
  • Quadro Desktop Driver Release 375

    Version: R375 U8 (377.48) WHQL
    Release Date: 2017.6.14
    Operating System: Windows Server 2016
    Language: English (US)
    File Size: 263.98 MB

    Do the new drivers solve the problem?

    Sunday, June 25, 2017 2:45 PM
  • Hi, no, the new 375 drivers do not resolve the issue. This is sitting with NVIDIA to work with the MS tech atm. An NVIDIA support case is being raised by NVIDIA tech support.
    Monday, June 26, 2017 6:19 AM
  • I opened a ticket with them initially, back before I started this, and the nVidia tech was aggressively adamant that it wasn't an nVidia issue... I reopened my ticket but haven't heard back! ha!
    Tuesday, June 27, 2017 8:06 PM
  • Update: things are moving slowly. I ended up going to NVIDIA support directly. For your reference, our case number is NVIDIA # 170706-000673.

    Microsoft have been really patient waiting for NVIDIA support; now that we have a formal case number, things might start moving.

    Friday, July 7, 2017 12:02 AM
  • I have heard back from NVIDIA; it looks like it will be fixed in the driver due for release at the end of July.
    Wednesday, July 12, 2017 12:33 AM
  • I am facing this problem as well. They told me the Quadro drivers don't work in an RDS 2016 environment.

    I had to buy a Nvidia Tesla card instead.

    Hopefully NVIDIA can resolve it with the driver update.

    I'd like to have confirmation first, before I implement the drivers on a production server.

    Tuesday, July 18, 2017 2:45 PM
  • Downloading new drivers now

    Version: R375 U9 (377.55) 
    Release Date: 2017.7.24
    Operating System: Windows Server 2016
    Language: English (US)
    File Size: 263.86 MB


    Monday, July 24, 2017 6:33 PM
  • Downloading new drivers now

    Version: R375 U9 (377.55) 
    Release Date: 2017.7.24
    Operating System: Windows Server 2016
    Language: English (US)
    File Size: 263.86 MB


    It's not fixed, I just updated. This is getting pretty ridiculous. 

    I think at this point I'm going to try to get either my vendor to take these paperweights back or NVIDIA to foot the bill to replace them with their competitor's cards, and move to ATi.

    Monday, July 24, 2017 11:19 PM
  • Interesting - they emailed me and said the driver version would be "384*", scheduled for the 24th of July. Well, I am about to test it; I'll be back in half an hour with an update.

    Tuesday, July 25, 2017 5:22 AM
  • Tested the 375 Server 2016 driver version and the problem is not solved: 7 users and dwm died.

    I went looking for a 384 Quadro version and found a Windows 10 "beta" version 384:

    http://www.nvidia.com/download/driverResults.aspx/120302/en-us

    Funnily enough, it installed on 2016 (ticked "perform clean install") and seems to be working fine. Just had 11 users on; verified OpenGL works.

    So NVIDIA must be about to release an official 2016 384 version. In no way am I recommending using a beta Win 10 driver in production, but give it a go in dev and report back.


    Tuesday, July 25, 2017 6:51 AM
  • We have been able to confirm that the 384 Quadro or later 2016 driver fixes the issue. We have been able to get 20 users on.

    http://www.nvidia.com/download/driverResults.aspx/121384/en-us

    OpenGL, DirectX 9 and 11, CUDA and OpenCL all seem to work with this driver with 2016 in Terminal Services.

    Jayson. C

    If you can get "OpenGL v2" or above working on another brand-name card with 2012 or 2016 in Terminal Services, I'd like to know what model of card. I looked for months, and these Quadro cards are the only cards that fill the 4 GPU modes of OpenGL, DirectX 9 and 11, CUDA and OpenCL. Most other cards will only do DirectX and "OpenGL 1"; that just about leaves 90% of all GPU-based software with no GPU support.

    Random info:

    One other series of cards that allows for OpenGL/CL, DirectX and CUDA in Terminal Services is the GRID cards (http://www.nvidia.com/object/grid-technology.html), but you need to run a hypervisor with support for GRID for GPU passthrough. That tech is all about chopping a card up and providing a basic GPU to a bunch of hosts.

    We are running an HPC (high-performance computing) bunch of hosts with 1 x M6000 24GB (OpenGL/OpenCL/DirectX) and 1 x K80 (CUDA); the Quadro driver supports both cards with 2012 R2 in Terminal Services.

    Tuesday, July 25, 2017 7:41 AM
  • Failed with 374 as well... +1 about not even testing the Win10 drivers... must wait for non-beta server drivers. Ugh.
    Tuesday, July 25, 2017 6:42 PM
  • Just got a response from NVIDIA; they have had delays with the release of the 2016 384 version. They suggest it will be released soon, in ~1 week.
    Wednesday, July 26, 2017 12:12 AM
  • What's one more week? It's already been 8 months.
    Wednesday, July 26, 2017 2:38 PM
  • Hi.

    Started a test running the v384 Windows 10 beta driver from the link above on Server 2016, running on Hyper-V 2016 with DDA activated, using a Quadro M4000.
    I do not know if it is too soon to say anything about stability or the maximum number of users. However, right now 25 users are logged on and running graphics perfectly. Looks like a win to me.

    Anyone else tested and have some kind of feedback?

    Kind Regards

    Thursday, July 27, 2017 10:43 AM
  • 385.08 Quadro for winserv2016 has been released - installing now

    Victory! We have no crashes with 11 users so far.

    nVidia M6000 24GB


    Friday, July 28, 2017 7:34 PM
  • FIX: NVIDIA Quadro driver version R383 U1 (385.08) for Server 2016, released 28/7/2017, or a later driver corrects this maximum-number-of-users issue (confirmed 15 users). It fixes the issue of dwm.exe crashing with more than 6-8 users when using RDS on 2016.

    http://www.nvidia.com/download/driverResults.aspx/121384/en-us

    Tested both the NVIDIA Quadro M6000 24GB and the NVIDIA M6000 12GB.

    Thanks DesignatedDecoy for starting this thread.  Great to see we have a fix. 

    I would also like to acknowledge Microsoft support for pinpointing the issue to a graphics driver DLL. Thanks also to NVIDIA for coming up with a fix.

    Wednesday, August 2, 2017 1:35 AM
  • Sorry, friends. We are not home free yet.
    When the user load reaches around 30 on our setup, the same error seems to occur. We tried both the 385.08 and the new 385.41 drivers without luck.

    MS and NVIDIA need to get back on their horses.

    Kind regards

    Wednesday, September 13, 2017 9:09 AM
  • Hi Lizard, what type of card are you running, and more particularly, how much video RAM does it have onboard? You may be running out of video RAM.

    https://developer.nvidia.com/content/are-you-running-out-video-memory-detecting-video-memory-overcommitment-using-gpuview
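
    A simpler spot-check than GPUView (a sketch; nvidia-smi ships with the NVIDIA driver, though the install path varies by driver version):

    # Sketch: sample GPU memory use once a minute while users log on
    while ($true) {
        & 'C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe' --query-gpu=memory.used,memory.total --format=csv
        Start-Sleep -Seconds 60
    }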

    If this does not point to your issue, I suggest raising a case with NVIDIA directly.

    We are running M6000 24GB cards and have not seen any issues since the later drivers.

    Monday, September 25, 2017 3:45 AM
  • Hi All,

    I'm now having the same type of issue: the system will crash when the 27th user logs on.

    Using a P4000 card with 390.77-quadro-winserv-2016-64bit-international-whql.

    If I uninstall the P4000, I can get 120 users on without issue.

    Monday, February 12, 2018 11:50 AM
  • Has anyone had any success with resolving this problem?

    We had restricted the per-server RDS user limit to 26, which seemed to alleviate the issue for a month or two.

    But now it has started up again, so I have adjusted it to 25 users per server, which isn't really enough for the active users in our RDS environment.


    Tuesday, August 7, 2018 12:34 AM
  • Confirmed the issue still exists with user count around 20-26.
    Wednesday, September 5, 2018 2:56 AM
  • I have just had to build another remote desktop session host to try and keep my user numbers below 20; it has now been stable for about 1 week.

    I have also configured a nightly scheduled-task restart of each remote desktop session host, which will hopefully help.
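
    In case it helps anyone else, the nightly restart can be registered like this (a sketch; the task name and 3 AM time are placeholders):

    # Sketch: nightly restart of the session host via a scheduled task
    $action    = New-ScheduledTaskAction -Execute 'shutdown.exe' -Argument '/r /t 60 /c "Nightly RDS host restart"'
    $trigger   = New-ScheduledTaskTrigger -Daily -At 3am
    $principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -LogonType ServiceAccount -RunLevel Highest
    Register-ScheduledTask -TaskName 'NightlyRDSRestart' -Action $action -Trigger $trigger -Principal $principal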

    Does anyone know if it's a GPU memory issue? And does having a card with more memory, so each VM can have more memory, mean you can have more users before it crashes?

    We are running an old GRID K1, so each VM can only have 4GB of GPU memory. We are running the latest 370.28 driver, released in August 2018.

    Wednesday, September 5, 2018 4:04 AM
  • Hi Stoal76, my understanding is that the problem is an NVIDIA hard-coded allocation of memory for all user sessions. Each user (depending on resolution) uses a chunk of this allocation (whether disconnected or connected). When this allocation is exhausted, dwm.exe crashes on subsequent sessions.

    I am keeping our hosts alive by daily kicking all disconnected users who have no apps running.
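
    The kick itself is scriptable (a sketch; it logs off every disconnected session and does not include the "no apps running" check):

    # Sketch: log off all disconnected sessions on this host.
    # quser's SESSIONNAME column is blank for disconnected sessions, so match the
    # session ID immediately before the "Disc" state instead of splitting columns.
    quser 2>$null | ForEach-Object {
        if ($_ -match '\s(\d+)\s+Disc\s') { logoff $matches[1] }
    }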

    I have also lowered the maximum number of users allowed to 19 per host so the users spread out more.

    I have also created an "event"-based email task to email me if dwm.exe crashes.

    If you want to set this reporting up and you have an SMTP server, find the event in the event logs, right-click it, and create a task based on the event. Set up a PowerShell or similar script to email you.

    Here is an example of one:

    $PSEmailServer = "smtp.smtpmailserver"   # SMTP relay; Send-MailMessage reads this preference variable
    $ComputernameMail = $env:computername
    $SubjectMail = "Quadro gpu issue detected in event logs: " + $ComputernameMail
    # Note: -From must look like a valid address; the one below is a placeholder
    Send-MailMessage -To "recipient@mail.address.com" -From "rds-alerts@mail.address.com" -Subject $SubjectMail

    Just letting you know the previous case "above" with NVIDIA has been re-opened. A new driver for Quadro, the 410 series, is due out this month. The feedback I have so far is to test that driver; if the problem still exists, they will work on fixing it in a subsequent 410+ version.

    I'll post back when I have more info / fix.

    Tuesday, September 11, 2018 11:52 PM
  • Same problem here!

    Running an NVIDIA Tesla M60 on AWS, using DDA, but not using H.264/AVC 444 for RDP sessions.

    18 users max. 128GB of ram.

    When logging in with the 19th user, we see an nwiz.exe application crash in the session before we lose the RDP window.

    Event Log:

    Faulting application name: dwm.exe, version: 10.0.14393.0, time stamp: 0x578999ab
    Faulting module name: dwmcore.dll, version: 10.0.14393.1715, time stamp: 0x59b0d15f

    Wednesday, September 12, 2018 3:48 PM
  • Just tested the 411.63 Quadro driver (M6000) and it has the same issue (18 users) - this saves you testing it. NVIDIA have been made aware the problem exists in the new version also.
    Thursday, September 27, 2018 12:10 AM
  • Double your Quadro cards = almost double the sessions.

    I had an idea of testing multiple cards to try and get more sessions, and it worked. The Quadro driver will support mixed Quadro cards on the RDS host; I added a low-end P1000 card to a host that had an existing 12GB M6000 and was able to get 29 users on (15 with only the M6000).

    I have heard nothing from NVIDIA in terms of them progressing to a fix for the driver; they do not seem interested in fixing it, but they have replicated the issue and are aware of it.

    FYI, I also tested the latest 412 driver and the issue is not solved.

    The multiple-cards solution is a fairly cheap workaround, in the scheme of things, to double your user density.

    Hope it helps; write back if it works for you also.

    I also came up with a PowerShell script to do the testing.

    Here is a sample of it; just duplicate it out for as many user accounts as you want to test as, save it as a .ps1, and launch it. It will pause between each connection. cmdkey is used to prepare the credentials; these are then passed automatically to the launch of mstsc, and the credentials are then reset for the next connection:

    Replace "hostname" with your target host name.

    Replace /user:"domain\user1" /pass:"testpassword" with credentials that have RDS access to your host.

    # Store credentials for the target host so mstsc logs straight in
    cmdkey /generic:"hostname" /user:"domain\user1" /pass:"testpassword"
    mstsc /v:"hostname"
    read-host "Press ENTER to continue..."
    # Clear the stored credentials before the next test user
    cmdkey /delete:"hostname"
    read-host "Press ENTER to continue..."

    cmdkey /generic:"hostname" /user:"domain\user2" /pass:"testpassword"
    mstsc /v:"hostname"
    read-host "Press ENTER to continue..."
    cmdkey /delete:"hostname"
    read-host "Press ENTER to continue..."

    Cheers

    Friday, February 1, 2019 1:26 AM
  • Has anyone heard anything on this issue recently? We are still getting it from time to time.

    Our fix was similar, in that we have a K1, which is 4 GPUs, and we were only using 2, so we built more VMs and spread the user load; but we are now hitting the 18-20 users-per-GPU number again and it's crashing the driver.

    Thursday, May 30, 2019 4:42 AM
  • Hi everyone, we now have a fix, but it requires 2 things (3 if you want more than 75 users per physical host).

    240-odd users on one physical host (5 x NVIDIA RTX 4000 installed in one HP DL380 Gen10):

    • Windows Server 2019 1809 or later
    • NVIDIA RTX Quadro cards (tested Quadro RTX 4000)
    • Hypervisor, with 1 RTX-generation card attached to each VM (VM running Windows Server 2019 1809 or later)

    The older NVIDIA M6000-generation cards use a different GPU stack within the driver and will not work with more than 15-odd users (info to save you the time of testing). They must be RTX cards for 40-plus users per GPU.

    Tested and recommended:

    I was able to get 240 users on one HP DL380 running XenServer (Citrix's hypervisor) with 5 VMs, with 1 x RTX 4000 GPU attached per VM (45 users per VM).

    • 1 x RTX 4000 per VM, attached via the XenServer hypervisor
    • Each VM running Windows Server 2019 (Remote Desktop Session Host "terminal server")

    [XenServer configuration screenshot]

    On bare metal, due to the 2 CPU sockets and what looks to be a new limit with multiple GPUs, there was a limit of 75 users total for the physical host, even with 5 RTX Quadro cards. It seems in this scenario there is a limit on the number of GPUs that can be used for terminal server sessions; the answer was to split up the hardware with a hypervisor.

    FIX:

    • So basically, use XenServer (or vSphere, untested) with 1 GPU directly attached to each VM.
    • You must also run Windows Server 2019 1809 or later on the VMs.

    Hyper-V info:

    I tested Hyper-V, attaching 1 GPU per VM, and found that after a server reboot, when all the VMs (5) were started, the host blue-screened. It looks related to the way in Hyper-V you have to attach the GPU via a direct hardware address. Case raised with MS (unresolved after 4 months). Gave up and have now been using XenServer for 6 months with no issues.

    The Hyper-V issue is, from what I can see, associated with the hard-wired addresses on a dual-CPU-socket physical host; on a single-CPU model of hardware it would probably be fine. The addresses must change on server reboot, and when the VM makes the hardware call to the GPU, the physical host blue screens.



    Thursday, June 18, 2020 7:04 AM