Misc WSUS errors

  • Question

  • So, WSUS is just not happy for this one client. I'm seeing tons of errors all over the place, most of which are intermittent, but I can't figure out what's going on.

    WSYNCMGR.log

    Sync failed: WSUS server not configured. Please refer to WCM.log for configuration error details.. Source: CWSyncMgr::DoSync	SMS_WSUS_SYNC_MANAGER	12/15/2017 12:11:23 PM	5748 (0x1674)
    STATMSG: ID=6703 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_WSUS_SYNC_MANAGER" SYS=servername.domain.COM SITE=site PID=2796 TID=5748 GMTDATE=Fri Dec 15 17:11:23.008 2017 ISTR0="CWSyncMgr::DoSync" ISTR1="WSUS server not configured. Please refer to WCM.log for configuration error details." ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0	SMS_WSUS_SYNC_MANAGER	12/15/2017 12:11:23 PM	5748 (0x1674)
    Sync failed. Will retry in 60 minutes	SMS_WSUS_SYNC_MANAGER	12/15/2017 12:11:23 PM	5748 (0x1674)
    

    and

    Sync failed: The operation has timed out. Source: Microsoft.UpdateServices.Internal.ClassFactory.CallStaticMethod	SMS_WSUS_SYNC_MANAGER	12/15/2017 10:42:02 AM	5748 (0x1674)
    STATMSG: ID=6703 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_WSUS_SYNC_MANAGER" SYS=servername.domain.COM SITE=site PID=2796 TID=5748 GMTDATE=Fri Dec 15 15:42:02.031 2017 ISTR0="Microsoft.UpdateServices.Internal.ClassFactory.CallStaticMethod" ISTR1="The operation has timed out" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0	SMS_WSUS_SYNC_MANAGER	12/15/2017 10:42:02 AM	5748 (0x1674)
    Sync failed. Will retry in 60 minutes	SMS_WSUS_SYNC_MANAGER	12/15/2017 10:42:02 AM	5748 (0x1674)
    


    WCM.log

    Attempting connection to local WSUS server	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 11:33:41 AM	5728 (0x1660)
    System.Net.WebException: The request failed with HTTP status 401: Unauthorized.~~   at Microsoft.UpdateServices.Administration.AdminProxy.CreateUpdateServer(Object[] args)~~   at Microsoft.UpdateServices.Administration.AdminProxy.GetUpdateServer()~~   at Microsoft.SystemsManagementServer.WSUS.WSUSServer.ConnectToWSUSServer(String ServerName, Boolean UseSSL, Int32 PortNumber)	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 11:34:06 AM	5728 (0x1660)
    Remote configuration failed on WSUS Server.	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 11:34:06 AM	5728 (0x1660)
    STATMSG: ID=6600 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_WSUS_CONFIGURATION_MANAGER" SYS=servername.domain.COM SITE=site PID=2796 TID=5728 GMTDATE=Fri Dec 15 16:34:06.300 2017 ISTR0="NRPSCCM01.nrp2003.com" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 11:34:06 AM	5728 (0x1660)
    Setting new configuration state to 3 (WSUS_CONFIG_FAILED)	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 11:34:06 AM	5728 (0x1660)
    

    but then

    Attempting connection to local WSUS server	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 12:28:04 PM	5728 (0x1660)
    Successfully connected to local WSUS server	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 12:28:12 PM	5728 (0x1660)
    Verify Upstream Server settings on the Active WSUS Server	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 12:28:12 PM	5728 (0x1660)
    No changes - WSUS Server settings are correctly configured and Upstream Server is set to Microsoft Update	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 12:28:22 PM	5728 (0x1660)
    WSUS Server configuration has been updated. Updating Group Info.	SMS_WSUS_CONFIGURATION_MANAGER	12/15/2017 12:28:26 PM	5728 (0x1660)
    

    WSUSCtrl.log

    Attempting connection to local WSUS server	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:04:23 AM	5740 (0x166C)
    System.Net.WebException: The operation has timed out~~   at Microsoft.UpdateServices.Administration.AdminProxy.CreateUpdateServer(Object[] args)~~   at Microsoft.UpdateServices.Administration.AdminProxy.GetUpdateServer()~~   at Microsoft.SystemsManagementServer.WSUS.WSUSServer.ConnectToWSUSServer(String ServerName, Boolean UseSSL, Int32 PortNumber)	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:23 AM	5740 (0x166C)
    STATMSG: ID=7000 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_WSUS_CONTROL_MANAGER" SYS=servername.domain.COM SITE=site PID=2796 TID=5740 GMTDATE=Fri Dec 15 16:07:23.032 2017 ISTR0="servername.domain.COM" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:23 AM	5740 (0x166C)
    Failed to set WSUS Local Configuration. Will retry configuration in 1 minutes	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:23 AM	5740 (0x166C)
    

    and then

    Attempting connection to local WSUS server	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:23 AM	5740 (0x166C)
    Successfully connected to local WSUS server	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:24 AM	5740 (0x166C)
    There are no unhealthy WSUS Server components on WSUS Server servername.domain.COM	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:29 AM	5740 (0x166C)
    Successfully checked database connection on WSUS server servername.domain.COM	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:31 AM	5740 (0x166C)
    Waiting for changes for 1 minutes	SMS_WSUS_CONTROL_MANAGER	12/15/2017 11:07:31 AM	5740 (0x166C)
    

    Other than those issues, I'm seeing tons of timeout errors all over. 

    App Event Log, event ID 1309:

    Event code: 3001 
    Event message: The request has been aborted. 
    Event time: 12/15/2017 11:38:55 AM 
    Event time (UTC): 12/15/2017 4:38:55 PM 
    Event ID: 17a2f2b866a8490296c3a7f27564d2e6 
    Event sequence: 4 
    Event occurrence: 1 
    Event detail code: 0 
     
    Application information: 
        Application domain: /LM/W3SVC/1226346637/ROOT/ServerSyncWebService-5-131578293811858230 
        Trust level: Full 
        Application Virtual Path: /ServerSyncWebService 
        Application Path: C:\Program Files\Update Services\WebServices\ServerSyncWebService\ 
        Machine name: servername
     
    Process information: 
        Process ID: 8492 
        Process name: w3wp.exe 
        Account name: NT AUTHORITY\NETWORK SERVICE 
     
    Exception information: 
        Exception type: HttpException 
        Exception message: Request timed out.

    Event ID 6703:

    On 12/15/2017 11:15:05 AM, component SMS_WSUS_SYNC_MANAGER on computer servername.domain.COM reported:   WSUS Synchronization failed.
     Message: WSUS server not configured. Please refer to WCM.log for configuration error details..
     Source: CWSyncMgr::DoSync.
      The operating system reported error 2147500037: Unspecified error
    When I launch the WSUS console, it takes a while (several minutes) to start, and then almost always times out. I've also tried using AdamJ's cleanup script here, which also times out.

    Last night I deleted the SUSDB and content folders and went through the process of recreating everything, using the excellent instructions here. After doing that, I let it sit overnight, and while it did try to run through a WSYNC and made it to about 29%, it eventually failed due to too many timeouts. Now today it won't even get that far; it fails with the "Sync failed: WSUS server not configured" error I showed above.

    Also, while I don't have a link to any of the pages, I followed some other instructions on tweaking some of the WSUSPool app pool settings, namely the Queue Length (25000), the Private Memory Limit (0), and several others.
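
    For reference, the two WSUSPool settings named above can be applied from an elevated PowerShell prompt on the WSUS server. This is only a sketch: it assumes the default pool name "WsusPool" and the values described in this post (queue length 25000, private memory limit 0).

    ```powershell
    # Sketch: adjust the WSUS application pool in IIS (default name "WsusPool").
    # Run elevated on the WSUS server; values match those mentioned above.
    Import-Module WebAdministration

    # Raise the HTTP.sys request queue length for the pool.
    Set-ItemProperty IIS:\AppPools\WsusPool -Name queueLength -Value 25000

    # Private memory recycling limit, in KB; 0 disables the limit entirely.
    Set-ItemProperty IIS:\AppPools\WsusPool -Name recycling.periodicRestart.privateMemory -Value 0

    # Recycle the pool so the new settings take effect.
    Restart-WebAppPool WsusPool
    ```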

    Lastly, while I'm not seeing any crazy CPU usage, the IIS worker processes are using a good chunk of virtual memory.

    Also, the WSUSPool is FULL of requests. By full, I mean probably several hundred, if not more. They just seem to be getting stuck, preventing anything WSUS-related from working properly, but I don't know what could be causing that.

    Absolutely nothing has helped, and I'm at wit's end here. The weird thing is, every so often it DOES actually complete a WSYNC, but the majority of them fail. If anyone has any advice, I would love to hear it, because I have no clue where to go from here.

    Friday, December 15, 2017 6:02 PM


All replies

  • Have you 

    - Declined all superseded updates directly in WSUS?

    - Adjusted the private memory limit on the WSUS IIS App Pool?

    - Adjusted the queue length on the WSUS IIS App Pool?

    - Reindexed the DB and rebuilt its statistics?

    - Applied the latest CU to the OS hosting the WSUS instance?

    References:

    - https://blogs.technet.microsoft.com/configurationmgr/2016/01/26/the-complete-guide-to-microsoft-wsus-and-configuration-manager-sup-maintenance/

    - http://blog.ctglobalservices.com/configuration-manager-sccm/kea/house-of-cardsthe-configmgr-software-update-point-and-WSUS/

    - https://blogs.msdn.microsoft.com/the_secure_infrastructure_guy/2015/09/02/windows-server-2012-r2-wsus-issue-clients-cause-the-wsus-app-pool-to-become-unresponsive-with-http-503/

    - https://blogs.technet.microsoft.com/configurationmgr/2017/08/18/high-cpuhigh-memory-in-wsus-following-update-Tuesdays/

    - https://damgoodadmin.com/2017/11/30/software-update-maintenance-its-a-thing-that-you-should-do/


    Jason | https://home.configmgrftw.com | @jasonsandys

    Friday, December 15, 2017 7:46 PM
  • Have you 

    - Declined all superseded updates directly in WSUS?

         *Yes, but like I said it's a brand new WSUS DB. That being said, the script I mentioned I've run numerous times declines superseded/expired updates plus numerous other things.

    - Adjusted the private memory limit on the WSUS IIS App Pool?

           *Yes, I believe I mentioned that

    - Adjusted the queue length on the WSUS IIS App Pool?

             *I mentioned that as well, but yes.

    - Reindexed the DB and rebuilt its statistics?

             *It's a brand new DB, re-created last night. However, before re-creating it I did reindex it and rebuild the stats. I also shrunk it, all to no avail.

    - Applied the latest CU to the OS hosting the WSUS instance?

          *Actually, apparently not. We had been making all patches available to servers, not forced, per the client's wishes. However, I just checked and there's no update history on this server at all, so I'm installing all available updates right now.

    Also, even though it seems like it's more of a WSUS or IIS issue, and not SCCM, I should have mentioned that they're running 1706 with KB4042949.

    Edit: The server is now fully updated, but it's still performing the same. The IIS WSUSPool requests are already back in the dozens, and the server's only been back up a couple minutes.

    • Edited by Steve Freeman Friday, December 15, 2017 8:42 PM Updated
    Friday, December 15, 2017 8:12 PM
  • > "Yes, but like I said it's a brand new WSUS DB. That being said, the script I mentioned I've run numerous times declines supersded/expired updates plus numerous other things."

    You don't have to decline expired updates, and even in a brand new WSUS DB, nearly half of the updates (depending upon the products and classifications you've chosen to include) are superseded, so I'm not sure if you are referring to the same script that is typically used to do this.

    You did mention the IIS App Pool settings, yes; I was mainly just copying and pasting a canned answer of mine for anyone who ever runs across this thread.

    There have been numerous issues related to the size of the metadata required for the latest Win 10 CUs that effectively crater WSUS. This is addressed (or at least should be) in the latest server OS CUs applied to the system(s) hosting WSUS, so hopefully this is your issue.

    As a note on the private memory limit, I wouldn't set it to 0 but would instead set it to an actual value such as 8 GB, depending upon the amount of memory in the system hosting WSUS. I know you are probably doing that just for testing, but it's worth pointing out.


    Jason | https://home.configmgrftw.com | @jasonsandys

    Saturday, December 16, 2017 12:35 AM
  • Got it...I didn't realize that on the initial sync it would pull down already superseded updates. And as far as the script you're referring to, I don't know. I'm using the CleanWSUS script from AdamJ (this one). It's supposed to be pretty good, but because of the timeout issues in this environment I can't really say...but among everything else it can do, running it with -firstrun basically runs through the WSUS cleanup wizard, along with a few other things. I was also just able to manually run through the wizard, only declining superseded updates, and there were 0 to decline. I also ran exec spGetObsoleteUpdatesToCleanup, and it's listing 0 updates to cleanup as well. If there's something else you would recommend I run to cleanup anything that would have been pulled down during the initial sync, I'm all ears.

    Do you know if maybe there's a newer (several months old or so) issue with WSUS and Server 2012? Not R2. When I first noticed this issue starting to happen 2-3 months ago, I thought it was because the guy who had been maintaining this system hadn't run any WSUS cleanups for a while (there were I want to say like 12k obsolete updates that we cleaned up). But after I noticed it, cleaned up the obsolete updates, re-indexed the DB, shrunk the DB, etc etc, the problem was still there as bad as ever.

    Also, to be clear, the ONLY operations that are slow on this server are those having to do with WSUS. Everything else is nice and quick. So I would imagine it has to have something to do with WSUS and/or SUSDB specifically. Because the CM DB and the CM primary site server is also on this server, and I have zero issues with CM.
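
    As an aside for anyone following along: the obsolete-update check and DB maintenance mentioned above can be run against the Windows Internal Database from PowerShell. A sketch, assuming WSUS on Server 2012/2012 R2 with SUSDB in WID (the named-pipe path below is the WID default on those versions; earlier versions use a different instance name) and that sqlcmd is installed:

    ```powershell
    # Sketch: run basic SUSDB checks and maintenance against the Windows
    # Internal Database. The pipe path is the WID default on Server 2012/2012 R2.
    $wid = 'np:\\.\pipe\MICROSOFT##WID\tsql\query'

    # List updates the server considers obsolete (the same check as in the post above).
    sqlcmd -S $wid -E -d SUSDB -Q "EXEC spGetObsoleteUpdatesToCleanup"

    # Crude reindex plus statistics refresh. sp_MSforeachtable is an undocumented
    # but widely used helper; Microsoft's WsusDBMaintenance script does a more
    # careful, fragmentation-aware version and is preferable for regular upkeep.
    sqlcmd -S $wid -E -d SUSDB -Q "EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'"
    sqlcmd -S $wid -E -d SUSDB -Q "EXEC sp_updatestats"
    ```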

    Saturday, December 16, 2017 2:15 AM
  • > "If there's something else you would recommend I run to cleanup anything that would have been pulled down during the initial sync, I'm all ears."

    The Microsoft-supplied script is in the first link I posted above. I know of the script you are referring to but don't know exactly what it does. If it's simply kicking off the WSUS cleanup tasks, then it's not sufficient and does not actually decline superseded updates.

    WSUS has issues -- it's more or less in the ICU from a support perspective, so the fact that you are having issues is not necessarily surprising. I would definitely make sure to install the latest OS CU on the system and run the script to decline superseded updates.
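
    For readers without the linked script handy, the core operation (declining everything superseded) can be sketched with the built-in UpdateServices cmdlets on Server 2012 and later. The Microsoft-supplied maintenance script is more thorough, so treat this only as an illustration:

    ```powershell
    # Sketch: decline all superseded, not-yet-declined updates on the local WSUS
    # server using the UpdateServices module (built in on Server 2012+).
    Get-WsusUpdate -Classification All -Approval AnyExceptDeclined -Status Any |
        Where-Object { $_.Update.IsSuperseded } |
        Deny-WsusUpdate
    ```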


    Jason | https://home.configmgrftw.com | @jasonsandys

    • Marked as answer by Steve Freeman Monday, December 18, 2017 3:09 PM
    Saturday, December 16, 2017 8:05 PM
  • Well, I ran the script Saturday, and it found another 8000+ superseded updates in total. Their environment doesn't allow for any superseded updates to install, so I just declined everything. WSUS appears to be working much better now.

    There was another issue, where clients weren't getting the patch deployments or reporting back compliance information. However, I'm assuming that was related to WSUS being choked out, so the clients couldn't query WSUS. And I'm actually seeing compliance numbers start to show up in SCCM, so at least right now it looks like I was correct in that assumption.

    I'm going to keep monitoring it throughout the next few days just in case, but it definitely looks like your solution was correct. It's weird that the other scripts weren't showing any more superseded updates, but it's possible they weren't showing anything because they were actually just timing out or something.
    Monday, December 18, 2017 3:04 PM