OSD failures after upgrade to 1710 when setting up machines with language packs

  • Question

  • We see OSD task sequence failures after upgrading to 1710 (fast ring). The setup fails in full OS at different points:

    In Windows 7 either when installing the MUI for IE11 (Execution Manager handle is invalid. Trying to reconnect...) or when installing the Office 2013 language pack German (Failed to invoke Execution Manager for Package ID...). In Windows 10 it's hanging when installing Applications.

    In two cases I saw only Chinese characters when opening the smsts.log (very strange). Is anybody having the same issue?

    Thursday, November 30, 2017 11:12 PM

All replies

  • Haven't seen OSD failures after upgrading to CM 1710. You should provide more information (IOW, a single log line taken out of context does not help) so we can get an idea of what's going on.

    Torsten Meringer

    Friday, December 1, 2017 7:47 AM
  • We upgraded to 1710 yesterday and applications are not installing during the Task Sequence (TS). No errors during the upgrade. Update Pack Installation Status was successful for every step and the CMUpdate.log looked good too. Packages install fine during TS. Post TS, Applications are installing fine via Software Center and regular deployments. Just getting started with troubleshooting but thought I would add to this thread real quick.  

    Saturday, December 2, 2017 2:38 PM
  • Problem solved. The Configuration Manager Client Package did not distribute properly during the upgrade, so the previous version of the client (1706) was still being installed, which caused applications to fail during the Task Sequence. Once I re-distributed the client package, apps started to install fine.
    Sunday, December 3, 2017 3:26 PM
  • Many thanks for the quick reply. Unfortunately, our problem seems to be of a different nature: I suspect that it has to do with Client Peer Cache or BranchCache. Neither is activated in the client settings, nor is the "Allow clients to share content with other clients" option set. Still, there are entries like those below in the CAS.log, and the content of the affected Applications is obviously not downloading:

    Distribution Point='$/Content_7c2e5265-a753-478c-a0fc-50aafcc11fee', Locality='PEER'              ContentAccess 03.12.2017 13:47:21        4548 (0x11C4)
    Distribution Point='$/Content_7c2e5265-a753-478c-a0fc-50aafcc11fee', Locality='PEER'              ContentAccess 03.12.2017 13:47:21        4548 (0x11C4)
    Distribution Point='$/Content_7c2e5265-a753-478c-a0fc-50aafcc11fee', Locality='PEER'              ContentAccess 03.12.2017 13:47:21        4548 (0x11C4)
    Distribution Point='$/Content_7c2e5265-a753-478c-a0fc-50aafcc11fee', Locality='PEER'              ContentAccess 03.12.2017 13:47:21        4548 (0x11C4)
    Distribution Point='$/Content_7c2e5265-a753-478c-a0fc-50aafcc11fee', Locality='PEER'              ContentAccess 03.12.2017 13:47:21        4548 (0x11C4)


    I'm still puzzled why this is happening for some Applications (which are causing the OSD TS to hang) but not for all.
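    To see which content IDs a client keeps requesting from peers, the CAS.log entries above can be extracted with a small script. This is only an illustrative sketch, not an official tool; the line format is assumed from the log excerpt above.

```python
import re

# Matches CAS.log lines like:
#   Distribution Point='$/Content_<GUID>', Locality='PEER'
# and captures the content GUID (format assumed from the excerpt above).
PEER_RE = re.compile(r"Content_([0-9a-f\-]{36})'.*Locality='PEER'")

def peer_content_ids(log_text: str) -> set:
    """Return the set of content GUIDs the client tried to fetch from a peer."""
    return set(PEER_RE.findall(log_text))

sample = ("Distribution Point='$/Content_7c2e5265-a753-478c-a0fc-50aafcc11fee',"
          " Locality='PEER'")
print(peer_content_ids(sample))  # {'7c2e5265-a753-478c-a0fc-50aafcc11fee'}
```

    The resulting GUIDs can then be fed into the SQL query shared later in this thread to resolve them to Application names.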

    Sunday, December 3, 2017 10:28 PM
  • Hi,

    Yes, from the log it appears that peer cache is enabled.

    Please review the blog below, which explains how to verify from the client side whether peer cache is enabled, and how to troubleshoot the issue in this case:

    It also covers the final question: "I have disabled the same, but the clients are still searching for peer cache. Why?"

    Monday, December 4, 2017 7:56 AM
    Thanks, very helpful! We have finally fixed the issue by creating a copy of each Application that caused the SCCM agent to look for peers and then replacing the original Application in the task sequences. Since the copies have a new content ID and are obviously not cached anywhere, our task sequences work again. Just to mention it again, our client settings look like this:

    Therefore, it seems quite strange to me that machines are looking for peers instead of going directly to a DP.

    Below is a query that was quite helpful to get the Application name from the content ID that appears in the logs:

    SELECT DISTINCT LP.DisplayName, CPS.PkgID, CPS.ContentSubFolder
    FROM dbo.CI_ContentPackages CPS
        INNER JOIN dbo.CIContentPackage CP ON CPS.PkgID = CP.PkgID
        LEFT OUTER JOIN dbo.CI_LocalizedProperties LP ON CP.CI_ID = LP.CI_ID
    WHERE CPS.ContentSubFolder LIKE '%7c2e5265-a753-478c-a0fc-50aafcc11fee%'

    Tuesday, December 5, 2017 10:03 PM
  • Experiencing same issue since upgrade to 1710

    OSD builds are failing at various points with the same errors:

    Execution Manager handler is invalid. Trying to reconnect

    Failed to Reconnect to existing job, hr=0x87d01011

    We have redistributed the client (no change) and created a new package for the client (no change).

    The only resolution we have found: create a package from the previous client (1606) and install it in the OSD TS. This works perfectly each time.

    Wednesday, December 13, 2017 12:05 PM
  • Similar issue, all since 1710 (Fast Ring) upgrade.

    Same error message in smsts.log

    • Execution Manager handler is invalid. Trying to reconnect...
    • Failed to Reconnect to existing job, hr=0x87d01011
    • Install Software failed, hr=0x87d01011
    • Process completed with exit code 2278559761

    When installing a Package (not an App) during a TS. However, it only happens via one DP (we have a remote build center). Two different packages are affected; other packages and apps are unaffected.

    No odd characters in my smsts.log though.
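    As an aside, the "exit code 2278559761" and "hr=0x87d01011" in these logs are the same value: task sequence steps report the HRESULT as an unsigned 32-bit decimal. A quick conversion sketch (any language would do):

```python
def exit_code_to_hresult(code: int) -> str:
    """Render an unsigned 32-bit exit code as the HRESULT hex seen in smsts.log."""
    return f"0x{code & 0xFFFFFFFF:08x}"

print(exit_code_to_hresult(2278559761))  # prints 0x87d01011
```

    The same conversion maps any task sequence exit code back to the HRESULT form used elsewhere in the logs, which makes it easier to match status messages to smsts.log entries.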

    Monday, December 18, 2017 12:01 PM
  • In my case, from the investigation so far, it looks like the SCCM client restarted itself during the package install window (see ccmrestart.log). This happens very shortly after the OS setup and client install step, within about 5 minutes. But why should the SCCM client restart itself?

    Worked around it by adding a 1-minute pause to the TS so that the SCCM client can finish restarting before packages are installed.

    But I'm not happy that this should be required in the first place.

    Monday, December 18, 2017 3:18 PM
  • We are hitting the same thing. Is this just a timing issue, and will the pause do the trick? Does anyone have links on how to set up that pause in the TS?

    Is this something that Microsoft will fix?
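    For reference, the pause described in this thread is typically a Run Command Line step in the task sequence that calls the built-in Windows timeout utility. A minimal configuration sketch (the step name and 60-second duration are illustrative; a later post in this thread uses 181 seconds):

```shell
REM Task sequence "Run Command Line" step (Windows cmd) - illustrative.
REM Gives the newly installed ConfigMgr client time to finish restarting
REM before the package/application install steps run.
cmd.exe /c timeout /t 60 /nobreak
```

    Place the step immediately after "Setup Windows and ConfigMgr" so the pause happens before the first install step.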

    Monday, January 8, 2018 6:26 PM
  • Same thing here. After upgrading to 1710, the OS installs, the client installs, and then there is a reboot immediately afterwards. We have some custom boot images, and I made sure to upgrade them. I still have to test redistributing the client and also to see if the pause will help.

    UPDATE:  So, after some more testing, I restored the 1706 client files (from the Client folder) to a new location and created a new package using the new location as the source files. I didn't create a program with the package. I then distributed the new package and used it in my task sequence, application installs work great once again with OSD task sequences. So, I opened a support case with MS and they told me that it may be an issue with the 1710 upgrade not having worked properly. I guess we'll wait for the next upgrade and see what happens. Either way, I got our OSD working again by reverting back to the 1706 client files.

    • Edited by UnderCoverGuy Tuesday, January 9, 2018 7:44 PM Updated info
    Tuesday, January 9, 2018 12:09 PM
  • To Microsoft: will this be fixed in a future version? How can we alert the product team to this defect?
    Friday, January 12, 2018 6:50 PM
  • I have logged this with Microsoft and pointed them to this thread amongst others, and I have been told it will be logged as a bug.

    I also tried the pause step as indicated here, and so far so good. So the two workarounds are:

    1. Install an older CCM (pre-1710) client in OSD and upgrade the client afterwards

    2. Install the 1710 client and add pause steps after installation - I have added a pause step immediately after the client install and another just before all my packages are installed

    (The pause step is a Run Command Line step:  timeout /t 181 /nobreak)

    • Edited by Pavlos34 Thursday, January 18, 2018 12:34 PM
    Thursday, January 18, 2018 12:31 PM
  • I don't know if this fixes the pause issue or not -

    If a Configuration Manager client restarts during the process of retrying a task sequence policy download, that task sequence does not run automatically after the restart. The task sequence can be manually retried after the restart. 

    Friday, January 19, 2018 1:01 PM
  • Sounds like it might be the problem; time to apply the update (which has only just come out - it looks like it was initially released on Tuesday).

    Symptoms certainly fit!

    Friday, January 19, 2018 1:08 PM
  • Has anyone tried the hotfix yet to see if it resolves the issue? We have reverted to the 1706 client in the meantime, which seems to be working well.
    Monday, January 22, 2018 3:25 PM
  • Applying hotfix tonight.

    Interestingly, all OS deployments are OK using two pause steps before packages are deployed, apart from Windows 7 x64 - this hangs at various points installing packages. Rolled back to the previous client - all OK.

    So let's see if the hotfix resolves this.

    Wednesday, January 31, 2018 11:34 AM
  • No change at this end - hotfix applied, client and server updated to new version 5.0.8577.1108.

    Ran an OS deployment with the new client and no pause steps - it fails again:

    Failed to Reconnect to existing job, hr=0x87d01011    InstallSoftware    01/02/2018 11:46:02    2496 (0x09C0)
    Reconnect Job request failed, hr=0x87d01011    InstallSoftware    01/02/2018 11:46:02    2496 (0x09C0)

    Adding pause steps after the CM installer runs is still OK.

    Thursday, February 1, 2018 11:57 AM
  • Same problems here in task sequence using a package (install vmware tools) right after Setup Windows and ConfigMgr step:

    Install Software failed, hr=0x87d01011
    Process completed with exit code 2278559761

    The fixes explained in KB4057517 don't say much about task sequence problems, right?

    Status Message Viewer errors:
    Failed to Reconnect to existing job, hr=0x87d01011
    Reconnect Job request failed, hr=0x87d01011
    Install Software failed, hr=0x87d01011.

    This was working just fine before upgrade or changes :)

    • Edited by Ariendg Tuesday, February 13, 2018 5:11 PM
    Tuesday, February 13, 2018 12:52 PM
  • Microsoft are aware and this will be fixed in the 1802 release.
    Thursday, February 15, 2018 4:49 AM
  • MS have informed me of the same - it is an "internal bug" and is to be resolved in 1802.

    Wednesday, February 21, 2018 9:55 AM
  • And 1802 has just been released!

    Anyone going to brave the Fast Ring, and report back if it works or not?

    Friday, March 23, 2018 10:40 AM
  • Anyone make the jump to 1802 and test yet?
    Monday, April 9, 2018 7:30 PM
  • We are going to do the upgrade tomorrow, as we have the exact same issue.

    I will let you know how it goes.

    Monday, April 16, 2018 1:23 PM
  • Any news from any upgrades?
    Thursday, April 26, 2018 9:51 AM
  • Hi there, yes - ran the fast ring script and updated to 1802. No issues with the task sequences now.

    I see you're NHS England - we're NHS too :-)

    Thursday, April 26, 2018 9:53 AM
  • Awesome. And before, I take it, task sequences just sat installing a random package forever and were only OK with a pause step etc. or the old client - and now, with the new client and no pause step, packages fly through?
    Thursday, April 26, 2018 1:06 PM
  • Don't need the pause - I have removed that from our task sequence.

    Thursday, April 26, 2018 1:09 PM
  • Have noted one issue when using it in anger - multiple imaging issues. I can build one machine at a time fine, but if I start other machines I get the following error as soon as I try to start the task sequence:

    0x80004005

    Friday, April 27, 2018 4:47 PM
  • Just applied 1802 - 4 machines have kicked off OSD, all OK, pauses removed.

    Monday, April 30, 2018 2:49 PM
  • Were you able to keep your pauses removed or did you see the issue again?
    Wednesday, August 15, 2018 4:25 PM