Leaving app sharing because re-invite failed

  • Question

  • Hi,

    I am looking for advice on how to troubleshoot a particular problem where an internal user shares his screen with an external user over the Edge server. The connection usually gets established just fine and the callee can see the screen of the caller. In roughly one out of two attempts, however, the screen sharing is cancelled after a few seconds. In this case, the diagnostic reports show the following info:

    52049; reason="Leaving app sharing because re-invite failed"; UserType="Callee"; MediaType="applicationsharing-video"; MediaChanBlob="NetworkErr=no error,ErrTime=0,RTPSeq=0,SeqDelta=0,RTPTime=0,RTCPTime=3726562646047,TransptRecvErr=0x0,RecvErrTime=3726562646035,TransptSendErr=0x0,SendErrTime=0,InterfacesStall=0x0,InterfacesConnCheck=0x0,MediaTimeout=0,RtcpByeSent=0,RtcpByeRcvd=0,BlobVer=1"; BaseAddress="192.168.10.118:5384"; LocalAddress="145.253.90.26:59569"; LocalSite="192.168.10.118:18319"; RemoteAddress="80.187.110.162:6897"; RemoteSite="80.187.110.162:6817"; MediaEpBlob="ICEWarn=0x0,ICEWarnEx=0x0,LocalMR=145.253.90.26:59569,RemoteMR=145.253.90.26:51304,PortRange=1025:65000,LocalMRTCPPort=50763,RemoteMRTCPPort=51304,LocalLocation=2,RemoteLocation=1,FederationType=0,StunVer=2,CsntRqOut=0,CsntRqIn=0,CsntRspOut=0,CsntRspIn=0,Interfaces=0x2,BaseInterface=0x2,Protocol=0,LocalInterface=0x2,LocalAddrType=2,RemoteAddrType=5,IceRole=1,RtpRtcpMux=1,AllocationTimeInMs=204,FirstHopRTTInMs=1,TransportBytesSent=2327,TransportPktsSent=79,IceConnCheckStatus=4,PrelimConnChecksSucceeded=0,IceInitTS=3726562629306,ContactSrvMs=214,AllocFinMs=261,FinalAnsRcvMs=4016,ConnChksStartTimeMs=4019,FirstPathMs=4431,UseCndChksStartTimeMs=9232,ReinviteSntMs=9336,NetworkID=camelot-idpro.d,BlobGenTime=3726562647025,MediaDllVersion=6.0.8968.338,BlobVer=1"; MediaMgrBlob="MrDnsE=avedge.camelot-group.com,MrResE=0,MrErrE=11001,MrBgnE=37265626292936472,MrEndE=37265626292949704,MrDnsI=morgaine.camelot-idpro.de,MrResI=1,MrErrI=0,MrBgnI=37265626292908472,MrEndI=37265626292915232,MrDnsCacheReadAttempt=0,BlobVer=1"; LyncAppSharingDebug=""

    Sometimes, it just keeps working. I also ran the ApplicationSharing scenario with the Logging Tool to collect additional details. I couldn't find anything helpful there, but I might just not be piecing it together correctly. This was working previously, and so far we haven't been able to identify any change in Skype or the network that might be causing this behavior. We have Skype for Business Server 2015 with a single Front End and a single Edge server, both in the same internal subnet. There is no firewall on the internal Edge interface, and the internal clients, Front End and Edge server should be able to communicate without restrictions on the network. From what I can tell, the ports to and from the Edge server's external interface are open as required.
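
    For anyone wanting to reproduce the log collection, the equivalent CLS commands are roughly as follows (a sketch only; I actually used the Logging Tool GUI, and the pool FQDN below is a placeholder for our pool):

    # Sketch: collect CLS logs for the ApplicationSharing scenario
    # (Skype for Business Server Management Shell; "fe-pool.example.com" is a placeholder)
    Start-CsClsLogging -Scenario ApplicationSharing -Pools "fe-pool.example.com"
    # ... reproduce the failing screen share ...
    Stop-CsClsLogging -Scenario ApplicationSharing -Pools "fe-pool.example.com"
    # Flush the matching log entries to a file that can be opened in Snooper
    Search-CsClsLogging -Pools "fe-pool.example.com" -OutputFilePath "C:\Temp\AppSharing.log"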

    Any hint on where to look is much appreciated. I couldn't even find a post or page on the web from someone who has experienced the same diagnostic ID.

    Friday, February 2, 2018 1:26 PM

All replies

  • Same issue here: ms-client-diagnostics: 52049; reason="Leaving app sharing because re-invite failed"

    I tried installing the Dec 8, 2015 security fix, but it is already installed on my test machine and the problem persists; in my case this can only be reproduced in an internal vs. external scenario (same org and federated).

    https://support.microsoft.com/en-us/help/3114303/app-sharing-sessions-fail-in-skype-for-business-2016-when-the-transpor

    I will test disabling VBSS and let you know what happens.

    https://support.microsoft.com/en-us/help/3150900/black-or-frozen-screen-when-you-share-your-screen-in-skype-for-busines


    Friday, February 2, 2018 4:51 PM
  • Hi BStahl,

     

    When internal users share their screen with external users, which kind of external user is involved: a federated user, or a user of your own organization connecting from an external network?

    Based on your diagnostic reports, I have seen this issue before; in my case it was caused by a network loop. Please contact your network team and check for a network loop.


    Best Regards,
    Leon Lu


    Please remember to mark the replies as answers if they helped. If you have feedback for TechNet Subscriber Support, contact tnsf@microsoft.com.



    Monday, February 5, 2018 8:34 AM
    We are on Skype client version 16.0.8431.2110, which comes with Office 1708 in the Semi-Annual Channel (Targeted). I will check whether updating to the current monthly build makes a difference. I will also try disabling VBSS to see if I can narrow down this issue; however, I can't find a reason why VBSS shouldn't work.
    Monday, February 5, 2018 9:18 AM
    We haven't tested with federated partners because we don't have any. So the issue is with same-org users on an external network. I personally checked with the network team, looking for firewall rules, NAT, routing, etc., and we did not find anything unusual. So if you have something in particular to look for, I'd be happy to check again. Also, in about 1 out of 2 tries, or maybe 1 out of 3, it just works fine if I repeat the tests with the exact same users, computers, etc.
    Monday, February 5, 2018 9:22 AM
  • Hi BStahl,

     

    Have you changed the update channel and tested again?

    If you want to disable VBSS, you could refer to this article:

    http://blog.schertz.name/2016/06/skype-for-business-vbss-update/

     

    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.


    Best Regards,
    Leon Lu


    Please remember to mark the replies as answers if they helped. If you have feedback for TechNet Subscriber Support, contact tnsf@microsoft.com.



    Wednesday, February 7, 2018 7:22 AM
  • Hi All,

    We also have this issue. The re-INVITE appears to be the setting up of the controls for the screen share, i.e. displaying the "Give Control" button at the top of the screen share. As you state above, the screen share initially establishes and you get around 8 seconds of sharing; when the re-INVITE fails, the session tears down. I have made sure the FE and Edge are fully patched and the client is running the latest updates, but this still happens intermittently. I will also try disabling VBSS, and also enforcing VBSS only, and see what happens. Any further progress would be appreciated.

    Best Regards

    Rob

    Wednesday, February 7, 2018 1:48 PM
    Are there any updates on this issue? If the reply is helpful to you, please mark it as an answer; it will help others who have a similar issue.


    Best Regards,
    Leon Lu


    Please remember to mark the replies as answers if they helped. If you have feedback for TechNet Subscriber Support, contact tnsf@microsoft.com.



    Friday, February 9, 2018 9:31 AM
    Further to my point above, I have reviewed the monitoring reports and noted that the re-INVITE that fails always happens over UDP. Sessions over TCP always work. I am now focusing my attention on the customer's perimeter firewall in case something is closing down these UDP sessions. If I find the culprit, I will let you know.

    Regards

    Rob

    Friday, February 9, 2018 9:35 AM
    Thanks for keeping me updated. I also noticed that the session always ends before the Give Control/Request Control options appear. We did packet traces on the firewalls and were not able to identify any dropped or rejected packets there. I still think it's either a bug or a network and/or Skype topology configuration issue, but no luck yet. I haven't had the chance to troubleshoot this any further due to some other high-priority topics.

    Regards

    Björn

    Friday, February 9, 2018 10:32 AM
  • Hi All,

    OK, the plot thickens. Last week I noticed that the failing sessions were always over UDP. However, today we found that the problem follows the user, i.e. regardless of the device used, this specific user always uses UDP for screen shares. Other users always use TCP and are not affected. After removing the affected user from Skype and adding them back in, the problem is resolved, and this user's screen shares now happen over TCP!

    I am now convinced that something in the back-end database has been set for this user that affected how the app-sharing invites function. Further testing to follow.

    Monday, February 12, 2018 6:13 PM
    I tried disabling VBSS and AppSharing (on the server via the media configuration and the conferencing policy, on the client using regedit), restarted the server and validated replication; the behavior is different because app sharing does not start at all :( (a rough sketch of the server-side commands is below)

    I re-enabled VBSS and app sharing starts again, but drops after 9-10 seconds. The issue persists.

    http://blog.schertz.name/2016/06/skype-for-business-vbss-update/
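
    For reference, a rough sketch of the server-side part (run from the Skype for Business Server Management Shell); whether the EnableVideoBasedSharing parameter is available depends on your CU level, and the Identity values are just examples:

    # Sketch: disable VBSS and application/desktop sharing server-side
    Set-CsMediaConfiguration -Identity Global -EnableVideoBasedSharing $false
    # Disable app/desktop sharing in the conferencing policy (set back to Desktop to re-enable)
    Set-CsConferencingPolicy -Identity Global -EnableAppDesktopSharing None
    # Push the change to all servers and check that replication is up to date
    Invoke-CsManagementStoreReplication
    Get-CsManagementStoreReplicationStatus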

    Tuesday, February 13, 2018 2:28 AM
    Yes, disabling VBSS makes it worse, meaning that screen sharing doesn't even start most of the time. So I think VBSS isn't the issue here, which takes me to RDP, which obviously isn't working properly.
    Tuesday, February 13, 2018 10:02 AM
    Unfortunately, I can't relate to that. In my test scenario, UDP isn't used. I am currently staying with VBSS disabled on my test clients, because VBSS seems to be working properly; that way I can focus on the RDP connection, which still doesn't work most of the time. But I have yet to identify the issue. Going through the CLS and client logs as well as Wireshark again, I still can't seem to find anything, though I am no expert here. However, now that VBSS is disabled, I don't get the re-invite error as initially reported, but rather a more common one:

    ms-client-diagnostics: 23; reason="Call failed to establish due to a media connectivity failure when one endpoint is internal and the other is remote";UserType="Callee";MediaType="applicationsharing";
    I am still looking into the "connectivity failure" here and will probably find someone to go through the logs with me and see whether they spot something.
    Tuesday, February 13, 2018 10:15 AM
  • Hi BStahl,

     

    Based on your error, a common cause of edge calls failing is that clients on the internal network are not completely reachable from the internal network interface of the Edge server. You could refer to this link:

    http://www.ucprimer.com/tech-blog/ms-client-diagnostics-23-reasoncall-failed-to-establish-due-to-a-media-connectivity-failure-when-one-endpoint-is-internal-and-the-other-is-remotecalleemediadebugaudioicewarn0x80012b
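
    As a quick first check, you could verify basic reachability from an internal client to the internal interface of the Edge server, for example (sketch only; "edge-internal.contoso.com" is a placeholder, and the ports listed are the commonly used ones):

    # Sketch: test TCP reachability from an internal client to the Edge internal interface
    Test-NetConnection -ComputerName "edge-internal.contoso.com" -Port 443    # media relay over TCP
    Test-NetConnection -ComputerName "edge-internal.contoso.com" -Port 5062   # A/V authentication
    # STUN/TURN over UDP 3478 cannot be checked with Test-NetConnection;
    # use a packet capture or the MSTurnPing resource kit tool for that.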


    Best Regards,
    Leon Lu


    Please remember to mark the replies as answers if they helped. If you have feedback for TechNet Subscriber Support, contact tnsf@microsoft.com.



    Thursday, February 15, 2018 9:03 AM
    Are there any updates on this issue? If the reply is helpful to you, please mark it as an answer; it will help others who have a similar issue.

    Best Regards,
    Leon Lu


    Please remember to mark the replies as answers if they helped. If you have feedback for TechNet Subscriber Support, contact tnsf@microsoft.com.



    Tuesday, February 27, 2018 1:38 AM
  • Hi,

    We're facing the same issue, so I was wondering whether your issue has been resolved yet?

    Any help would be appreciated at this stage.

    Thanks

    Wednesday, March 28, 2018 6:15 PM
  • Hello Bstahl,

    Is this error in the BYE? Is the internal client or the external client sending the BYE in the user logs?

    I would assume this issue only happens when one user is internal and the other is external, whereas desktop sharing between two internal or two external users works fine.

    1. Please verify that you don't have TLS inspection enabled on the external-facing application-layer firewall or Checkpoint firewall (I have very often seen TLS inspection cause such an issue, depending on the route selected for media).

    2. Since it sometimes works, it would be interesting to know what path the media takes when it works. Do you have two different Edge pools, or a pool with two servers? I have seen cases where it fails when the media route is established via a specific Edge server. In other cases, two Edge servers were not able to talk to each other externally.

    Logs:

    Client machine: fresh SIP logs + Netmon

    Edge: SIP trace + Netmon (from both servers if there is more than one)

    External client: fresh SIP logs + Netmon

    Friday, March 30, 2018 6:01 PM
  • Hi,

    I have seen this blog post before; however, it is not applicable to our environment.

    Thanks,

    Björn

    Tuesday, April 3, 2018 2:32 PM
  • I am just back from parental leave and the issue still persists.

    Regarding 1) Yes, it only happens between one internal and one external user. TLS inspection is not enabled on any firewall; we even changed the UTM appliance (different vendor) in the meantime, and it did not change anything regarding this issue.

    Regarding 2) Single edge server only. Always the same route.

    I will collect the logs as soon as possible and share them with you.

    Regards

    Björn

    Tuesday, April 3, 2018 2:39 PM
  • Hi,

    We managed to resolve this issue; however, the underlying cause is still unknown.

    The Access Edge server was a VM running on Hyper-V. The virtual network adapters were removed, re-added and reconfigured exactly as before. This resolved the issue!

    Tuesday, April 3, 2018 2:42 PM
    A good guess would be that the persistent routes of the Edge (internal interface) are not reachable from the client when it tries to authenticate against the media relay (MRAS) service; sometimes the traffic is routable and sometimes it is not, so it looks very much like a network issue. You can use MSTurnPing from the ResKit to verify the MRAS connection to the Edge server.
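
    A rough sketch of how the persistent routes on the Edge could be checked and, if needed, re-added (the subnet, gateway and interface alias below are examples only and must match your topology):

    # Sketch: list the static/persistent IPv4 routes on the Edge server
    Get-NetRoute -AddressFamily IPv4 |
        Where-Object { $_.NextHop -ne "0.0.0.0" } |
        Format-Table DestinationPrefix, NextHop, InterfaceAlias
    # Example: add a persistent route for an internal client subnet via the gateway
    # of the internal interface
    New-NetRoute -DestinationPrefix "192.168.0.0/16" -InterfaceAlias "Internal" -NextHop "192.168.10.1"
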
    Tuesday, April 3, 2018 8:43 PM
    I saw a similar issue, and when we filtered the Netmon traffic from the client to the Edge server, we noticed that there were STUN requests but no responses. The internal firewall was blocking some traffic, causing application sharing to work for 8-9 seconds (in VBSS mode, during early media) and then fail.
    Wednesday, April 11, 2018 4:56 AM
  • Hi Sri,

    Your description is exactly what we experienced. The strange thing was that there was no firewall in place between the client and the internal Edge, except for the Edge's Windows Firewall, which we disabled as part of the testing process. As previously mentioned, deleting and recreating the virtual network adapter resolved the issue, yet what was actually wrong is still a mystery. What also doesn't make sense is that A/V always worked. If you have any specific info on your case and what you saw in the Netmon logs, I'd be fascinated to see it.

    Wednesday, April 11, 2018 8:14 AM
    I apologize for the late reply on this matter, but here's my update so far: We updated our Skype infrastructure to the March patch level, including all Windows updates up to 2018-04. The servers were all rebooted and completely shut down in between. I replaced the virtual network interfaces of the Edge server, reconfigured them from scratch, and removed and re-added the persistent routes. I also turned off the internal Windows Firewall of the Edge server. The funny thing is, I am now unable to reproduce this issue the way I did previously, simply using an internal and an external client and starting application sharing. I have done this about 30 times now with different users and it did not happen a single time. HOWEVER, I now have multiple reports where the exact same issue happens during Skype conferences with internal and external participants. So the problem has shifted from peer-to-peer (over the Edge) to conferences. It's not as easily reproduced as before, so I am now struggling to collect the logs from clients and servers. I'm trying to get some today, but no luck so far.

    Thursday, May 3, 2018 8:57 AM
  • So finally having the time to look deeper into this, I noticed the following:

    - The Edge server will send STUN on every interface. So besides having static routes for the internal networks, the Edge server will also try to contact internal endpoints and servers through the external interface. These packets were recently being dropped by our firewalls. We set up a rule that rejects (rather than silently drops) any packets coming from the external interfaces and destined for internal networks, so the Edge server notices immediately.

    - Internal endpoints and servers were able to contact the Edge server's external IPs rather than using only the intended internal route and interface. As above, we set up a rule that rejects packets heading from the inside towards the Edge server's external interface.

    - We noticed a lot of packets from the Edge server's external A/V address going to ports outside the 50000-59999 TCP/UDP port range. While this seems to be a non-issue as far as I know, we temporarily set up a rule allowing any outgoing port for the A/V address.

    After implementing all of the changes above, and judging by the S4B monitoring reports and the feedback we received, one of these changes, or a combination of them, resolved our application sharing problem. We will monitor this closely until next week, but I am confident that these are the permanent fixes our environment required.

    Still, does anyone know why the Edge server tries to connect on so many different ports outside the designated range? I think there was a best practice for Office Communications Server 2007 and Lync 2010, but this still seems to be a thing.
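
    For reference, this is roughly how I cross-checked the configured media port ranges against what we saw on the firewall (sketch only; exact property names can differ slightly between versions):

    # Sketch: show server-side port configuration for the Edge and conferencing servers
    Get-CsService -EdgeServer | Format-List Identity, *Port*
    Get-CsService -ConferencingServer | Format-List Identity, *AppSharing*
    # Client-side port ranges only apply if ClientMediaPortRangeEnabled is $true
    Get-CsConferencingConfiguration | Format-List ClientMediaPortRangeEnabled, ClientAppSharingPort, ClientAppSharingPortRange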

    Friday, May 4, 2018 1:37 PM
    We have just resolved this problem on a customer's infrastructure. The resolution fits the common theme here that there was a firewall problem; I think the cause can vary, but fundamentally the issue is caused by some form of network-related problem.

    In our case, external clients talking to internal clients via the Edge service saw a content share for 5-10 seconds and then it dropped (the external client sharing content to the internal client); after a random number of retries it would work. Sometimes it would also work the first time. After a lot of Wireshark and UCCAPI trace analysis, we found that the external client ended the call with a BYE stating that the sharing session could not be set up. The assumption was that this was caused by ICE/STUN failing, but what was of course confusing was why it works for 10 seconds and also sometimes works fully.

    When we looked at the ICE/STUN traffic flows, we eventually spotted that the traffic sent by the Edge used the wrong source NAT rule, so that nearly all traffic from the A/V Edge interface (we used three separate IP addresses for the Edge interfaces in this case) was NATted to the Access Edge IP address and not to the correct A/V Edge IP address. Some traffic did get NATted to the A/V Edge address, which was confusing, but we think that was because it was in response to an inbound stream from the external client.

    The way we found this was to look at the packet capture from the firewall, as we couldn't see the traffic flow in Wireshark; the incorrectly NATted traffic from the Edge to the external client was most probably blocked by the external client's firewall.

    So, while the full traffic flow is complicated to understand, the actual fix was to ensure that all traffic from the Edge used the correct respective 1:1 NAT rule on the firewall.

    The optimal troubleshooting sequence for this issue would have been:

    1 - Identify who is closing the call down (the internal client or the external client)

    2 - Capture Snooper logs on both clients

    3 - Establish the reason the call is closed via the UCCAPI log on the client, looking at the SIP trace in Snooper and at the messages prior to the terminating SIP BYE to see whether they provide a pointer (in this case the issue clearly pointed to SDP STUN/TURN issues)

    4 - Capture Snooper and Wireshark traces on the external client and the external Edge

    5 - Analyse the Wireshark traffic with a STUN filter and determine whether the packets sent are received at each end (note that some packets are expected to be dropped and some are expected to get through, so this is the tricky part and requires thorough analysis; a sample capture command is sketched after this list)

    6 - If there appear to be missing packets, capture a firewall packet trace to see what is happening with the traffic flow

    7 - In our case we could see that the packets sent over the Internet had the source IP of the Access Edge and not the A/V Edge
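
    For step 5, something along these lines can narrow the capture down to STUN and media traffic (sketch only; the tshark path, interface name and port range are examples and assume Wireshark is installed on the capturing machine):

    # Sketch: capture likely STUN/media traffic on the external Edge or the external client
    & "C:\Program Files\Wireshark\tshark.exe" -i "Ethernet" `
        -f "udp port 3478 or portrange 50000-59999" `
        -w "C:\Temp\edge-stun.pcapng"
    # Then open the capture in Wireshark and apply the display filter "stun" to
    # compare the requests and responses seen on each side.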

    The problem is extremely difficult to troubleshoot until you know what the actual error is, as STUN is horrible to debug, but the devil always seems to be in the detail with these issues!

    We also checked the firewall to ensure that UPnP/NAT-PMP was disabled, that SIP and TLS inspection were disabled, and that routing from the internal client networks to the public Edge DMZ IPs was not possible (note that the actual public IPs should still be reachable).

    Jed


    • Edited by Jed Ellerby Friday, November 30, 2018 9:00 AM
    Friday, November 30, 2018 8:58 AM
  • Hi Jed,

    Is this issue fixed?

    We are also facing the same issue.

    Regards,

    AJ

    Thursday, June 20, 2019 10:56 AM
    I was able to fix this using the steps outlined in my posts above. However, as others have pointed out, the cause can be fundamentally different between environments. You should be able to get an idea of where to look from this thread.
    Friday, June 21, 2019 9:20 AM