DirectAccess - Upload OK, Download very slow and unstable.

    General discussion

  • Hi guys,

    just a short introduction to my environment:

    - DirectAccess on a 2012 R2 Server (virtual, 8GB memory, 2 vCPU)
    - Windows 8.1 Clients
    - Using IP-HTTPS
    - Port forwarding for 443 is done on our Juniper router, and there is no additional firewall in between.
    - Internet connection at my company: 1 Gbit down and up
    - Internet connection at home: 200 Mbit down and 20 Mbit up

    DirectAccess itself is working without any issues, BUT if I am working from outside via DirectAccess and copy a file from a file server to my local drive, I reach a maximum of ~300-400 KB/s. I can also see that the transfer is very unstable, stopping for a few seconds and then continuing, etc. It takes about 10 minutes to copy a 50 MB test file.

    If I do it the other way round and copy a file from my local client to the file server, I get my full upload speed (~2 MB/s).

    I am wondering why only the download (which will be the most important direction for our employees) is so slow and unstable while the upload works fine. What I have already tested:

    - Tried it from different locations with different internet connections (mobile, several providers, etc.) and I always see the same issue.

    - Tried it with different clients (Win 8.1, Win 10)

    - If I use a normal VPN connection to my company, I reach full speed for up- and download, so it is somehow related to DirectAccess.

    - I know that IP-HTTPS does not perform as well as 6to4 or Teredo, but this doesn't explain why the upload works fine while only the download is so slow.

    - I built a new DirectAccess server in another Active Directory for testing purposes and I am facing the same issue.

    - I already disabled 6to4, Teredo and ISATAP on my client for testing purposes, but this brought no performance increase (see the commands below).
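
    In case anyone wants to reproduce that last test: disabling the transition technologies on a client is typically done with netsh from an elevated PowerShell prompt. A minimal sketch (the last command assumes an IP-HTTPS deployment):

        # Disable the IPv6 transition technologies for testing (elevated prompt)
        netsh interface 6to4 set state disabled
        netsh interface teredo set state disabled
        netsh interface isatap set state disabled

        # Verify that the IP-HTTPS tunnel is the interface still in use
        netsh interface httpstunnel show interfaces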

    Any idea how I can move forward and fix this issue? Has anyone seen this before?

    Thanks a lot.

    Stefan

    Thursday, March 10, 2016 12:47 PM

All replies

  • Hi Stefan,

    Thanks for posting here.

    - Tried it from different locations with different internet connections (mobile, several providers, etc.) and I always see the same issue.

    - Tried it with different clients (Win 8.1, Win 10)

    - If I use a normal VPN connection to my company, I reach full speed for up- and download, so it is somehow related to DirectAccess.

    - I know that IP-HTTPS does not perform as well as 6to4 or Teredo, but this doesn't explain why the upload works fine while only the download is so slow.

    - I built a new DirectAccess server in another Active Directory for testing purposes and I am facing the same issue.

    - I already disabled 6to4, Teredo and ISATAP on my client for testing purposes, but this brought no performance increase.

    Using IP-HTTPS for DirectAccess connectivity has higher overhead than Teredo, so a client connecting over IP-HTTPS will see lower performance.
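
    To confirm which transition technology a client is actually using, the NetworkTransition cmdlets on Windows 8.1/10 are a quick check (a sketch, run in PowerShell on the client):

        # State of the IP-HTTPS tunnel (active/inactive, last error code)
        Get-NetIPHttpsState

        # Current Teredo state ("qualified" means Teredo is up)
        Get-NetTeredoState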

    For more information, please refer to the link below:

    https://technet.microsoft.com/en-us/library/ee844161(v=ws.10).aspx

    Besides, please check if the following link is helpful:

    https://support.microsoft.com/en-us/kb/2883952

    In addition:

    I suggest you open a case with Microsoft; a more in-depth investigation can be done there, so that you get a more satisfying explanation and solution for this issue.

    Here is the link:
    https://support.microsoft.com/en-us/gp/support-options-for-business

    Best regards,


    Andy_Pan

    Friday, March 11, 2016 6:14 AM
    Moderator
  • I've got similar problems.  My home connection is 100 Mbit down, 10 Mbit up, and at work it's 100 Mbit up and down.  The best speed I can get is 355 KB/s.  And it's extremely consistent, almost a nice flat line.  Server 2012 R2 and Windows 8.1.
    Friday, March 25, 2016 3:11 AM
  • I've got similar problems.  My home connection is 100 Mbit down, 10 Mbit up, and at work it's 100 Mbit up and down.  The best speed I can get is 355 KB/s.  And it's extremely consistent, almost a nice flat line.  Server 2012 R2 and Windows 8.1.

    We have the same setup and problem - the maximum is 355 KB/s.
    Tuesday, May 24, 2016 10:18 AM
  • Any follow-up on this?  I agree about the consistency of the bandwidth - it almost feels artificially capped, like a rate limit.   We have very fast internet connections, low CPU/memory usage, 2012 R2 & Windows 10 - IP-HTTPS connections are painfully slow.

    Any solution would be appreciated.

    Thursday, June 09, 2016 2:55 AM
  • Hi guys,

    which hypervisor is your DirectAccess server running on? We're using Xen, and this was the problem! We changed the DirectAccess server from a Xen host to a Hyper-V host and bandwidth increased immediately. We were already using the latest Xen Tools, but that didn't help. If you're using Xen, try changing to ESX, Hyper-V or a physical box for a test. Might be the solution.


    Thursday, June 09, 2016 5:39 AM
  • We're running on ESX.  However - not sure which NIC driver - will try a few and see.
    Saturday, June 11, 2016 3:58 AM
  • Has anyone found anything?

    Same thing here.

    DA Server 2012R2
    File Server 2012R2
    Windows 10 Workstation
    Professional internet at the office, 1 Gbit up/down
    Internet connection at home, 1 Gbit down / 200 Mbit up

    DA connection : Teredo

    Downloading a file from the file server: (screenshot not preserved)

    Uploading the file to the file server: (screenshot not preserved)

    Friday, August 05, 2016 9:27 PM
  • For information, I have just tested with a wired connection (the previous post was over Wi-Fi)... it's worse:

    Download: 1 MB/s

    Upload: 10 MB/s

    Monday, August 08, 2016 7:52 PM
  • Are there any updates for this issue?

    Marc

    Wednesday, September 21, 2016 8:07 AM
  • Hi, I've been doing some testing around this.

    The slow speeds seemed to affect all servers that did not have the IPv6 checkbox enabled in their NIC settings.  Checking this box resolved the slow speed problem for us.

    To clear things up: this is not just the DirectAccess (RAS 2012) server (where it must already be enabled, or DA would not work!), but also any dependent servers (e.g. file servers).
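
    A quick way to audit and fix this from PowerShell (a sketch; the adapter name is an example):

        # List adapters where the IPv6 binding (the "IPv6 checkbox") is disabled
        Get-NetAdapterBinding -ComponentID ms_tcpip6 | Where-Object { -not $_.Enabled }

        # Re-enable the binding on a given adapter
        Enable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip6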

    Wednesday, September 21, 2016 8:37 AM
  • Hi, I've been doing some testing around this.

    The slow speeds seemed to affect all servers that did not have the IPv6 checkbox enabled in their NIC settings.  Checking this box resolved the slow speed problem for us.

    To clear things up: this is not just the DirectAccess (RAS 2012) server (where it must already be enabled, or DA would not work!), but also any dependent servers (e.g. file servers).


    Hi James, thanks for this. I hope this can solve our issues with DA. Do you remember if you had to reboot the servers first for the change to take effect?
    Wednesday, September 21, 2016 11:06 AM
  • Hello,

    This may not be applicable to all situations here, but I had similar problems on a VPN link mainly used for off-site backup.  Speed was limited to 177 Kbit/s or 355 Kbit/s and erratic, sometimes dropping the line during file transfers of a few MB.  Backups failed most of the time.

    Speed through the VPN was asymmetric although the cable network was the same on both sides (100 Mbit down / 10 Mbit up).  One way seemed full speed (10 Mbit/s); the other way was limited.

    The configuration had been working fine for a few years, but since a modem upgrade, problems arose irregularly.  The ISP had been working on the cable network, but that did not seem to be the problem.

    Surfing was OK and speedtest.net showed the full speed. The ISP (Telenet) remotely tested the lines/modems and did not find a problem.

    Changed routers (Cisco RV180W and RV130W): no help.
    Made a setup with the two routers on one switch so that they had a direct WAN/WAN connection and got very high and stable throughput (40+ Mbit/s), so those were definitely not the cause.

    Back to the ISP.  It turns out that switching off both cable modems for a few minutes and rebooting them solved the problem temporarily in my case. The speed through the tunnel was then 10 Mbit/s, symmetric and stable, but decayed again after some hours/days.  A further investigation by the ISP showed that one modem had a problem with sending high volumes and degraded to a slower state, which it only recovered from by rebooting. This problem was solved by a software update.  Cable modem: CV7160E.

    Hope this helps someone.



    • Edited by Sugil2 Tuesday, October 18, 2016 7:19 PM
    Thursday, October 13, 2016 9:38 PM
  • Benoit, just to confirm that we are facing the same problem; after hours of investigation I am still puzzled why the download is so slow and have not been able to pinpoint the problem.

    One thing I did notice is that the TCP window for the upload grows way above 64 KB, in my case all the way to 8 MB, behaving similarly to your upload (increasing throughput over time). On the download side, the TCP window would just not budge from the initial 128 KB (all this with TCP window scaling enabled).

    What further puzzles me in my case is that the TCP window is 128 KB but I only get 5-5.5 Mbit/s over 90 ms latency, which comes down to EXACTLY a 64 KB TCP window. So it could be some other factor limiting it to ~6 Mbit/s (and not the roughly 12 Mbit/s that a 128 KB TCP window would give).

    I can see that your download is around 720 KB/s which, if you are dealing with the same 64 KB of data on the wire, comes down to a latency of some 40-50 ms. If your latency is in that range, you have exactly the same problem!
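
    For anyone who wants to check their own numbers: the bandwidth-delay arithmetic above in a few lines of PowerShell (a sketch; adjust the RTT to your own measurement):

        # Max throughput achievable with a fixed 64 KB window at a given RTT
        $window = 64KB        # bytes in flight without effective window scaling
        $rtt    = 0.090       # round-trip time in seconds (90 ms here)
        $mbits  = ($window / $rtt) * 8 / 1MB
        '{0:N1} Mbit/s' -f $mbits   # ~5.6 Mbit/s - matching the observed 5-6 Mbit/s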

    By the way, my uploads go up to 200 Mbit/s, regularly around 100 Mbit/s, and I can't get more than 6 Mbit/s down.

    Worth noting that all of this is in Azure - client, server, etc. Initially I thought something was wrong with my office internet connectivity, but even in Azure, same results.

    Do post something if you resolve this issue. I will as well.

    Thank you.


    Tuesday, October 18, 2016 3:49 AM
  • Adding myself onto this issue as well. We have 4 DA boxes behind a Citrix NetScaler load balancer. On our internal Wi-Fi we're getting a good 70 Mbit/s, but external connections coming in over the internet are getting inconsistent results, down to around 500 Kbit/s at the lowest.

    Environment

    • Server 2012 R2
    • ESXi virtual environment
    • Citrix NetScaler load balancer
    • IP-HTTPS adapter only (6to4 and Teredo disabled)
    • Windows 7 and Windows 10 clients
    • Split-tunnel setup (but using a managed tunnel implementation)

    No solution as yet. I have been messing with MTU sizes, but all that seems to do is break the resolution of sites in browsers (NFS shares still come back down the tunnel to the office, but web pages won't load at all... still working on understanding that one). Setting the MTU on the IP-HTTPS adapter and 6to4 back to 1280 brings everything back to life; the commands are sketched below.
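
    For reference, inspecting and setting the tunnel-adapter MTU is typically done like this (a sketch; the interface name varies per machine, so check the show output first):

        # List subinterfaces and their current MTUs
        netsh interface ipv6 show subinterfaces

        # Set the IP-HTTPS adapter back to the default 1280 (elevated prompt)
        netsh interface ipv6 set subinterface "iphttpsinterface" mtu=1280 store=persistent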

    Tuesday, October 25, 2016 10:26 PM
  • It is definitely related to DirectAccess itself.

    Further to my above post, 

    I tried downloading using SSTP configured on the same server, and SSTP works just fine; download and upload can both fill my 50 Mbit/s (up/down) link. With DirectAccess the upload is fine, but download to the DirectAccess client is limited to 5-6 Mbit/s.

    Of course, this 5-6 Mbit/s limitation is directly linked to latency. In my case it is around 90 ms, so if your latency to the server is lower you may get more throughput (12 Mbit/s for 40 ms latency, for example).

    Again, it comes down to 64 KB of data on the wire during download, as if window scaling simply doesn't work.
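
    One thing worth ruling out on both client and server is the OS-level receive-window auto-tuning, which is what allows the window to scale past 64 KB in the first place (a sketch, elevated prompt):

        # Check whether receive-window auto-tuning is enabled
        netsh interface tcp show global

        # If "Receive Window Auto-Tuning Level" reports "disabled", re-enable it
        netsh interface tcp set global autotuninglevel=normal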

    Thursday, November 03, 2016 2:52 AM
  • Sergej,

    Could you provide a little more detail about what you did to test this? I'm not too familiar with SSTP, but I would be interested in testing this in our environment to see whether we get the same results.


    Thursday, November 03, 2016 7:05 AM
  • Hi Sergej, FluxboxUK,

    when I opened this thread in March this year, I hadn't completed all my testing yet. Meanwhile I can confirm that I have the same issue you guys are reporting, plus one additional issue. Let me summarize:

    Situation 1: If I use a DirectAccess VM on our CloudPlatform environment (based on the Xen hypervisor), my download and upload are limited to 355 KB/s maximum and I get many hiccups during a file copy. I have not been able to solve this yet, but as soon as I switch to a VM on Hyper-V or to a physical server, the issue is gone. So this seems to be a Xen-related issue.

    Situation 2: We moved the DirectAccess server to Hyper-V and another one to a physical box (just for testing purposes). The 355 KB/s limitation is gone now, BUT I see that the upload to the DA server is always much faster than the download. This seems to be the same issue you guys are talking about. I will set up an SSTP server on the same box and compare download rates, just to confirm that it's a DA issue.
    This also happens in a test environment in Azure...
    Maybe we can work together to sort this out? It might be worth contacting Richard M. Hicks (https://directaccess.richardhicks.com/) and asking about his experiences. Maybe he has heard of this before? Another option would be to open a ticket with Microsoft, but unfortunately my last experience with MS support was not as good as I expected, as the solution was always along the lines of "please re-install everything..." - but it's worth a try. Did you guys already open a case with Microsoft?



    EDIT// I just tested a file copy over SSTP (running on the DA server) and I get full speed on up- and download using SSTP. So it is definitely a DirectAccess issue/effect.
    Thursday, November 03, 2016 10:07 AM
  • Guys,

    Good to hear that people are seeing the same results as me. I've already made some informal contact with Richard Hicks, who put the performance down to the overhead of Windows 7 double encryption, but I'm not convinced by that. Though the results of a Wireshark capture I did last night do agree with his theory about message fragmentation being part of the issue.

    Testing over our internal Wi-Fi shows that 50% of the packets are around 1280 bytes (which is the MTU of the IP-HTTPS adapter), but when going over an ADSL line we have for testing over the web, 55% of the packets fall into the 40-79 byte group. Pretty definitive proof that DA is not scaling up the window size for some reason.
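
    If you want to reproduce that distribution from your own capture, tshark can print the same packet-length buckets (a sketch; the capture file name is an example):

        # Packet-length distribution of a capture (the 40-79 byte bucket shows up here)
        tshark -r da-adsl.pcapng -q -z plen,tree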

    Test over Wi-Fi: (screenshot not preserved)

    Testing over ADSL: (screenshot not preserved)
    I'll email him a link to this thread and see if he wants to join in.

    Thursday, November 03, 2016 12:05 PM
  • Hi,

    just to add: I am testing with Windows 8.1 and Windows 10 clients only and see the same results, so I think this is not related to double encryption, as my clients use NULL encryption. A few minutes ago I configured an "internal" test environment. If I use the SSTP VPN functionality, I get about 30-40 MB/s on upload and download. If I use DirectAccess, I get the same for the upload, but only 8-12 MB/s down. Latency is 1-2 ms.

    PS: I already contacted Richard on Facebook. It would be great to have him join here.



    Thursday, November 03, 2016 12:17 PM
  • We're using a combination of Win 7 and 10 clients, but our setup includes a Citrix NetScaler which is offloading the SSL encryption on the way into the DA boxes; the benefit seen in the load on the DA servers is somewhat impressive.
    Thursday, November 03, 2016 6:49 PM
  • Has anyone opened a ticket with Microsoft?

    It is amazing that, with the number of people who have exactly the same problem, Microsoft has not detected and corrected it.

    It's not a small performance problem. Uploading is 10 times faster than downloading. Come on, MS!
    Thursday, November 03, 2016 7:03 PM
  • So let me just try to explain all the tests that I made as well as the environments we are using.

    I don't use Windows 7, only 8.1 & 10 as DirectAccess clients. For DA servers I tried 2012 R2 & 2016, all of this both in our internal network and in Azure. Results are pretty much identical, with some small variances related to NULL or double encryption due to SSTP on some of those servers.

    One example, as I explained above, is a 2012 R2 DA server with SSTP & an 8.1 client. On upload, my home bandwidth of 50 Mbit/s is easily filled; on download, I'm stuck between 15-16 Mbit/s. Latency to the DA server is around 35 ms.

    If I turn off DA and connect to the same server via SSTP, both upload and download work just fine, filling 50 Mbit/s in both directions. Given that SSTP doesn't use IPsec (as DA does), I can only conclude it is IPsec/DA related.

    Another example is a 2016 DA server, no SSTP (therefore NULL encryption) & a Windows 10 client, all in Azure. The two were placed in different Azure sites with a latency of some 90 ms. Upload was >100 Mbit/s (starting slow and getting higher throughput as the window increased), while download was again limited to 5-6 Mbit/s.

    Now, the interesting part is that if you calculate the window size for those limited download speeds, it pretty much comes down to exactly 64 KB, i.e. Bandwidth x Latency = 64 KB:

    In example 1: 15 Mbit/s x 35 ms = very close to 64 KB

    In example 2: 6 Mbit/s x 90 ms = also very close to 64 KB

    This led me to check window scaling in Wireshark, as it looked like it wasn't working. I can definitely confirm that window scaling works on the upload; the window goes to 8 MB+. However on the download, even though the IPv4 window goes above 64 KB (I saw 128 KB & 256 KB), it seems like it is never filled, and that seems to be the reason why downloads are slow. Practically, the sender (server) is simply waiting for packet ACKs before sending more.
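
    To check the same thing in your own captures from the command line, something like this should work (a sketch using tshark; the file name is an example, and tcp.window_size_scalefactor == -2 is Wireshark's marker for "no window scaling in use"):

        # Show SYN packets and the window-scale shift they advertise (if any)
        tshark -r da-download.pcapng -Y 'tcp.flags.syn == 1' -T fields -e ip.src -e tcp.options.wscale.shift

        # List segments on connections where no window scaling is in effect
        tshark -r da-download.pcapng -Y 'tcp.window_size_scalefactor == -2'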

    Now, on double encryption vs. NULL: apart from the higher CPU utilization, I really couldn't see any upload/download differences, which I guess is logical, but I have to admit I didn't spend too much time on precise measurements.

    Regarding the MTU size: IP-HTTPS will use 1280, so anything sent at 1500 will be fragmented into 2 packets - also very visible in Wireshark during download. A file from a file server is sent to the DA server at 1500, then fragmented into 2 packets from the DA server to the client. However, if you change the MTU on the file server to, say, 1024, fragmentation will not occur (confirmed with Wireshark; MTU < IP-HTTPS MTU), but the download performance will be no different.

    Somehow I am convinced it is all about window scaling not working properly in the download direction, and not at the IPv4 level but rather at the IP-HTTPS layer.

    My environments are all IP-HTTPS only.



    Friday, November 04, 2016 1:51 AM

  • The 355 KB/s limitation is gone now, BUT I see that the upload to the DA server is always much faster than the download. This seems to be the same issue you guys are talking about. I will set up an SSTP server on the same box and compare download rates, just to confirm that it's a DA issue.
    This also happens in a test environment in Azure...


    I realized that the 355 KB/s is some sort of display bug in the file-copy window. If you open Task Manager and look at the transfer speed, you will get the real value. 355 KB/s is roughly 3 Mbit/s, so if you are getting less than that, the dialog just gets stuck at 355 KB/s and updates occasionally. You don't see this behavior if your transfer speed is above 4-5 Mbit/s!
    Friday, November 04, 2016 2:04 AM
  • @Sergej, for me it is definitely not a display bug - it really is not faster. It takes ages to copy even a small file, but as I said before, this happens only if I run the DA server as a VM on the Xen hypervisor. As soon as I change to Hyper-V, ESX or a physical box, it is solved.

    Anyway... I gave it another try and changed from IP-HTTPS to Teredo.

    Results:

    • Downloads are much faster than before (3x-5x faster than IP-HTTPS)
    • Uploads are now slow (max. 300-400 KB/s) and really unstable (transfers stop in between, the connection gets lost, etc.)

    I need to do some more Teredo testing, as this might be the solution for some of our clients, but the slow upload rates I'm getting right now are confusing. I need to investigate a bit more.

    Has anyone of you tested Teredo as an alternative? (See the sketch below for forcing Teredo on a test client.)
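
    For anyone repeating this test: forcing Teredo on a client is typically done like this (a minimal sketch; it assumes UDP 3544 is open end to end, and "enterpriseclient" skips the managed-network check that normally disables Teredo on domain networks):

        # Force Teredo on a test client (elevated prompt)
        netsh interface teredo set state type=enterpriseclient

        # Should report "qualified" once the Teredo tunnel is up
        Get-NetTeredoState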


    Wednesday, November 09, 2016 10:27 AM
  • @Sergej, for me it is definitely not a display bug - it really is not faster. It takes ages to copy even a small file, but as I said before, this happens only if I run the DA server as a VM on the Xen hypervisor. As soon as I change to Hyper-V, ESX or a physical box, it is solved.

    Anyway... I gave it another try and changed from IP-HTTPS to Teredo.

    Results:

    • Downloads are much faster than before (3x-5x faster than IP-HTTPS)
    • Uploads are now slow (max. 300-400 KB/s) and really unstable (transfers stop in between, the connection gets lost, etc.)

    I need to do some more Teredo testing, as this might be the solution for some of our clients, but the slow upload rates I'm getting right now are confusing. I need to investigate a bit more.

    Has anyone of you tested Teredo as an alternative?


    All my tests are done with Teredo.

    Same thing: slow download, fast upload. But I don't have your problems.

    Wednesday, November 09, 2016 1:44 PM
  • Hey guys, please keep on digging... a lot of people seem to have this issue.

    Today I switched our DirectAccess (IP-HTTPS) and our “old” TMG to a new 100/100 Mbit connection.

    Upload (from the company to the client) is about 3 times faster with TMG!

    Both servers are running on the same Hyper-V host, with the same internet connection.


    Marc

    Wednesday, November 09, 2016 6:20 PM
  • Stefan, I am curious what kind of performance you are seeing with Teredo - mainly what the latency is between the two systems (a simple ping from the client to the DA server is good enough) and the throughput you are getting.

    At the beginning of my DA journey I tested Teredo, compared it with IP-HTTPS, and got practically the same download/upload numbers (Teredo was somewhat better, but insignificantly). Given that Teredo connections do not always work (only when the UDP ports are open from the client all the way to the server), I concluded that IP-HTTPS is the way to go.

    Thanks.


    Friday, November 11, 2016 2:05 AM
  • @Benoit & @Stefan, I would just like to confirm we are facing exactly the same issue...

    Can you provide the latency and throughput you are getting? In my case, in production as well as in the lab when playing with different latencies, I practically always get a download throughput equal to a 64 KB TCP window. Specifically:

    Latency * Throughput = 64 KB

    therefore my Throughput = 64 KB / Latency  (for example 64 KB / 80 ms = 800 KB/s * 8 = 6.4 Mbit/s)

    ...and there is no way I can get more than that, no matter what. Naturally, with half the latency (40 ms) I get roughly double the throughput (13 Mbit/s). Upload performance is easily 5x-10x higher.

    Thanks



    Friday, November 11, 2016 2:23 AM
  • All, we have continued investigating this issue on our end, playing around with MTU sizes and TCP window size heuristics, but nothing has really shown any repeatable improvement!

    @Sergej, can I ask where you are calculating your numbers from, i.e. how are you determining your 64 KB (are you using a tool to send just that amount), and what are you using to determine your latency?


    Friday, November 11, 2016 9:31 AM
  • @FluxboxUK, the 64 KB gave me a hint that TCP window scaling may not be working properly, and this seems to be exactly what is happening on the download side. Upload is OK, which I can confirm via Wireshark.

    Why 64 KB? Only because a TCP window without window scaling can only grow to the 64 KB limit (google "bandwidth-delay product"). Without TCP window scaling, "long and fat" links (high latency & a lot of bandwidth) remain underutilized due to this TCP limitation.

    In my case, when I realized I was getting much less throughput than I should, and after tests with different latencies, it occurred to me that scaling might not be working properly. When I multiplied bandwidth and latency, I realized that in all cases I was limited by the TCP maximum window size of 64 KB, which made me conclude that broken window scaling is at play.

    I am still at a loss as to why this happens on the download side only. I can also confirm it is IPsec related, since on the same setup, with the same client and the same server, everything works just fine using SSTP VPN.

    To see your latency, make sure you are connected via DA, then simply ping a server close to your DA server, for example a file server in the same LAN as the DA server.
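
    Measuring it from PowerShell, for example (the host name is a placeholder):

        # Average RTT through the DA tunnel to an internal host
        Test-Connection -ComputerName fileserver01.corp.example.com -Count 10 |
            Measure-Object -Property ResponseTime -Average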


    Saturday, November 12, 2016 12:50 AM
  • Hi! Did anyone solve this problem, or is there any news? We have the same issue with Windows 10 14393.693. We use two physical, load-balanced Windows Server 2012 R2 DirectAccess servers and IP-HTTPS force tunneling. Windows 7 works like a charm; Windows 10, however, does not. The connection is fine and is established in a few seconds, but download rates are very, very poor. Upload works fine.

    Any help is appreciated. This is Windows 10 rollout showstopper number one!

    Dietmar

     
    • Edited by -Dietmar- Tuesday, January 31, 2017 10:19 AM
    Tuesday, January 31, 2017 10:18 AM
  • Hi, I have the same issue. Did you resolve the problem? Thanks
    Wednesday, February 22, 2017 2:24 PM
  • Hi all,

    Alas, no resolution to this has been discovered, and based on some information we received last week, it seems unlikely to be resolved. Microsoft is throwing most of its development resources into other areas of remote access, and DA is being left as-is for the foreseeable future.

    We do plan to do some experiments within our organisation with other IPv6 transition technologies to see if that helps at all. But for the most part, IP-HTTPS seems to be as good as it's going to get, and it does suffer performance-wise on (presumably) high-latency links.


    Wednesday, February 22, 2017 6:59 PM
  • Hi, you (Dietmar) said "two physical load balanced Windows Server 2012 R2" - does that mean 2 physical DA servers with Windows NLB enabled, or 2 DA servers behind 2 external physical load balancers? Thanks
    Friday, February 24, 2017 8:55 AM
  • Hi! We have two physical (really big) HP ProLiant DA servers with Windows NLB. Each server has 4 network adapters, teamed in pairs with NIC teaming: one NIC team internal, one NIC team external. NIC teaming is switch-dependent (IEEE 802.1ax, LACP). Receive Side Scaling is enabled.
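
    For anyone wanting to compare setups, a quick way to dump the teaming and RSS state on the server side (a sketch, elevated PowerShell):

        # NIC teams and their members
        Get-NetLbfoTeam
        Get-NetLbfoTeamMember

        # Per-adapter Receive Side Scaling state
        Get-NetAdapterRss | Select-Object Name, Enabled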

    DA was (is) a big project in our company, with 2000 DA-enabled notebooks. Until now it has worked like a charm. Since Windows 10, the download lacks performance. Completely regardless of which network is used (SIM card, WLAN, cable, ...), the download is stuck at 3.5 Mbit. Not the upload.

    If I stop the iphlpsvc, the download performance reaches more than 30 Mbit.

    We really hope a solution is available soon!


    • Edited by -Dietmar- Friday, February 24, 2017 10:21 AM
    Friday, February 24, 2017 10:18 AM
  • I can report the same issue.

    Server 2012 R2
    Clients: 8.1 and 10
    Protocol: IP-HTTPS

    Capped at 355 KB/s, plus instability issues when trying to reach network resources while transferring data.

    We are running a pilot to see if DA is something for us (400 mobile clients), but this is a showstopper if it does not get resolved.

    Saturday, February 25, 2017 3:22 PM
  • Has anyone made any progress on this matter?
    Thursday, March 16, 2017 8:51 AM
  • Hello,

    we have opened a case with Microsoft, and I can confirm what @FluxboxUK said. Sad but true: Microsoft is aware of this problem, but it is as it is. Microsoft will not touch DirectAccess anymore.

    There will be something like auto-connect VPN in Windows 10. So we are not investing more resources and money in troubleshooting DirectAccess, because this terrible bug is as good as it gets.

    If anyone at Microsoft is listening: please don't do this! DirectAccess is great, and every user who works with it loves it! It's one of the greatest features you ever designed! Many companies out there use it with many, many, many DirectAccess clients. We invested a lot of resources and money in this project.

    Dietmar 


    P.S.: I think you can mark this as the answer, because there will be no fix from Microsoft for this (only a completely different solution, like VPN).
    • Edited by -Dietmar- Tuesday, April 11, 2017 11:45 AM
    Tuesday, April 11, 2017 11:42 AM
  • Hi Dietmar,

    Do you have any link for auto-connect VPN on Windows 10? Any release info from Microsoft?

    Thanks.


    Regards, Jim

    Thursday, June 29, 2017 9:59 AM
  • Thanks, Dietmar. So does this mean DirectAccess will be impacted if we deploy it in our environment? How does an automatic VPN profile affect a DirectAccess deployment?

    Regards.


    Jim

    Thursday, June 29, 2017 10:56 AM
  • We haven't tried VPN yet, because we still use DirectAccess without VPN and there are no plans to implement VPN at the moment, so I really have no experience with coexistence. Time will tell whether MS will fix this in response to the many user voices. I hope so...
    Thursday, June 29, 2017 12:16 PM
  • "We changed the DirectAccess Server from a Xen Host to a Hyper-V Host and bandwith increased immediately."

    This also worked for me...

    Server to Client bandwidth increased from 355kbps to 10-20Mbit depending on base connection.

    We moved from xenserver 7.2 to physical hardware for the direct-access server.

    Was this bug submitted to Citrix and/or Microsoft? Does anyone know if there is a patch available?


    • Edited by Menne386 Thursday, February 15, 2018 1:40 PM
    Thursday, February 15, 2018 1:39 PM
  • Hi

    Not sure if this is of any help, but...

    I too am using DA, but in a semi-lab environment, and noticed the 355 cap whilst using a Surface Pro 2 and a few other manufacturers' laptops.  My ISP assigns IP addresses dynamically, so I use a DynDNS-like provider. The connection is fibre to the premises, 56 Mbit/s in and up to 9-12 Mbit/s outbound.  I have a Sophos XG firewall on a different network segment from the main router; my DA server's second NIC goes to the primary router.

    I implemented a basic certification authority to issue certificates to users and computers within the domain and then updated the certificate for the NLS server and other internal resources.  I was able to get a 'blistering' 1.2 MB/s download, which would be roughly the capacity of my outbound bandwidth.

    Has anyone had any similar findings?

    Many thanks.

    Chris 


    Cheers CD

    Sunday, April 22, 2018 9:53 PM