Big performance issue: CAFS and 10 GbE network

  • Question

  • So I am confused: we get great results from the servers, storage, and network, but when file transactions go through the CAFS share path we take huge performance hits.

    Setup is:

    2 x HP DL380 G9

    2 x LSI 12 Gb SAS HBA in each server

    2 x 10 GbE Intel 540-TX2 in each server

    1 x DataON DNS-2608 JBOD

    12 x HGE 8 TB SAS drives

    4 x HGE 240 GB SAS SSD

    OS: Windows Server 2012 R2

    Business Case: 

    We have the servers' hardware and software fully patched and up to date. We then created a File Server Cluster; all hardware passed validation, with warnings on the network only because we have not separated the cluster and data networks.

    MPIO is set up for the multiple SAS paths between the servers and the JBOD, and the dual 10 GbE NICs are set up as LACP trunks.

    Storage Spaces has been set up, and we created a tiered space through the GUI, with 44 TB configured as a two-way mirror using ReFS.

    The File Server role was then set up from Cluster Manager, and a CAFS General Purpose role was configured with CA shares.

    The 10 GbE network cards are tuned, and a standard UNC-based file transfer between the servers (e.g. \\server1\c$\test to \\Server2\c$\test) runs at 1-1.2 GB/s. However, when we perform the same transfer through the CA share (\\cafs\test), rates drop substantially, to 150-300 MB/s.
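    For reference, the rough way we time these transfers can be sketched in Python. This is a minimal, hypothetical harness, not a rigorous storage benchmark; the commented-out paths are placeholders for the UNC and CA paths above:

```python
import time

def measure_copy_throughput(src, dst, block_size=1024 * 1024):
    """Copy src to dst in fixed-size blocks and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(block_size)
            if not block:
                break
            fout.write(block)
            total += len(block)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

# Placeholder paths: substitute the direct UNC path and the CA share
# path from the setup above to compare the two routes.
# rate = measure_copy_throughput(r"\\server1\c$\test\big.bin",
#                                r"\\cafs\test\big.bin")
```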

    I have tried various configurations, and every time we use the CAFS share path, transfer rates drop to roughly 1 Gb network speeds.

    For evaluation purposes we even created brand-new DC and FS servers, so everything is a fresh, clean configuration.

    Any suggestions are welcome. I am even happy to hire someone with deep experience in Storage Spaces and CAFS volumes.


    Tuesday, August 16, 2016 2:13 PM

All replies

  • Hi rometheis wize,                       

    Thank you for your question.
    I am trying to involve someone familiar with this topic to look further into this issue. There may be some delay; I appreciate your patience.

    Thank you for your understanding and support.

    Best Regards,


    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Thursday, August 18, 2016 4:28 AM
  • Thanks Mary, the project is on hold until we can determine whether there is a configuration issue or a technology issue. In the meantime we found the following... each indicates some performance hit, but a 500-600 MB/s delta is not a minor variance :)

    Article 1: https://blogs.technet.microsoft.com/josebda/2014/08/18/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead/

    5.4. Create a non-CA file share

    If your file server is clustered, you can use SMB Continuously Available file shares that allow you to lose any node of the cluster at any time without impact to the applications. The file clients and file servers will automatically recover through a process we call SMB Transparent Failover.

    However, this requires that every write be written through to the storage (instead of potentially being cached). Most server workloads (like Hyper-V and SQL Server) already have this unbuffered IO behavior, but not file copies. So, CA has the potential of slowing down file copy operations, which are normally done with buffered IOs.

    If you want to trade reliability for performance during file copies, you can create a file share with the Continuous Availability property turned off (it’s on by default on all clustered file shares).

    In that case, if there is a failover during a file copy, you might get an error and the copy might be aborted. But if you don’t have any failovers, the copy will go faster.

    For server workloads like Hyper-V and SQL Server, turning off CA will not make things any faster, but you will lose the ability to transparently failover.

    Note that you can create two shares pointing to the same folder, one without CA for file copy operations only and one with CA for regular server workloads. Having those two shares might have the side effect of confusing your management software and your file server administrators.

    Article 2: http://windowsitpro.com/windows-server-2012/new-ways-enable-high-availability-file-shares

    Keeping file shares available. SMB Transparent Failover consists of several configuration changes and new technologies. One benefit that file servers traditionally offer clients is buffering of data writes to disk. This provides faster acknowledgments to client write requests because the file server caches the write operation in its volatile memory (meaning that if the server loses power, it loses the data), tells the clients that the data is written so that the client can carry on its work, then performs the write in the most optimal way. Certain applications always open handles with this caching disabled, through the use of the FILE_FLAG_WRITE_THROUGH attribute when creating the handle, ensuring that data is always written to the actual disk before receiving acknowledgment and avoiding any volatile cache.

    SMB Transparent Failover sets FILE_FLAG_WRITE_THROUGH as the default for all created handles, eliminating the use of the volatile memory cache. Now, there might be some slight performance implications because the cache is no longer used, but the assurance of data integrity is a good trade for the possibility of a slight performance degradation. (slight understatement :) )
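    To see the cache-bypass effect the article describes in isolation, here is a small Python sketch comparing buffered writes against write-through writes. It uses the POSIX O_SYNC flag as a stand-in for Windows' FILE_FLAG_WRITE_THROUGH, so it is an illustration of the mechanism, not a benchmark of SMB itself, and the file name is arbitrary:

```python
import os
import time

def timed_writes(path, extra_flags, count=200, block=64 * 1024):
    """Write `count` blocks of `block` bytes; return throughput in MB/s."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags)
    data = b"\0" * block
    start = time.perf_counter()
    try:
        for _ in range(count):
            os.write(fd, data)
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    return count * block / elapsed / (1024 * 1024)

# Buffered: writes land in the OS page cache and are acknowledged early.
buffered = timed_writes("bench.tmp", 0)

# Write-through: O_SYNC forces every write to stable storage before it is
# acknowledged -- analogous to FILE_FLAG_WRITE_THROUGH on a CA share.
# (O_SYNC is POSIX; on Windows the equivalent is FILE_FLAG_WRITE_THROUGH
# passed to CreateFile.)
write_through = timed_writes("bench.tmp", os.O_SYNC)
```

    On most systems the write-through run is markedly slower, which mirrors the gap between the CA and non-CA share numbers reported in this thread.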

    Thursday, August 18, 2016 4:35 AM
  • Based on the information above and some other articles, we changed the cluster file share from Continuously Available (CA) to a regular file server cluster share, and the performance targets were achieved on Windows 10 and Server 2012 R2 machines: 1.2 GB/s download and ~900 MB/s upload. The desired configuration is CA shares, however, and when they are enabled we see transfer rates of around 150-280 MB/s, a far cry from the performance otherwise achieved.

    Anyway, that is an update. I am hoping for clarification from Microsoft so we can either tune or change the configuration to include CA shares without such a performance loss, or be told that CA shares simply perform poorly even with the latest hardware, 10 GbE networking, and Storage Spaces.

    Many thanks

    Thursday, August 25, 2016 3:21 AM