S2D and Disk Write-Caching Policy

  • Question

  • Testing S2D with Windows Server 2016 in a lab

    Does Storage Spaces Direct disable the disk write-caching policy (as set in Device Manager)? I have 4x SSDs in each of 4 identical nodes, and every time I enable Storage Spaces Direct, it disables the disk cache on all the SSDs. There is no HBA; the drives are connected directly to the SATA ports of a Supermicro board. The SSDs have power-loss protection (PLP).

    Is this normal? Is there a way to force-enable it? I change it in Device Manager, but the setting doesn't survive a reboot.
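
    In case it helps, the per-disk cache state can at least be inspected from PowerShell after each reboot (a minimal sketch; Get-StorageAdvancedProperty ships with Windows Server 2016, though what it reports depends on the drive and driver):

    PS C:\> # Show what Windows thinks about each disk's device cache and power protection
    PS C:\> Get-PhysicalDisk | ForEach-Object { Get-StorageAdvancedProperty -PhysicalDisk $_ }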

    Friday, October 20, 2017 1:02 AM

All replies

  • Hi,

    please tell us a bit more about your scenario. Which drive types do you have? Only SSDs, or maybe NVMe drives for caching in front of the SSDs?

    I have NVMe drives for caching, and in that case the disk cache on the SSDs gets disabled on my machines too. Maybe that's because S2D wants to make sure all data is actually written to the disks. My guess is the cache is only left enabled on dedicated caching devices (NVMe for SSDs and HDDs, SSDs for HDDs); see the sketch below to check which drives were claimed as cache.
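
    A small way to verify (a sketch; in S2D, drives serving as cache are marked with Usage = Journal, while capacity drives show Auto-Select):

    PS C:\> # Which drives did S2D claim as cache (journal) devices?
    PS C:\> Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Usage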

    Best regards,

    Andreas

    Sunday, October 22, 2017 8:44 PM
  • Hello,

    Just SATA SSDs (Intel S3700 400GB), 4 per node. I have the SSB (Software Storage Bus) cache disabled for S2D, as the 4 drives are identical.
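
    (For reference, disabling it at enable time looks roughly like this; a sketch, assuming a fresh Enable-ClusterStorageSpacesDirect run:)

    PS C:\> # Enable S2D without claiming any drives for the built-in cache
    PS C:\> Enable-ClusterStorageSpacesDirect -CacheState Disabled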

    I saw a comment here: https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/ from Gerry, who has the same issue I have, except that I have 4 SSDs and no NVMe. The reply from Dan (Microsoft) was just that it's "automatic".

    I actually gave up. With this issue, and with S2D missing from Windows Server, version 1709, I decided to go a different route for my main lab HCI.


    Monday, October 23, 2017 4:12 AM
  • Hi,

    don't give up :-) I guess having only SSDs in a three-way mirror configuration will perform pretty well. I would recommend clearing the whole S2D configuration (please have a look at the PowerShell scripts out there for cleaning the disks; a condensed sketch follows below), restarting with the default parameters (don't disable anything manually), and running some performance tests. Then we can tell a bit better whether this configuration works or not.
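
    A minimal cleanup sketch, condensed from the cleanup scripts in Microsoft's S2D deployment guidance (run on each node; it wipes every non-boot, non-system disk, so double-check before running):

    PS C:\> # Refresh the provider cache, drop any old pool, and wipe the data disks
    PS C:\> Update-StorageProviderCache -DiscoveryLevel Full
    PS C:\> Get-StoragePool | Where-Object IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false
    PS C:\> Get-PhysicalDisk | Reset-PhysicalDisk
    PS C:\> Get-Disk | Where-Object { $_.Number -ne $null -and -not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne 'RAW' } | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false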

    Best regards,

    Andreas

    Monday, October 23, 2017 6:30 AM
  • Hello,

    Yes, I'm just using mirror, and I've rebuilt a few times using a script (found online somewhere) to clear the disks; I even reinstalled Windows a few times, with both Desktop Experience and Core.

    Performance is consistent with the minimum random-write IOPS (spanned) that the manufacturer quotes for the drives. I'm really just wondering if there's a way to override whatever the "automatic" default is that disables the device cache. I see a huge improvement with the device cache on (almost 60% more IOPS, obviously because it can use the device cache) when testing with VMFleet for 30 minutes.
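
    For anyone wanting to reproduce the comparison without a full VMFleet run, a rough DiskSpd sketch (the file path and size are assumptions; -Su disables Windows software caching so the drive's own cache is what's being exercised):

    PS C:\> # 4K random writes for 30 minutes, 8 threads x 32 outstanding I/Os, with latency stats
    PS C:\> .\diskspd.exe -c10G -d1800 -b4K -r -w100 -t8 -o32 -Su -L C:\ClusterStorage\Volume1\test.dat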

    To try and answer your question, I tried a few things:

    1. Tested each drive with and without the device cache on = disk performance is as expected
    2. Used Storage Spaces (not Direct) on a single node = device cache NOT disabled after reboot
    3. Used an HBA and connected the drives to it = device cache disabled after reboot
    4. Set the storage pool to power-protected (see the sketch after this list) = device cache disabled after reboot
    5. Used different SSDs (Samsung SV843 960GB with PLP) = device cache disabled after reboot
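
    (For item 4, the power-protected setting looks roughly like this; a sketch, with the pool name assumed to match the "S2D on <ClusterName>" default:)

    PS C:\> # Tell Storage Spaces the pool's drives have power-loss protection
    PS C:\> Get-StoragePool -FriendlyName "S2D*" | Set-StoragePool -IsPowerProtected $true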

    I understand this is a whitebox build, so things might not work as expected (or maybe it's by design). With that said, I'll just do nested Hyper-V, as the other option was able to fully use the capabilities of the hardware (also with RDMA support in the recent release). I started this mainly to learn S2D, and after countless rebuilds and tests, I think I'm OK with using nested Hyper-V for configuration work and so on.

    Monday, October 23, 2017 4:58 PM
  • Hi,

    I am also trying to involve someone familiar with this topic to look further into this issue for more ideas. It may take some time to get a reply.

    Thanks for your support and understanding.

    Best Regards,
    Mary Dong


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Tuesday, October 24, 2017 2:00 AM
    Moderator
  • Hi,
    to be honest, I would guess this is by design. S2D tries to make sure that the data is written to the disk, no matter whether PLP is available or not. Like "do not trust this PLP thing". On my caching devices, the disk cache is enabled by default.

    Best regards,
    Andreas

    Tuesday, October 24, 2017 4:39 AM
  • Hi Sniper12,

     The cache is configured automatically when Storage Spaces Direct is enabled. In most cases, no manual management whatsoever is required. How the cache works depends on the types of drives present.

    In deployments with multiple types of drives, Storage Spaces Direct automatically uses all drives of the "fastest" type for caching. The remaining drives are used for capacity.

    For example, if you have NVMe drives and SSDs, the NVMe drives will cache for the SSDs. If you have SSDs and HDDs, the SSDs will cache for the HDDs.

    When all drives are of the same type, no cache is configured automatically. In all-NVMe or all-SSD deployments, especially at very small scale, having no drives "spent" on cache can improve storage efficiency meaningfully.

    For manual configuration, you can specify the cache drive model:
    PS C:\> Get-PhysicalDisk | Group Model -NoElement
    Count Name
    ----- ----
        8 FABRIKAM NVME-1710
       16 CONTOSO NVME-1520
    PS C:\> Enable-ClusterS2D -CacheDeviceModel "FABRIKAM NVME-1710"

    https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/understand-the-cache

    Best Regards,

    Mary


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Tuesday, October 24, 2017 7:41 AM
    Moderator
  • Hi Mary,

    Yes, all my drives are the same type of SATA SSD (4 x Intel S3700 400GB per node). I am now using 4 x Samsung DC SV843 (with PLP) SSDs per node and still have the same issue. Just for the heck of it, I also added 1 x Intel S3700 per node as SSB cache, and the DRAM buffer of each SSD is still disabled after a reboot.

    Since there are many caches involved, I want to clarify that the cache I'm talking about is the DRAM buffer/cache on each SSD.

    I'm wondering if there's a way to force-enable it after enabling Storage Spaces Direct.

    As Andreas mentioned, it's most likely by design, and quite possibly due to non-tested/non-validated hardware.

    Tuesday, October 24, 2017 6:30 PM
  • Hi Sniper12,

    Thanks for your feedback.

    >per node as SSB cache, and the DRAM buffer of each SSD is still disabled after a reboot.

    Based on my knowledge, this can't be changed through the S2D-related commands. It's more likely a configuration on the hardware side; maybe you could also consult the hardware vendor for more suggestions.

    Best Regards,

    Mary


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Wednesday, October 25, 2017 5:08 AM
    Moderator
  • Hi,

    in my case I can change this in the disks' driver settings, but with every reboot the setting gets overwritten. For me that's not a big problem, because I have NVMe drives for caching, where the disk cache stays active. But for Sniper12, or in any scenario where you don't have dedicated caching disks, this essentially reduces performance. Maybe Windows is not able to detect whether a disk has PLP, and therefore disables the cache to make sure the data is consistently written to disk in case of a power failure?! A quick check of that theory is sketched below.
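
    (A one-liner sketch; Get-StorageAdvancedProperty reports whether Windows believes a disk is power protected, though the result depends on what the drive and driver expose:)

    PS C:\> # Does Windows detect PLP on these disks?
    PS C:\> Get-PhysicalDisk | ForEach-Object { Get-StorageAdvancedProperty -PhysicalDisk $_ } | Select-Object FriendlyName, IsPowerProtected, IsDeviceCacheEnabled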

    But overall, I would always recommend using dedicated caching disks. I know that in lab scenarios this is a problem because of the cost. I had the same problem trying S2D in VMs with the Technical Preview. In my opinion, those tryouts are only useful for configuration testing, not for performance.

    Best regards,
    Andreas

    Wednesday, October 25, 2017 5:50 AM