Is my S2D solution performing?

  • Question

  • Hi All,

    We recently sold our first S2D solution for Hyper-V, and after a lot of research we built the 3-node S2D cluster with the following hardware (per node):

    • DL380Gen10 
    • 2 x Intel Xeon Gold 6146 (3.2 GHz)
    • 512GB RAM
    • 2 x 800 GB SSD on HPE RAID10 (OS)
    • 5 x 6.4TB NVME (P10226-B21)
    • 2 x 2-port 10/25GbE 621/622 network adapters (4 ports connected)
    • Server 2019 Datacenter Core

    Other specs:

    • 2 x Mellanox SN2010 25GbE switches
    • iWARP configured
    • PFC/DCB/QoS configured (just in case)
    • Cluster validation OK

    So our all-NVMe flash S2D cluster should rock, but I'm not confident the numbers are what we should expect. Can somebody with field experience tell me whether I'm ready to go into production?

    We followed the available guidance to get the highest possible IOPS. To test correctly with VMFleet, you need to disable the CSV cache.
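    For completeness, the CSV in-memory read cache can be switched off cluster-wide through a single cluster property (a sketch of how we did it; `BlockCacheSize` is expressed in MB, and this assumes a current Windows Server 2019 build):

    ```powershell
    # Disable the CSV in-memory read cache before benchmarking (value is in MB).
    (Get-Cluster).BlockCacheSize = 0

    # Verify the new value (0 = disabled).
    (Get-Cluster).BlockCacheSize
    ```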

    We tested with .\Start-Sweep.ps1 -b 4 -t 8 -o 16 -w 0/50/100 -d 60
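    For readers unfamiliar with VMFleet, this is how we read those switches (our understanding of the parameters; check the VMFleet documentation for your version):

    ```powershell
    # Annotated copy of the sweep we ran:
    #   -b 4          block size in KiB (4K I/Os)
    #   -t 8          worker threads per VM
    #   -o 16         outstanding (queued) I/Os per thread
    #   -w 0/50/100   write percentage: pure read, 50/50 mixed, pure write
    #   -d 60         duration of each run in seconds
    .\Start-Sweep.ps1 -b 4 -t 8 -o 16 -w 0/50/100 -d 60
    ```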

    So I have the following questions:

    • Are the IOPS (2.2M) low/normal/high for our configuration?
    • What about the 100% write IOPS? Are those low/normal/high?
    • What about latency on writes? Is it normal to go over 30 ms with 100% write operations?
    • What about the CDM (CrystalDiskMark) test, where the numbers on S2D aren't actually higher than on a standalone Hyper-V host with exactly the same hardware?
    • Do we need to enable or disable the CSV cache when going into production? And why does benchmarking without the CSV cache produce better numbers?

    In short: am I on the right track here? :)

    Thank you!

    • Edited by T-ICT BV Thursday, July 18, 2019 2:12 PM
    Thursday, July 18, 2019 12:49 PM

All replies

  • Hi,

    I am trying to involve someone familiar with this topic to look further into this issue, since we have no test environment.

    The CSV read cache is most effective for read-intensive workloads, such as Virtual Desktop Infrastructure (VDI). Conversely, if the workload is extremely write-intensive, the cache may introduce more overhead than value and should be disabled.
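    To check or change the cache, the relevant knob is the cluster-wide `BlockCacheSize` property (a sketch; the value is in MB, and 1024 MB is, as far as we know, the Windows Server 2019 default):

    ```powershell
    # Inspect the current CSV read cache size (in MB; 0 = disabled).
    (Get-Cluster).BlockCacheSize

    # Re-enable a 1 GB cache for a read-heavy production workload.
    (Get-Cluster).BlockCacheSize = 1024
    ```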

    Best Regards,


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact

    Friday, July 19, 2019 7:15 AM
  • Dear readers,

    After extensive testing, troubleshooting and adjusting configurations based on suggestions from the Slack S2D community, I have collected a number of interesting figures.

    In response to the question in my first post, there were comments from the community that the write performance was poor. I did not have any reference material at that time, so we started checking firmware, drivers, configurations, etc.

    In the end I had the opportunity to get identical hardware alongside the S2D platform to perform similar tests with DISKSPD. This allowed me to run the tests against a single disk (exactly the same model as in the S2D nodes). I also ran the DISKSPD tests against Storage Spaces pools on the test server, outside the S2D cluster.
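    For anyone who wants to reproduce this, a DISKSPD invocation roughly equivalent to the VMFleet sweep above could look like this (a sketch; the target path and test-file size are placeholders):

    ```powershell
    # 4K random 100% writes, 8 threads, 16 outstanding I/Os, 60 seconds.
    # -Sh bypasses software and hardware write caching, -L collects latency
    # statistics, -c10G creates a 10 GiB test file at the placeholder path.
    .\diskspd.exe -b4K -t8 -o16 -w100 -r -d60 -Sh -L -c10G D:\io.dat
    ```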

    See the results here: I have placed them in an Excel sheet for a good overview:

    Please note that I am not a specialist in the field of S2D and Storage Spaces. This S2D project is my first, and I have already gained a lot of experience 😊

    What strikes me is that Storage Spaces (even outside of an S2D environment) has an enormous impact on write performance: easily a factor of 5 compared to a single drive.
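    Part of that factor is to be expected from resiliency alone: a mirrored space commits every write to two or three drives, so a back-of-envelope estimate (with purely illustrative numbers, not measurements from our cluster) looks like this:

    ```powershell
    # Hypothetical figures, for illustration only.
    $singleDriveWriteIops = 200000   # assumed raw write IOPS of one NVMe drive
    $mirrorCopies         = 3        # a three-way mirror writes each block 3 times

    # Copy count alone caps the expected write IOPS at roughly:
    $singleDriveWriteIops / $mirrorCopies
    ```

    A penalty well beyond the copy count would point at additional overhead (CPU, network, or configuration) rather than mirroring alone.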

    From this I conclude that the Storage Spaces technology itself is the culprit in this story. Do I simply have to accept that? Or can someone show me different values based on the same test? Only then will I investigate further whether my configuration has anything to do with it. At the moment I think the Storage Spaces technology is somehow degrading performance.

    I also found this Reddit thread:

    I would very much like a statement about this from Microsoft itself, but I would also welcome responses from people who have several years of experience with S2D / Storage Spaces.

    • Dave Kawula
    • Jan-Tore Pederson
    • Darryl van der Peijl
    • Ben Thomas
    • Hopefully somebody from Microsoft who dares to respond 😊

    Thank you all for reading.

    • Edited by T-ICT BV Wednesday, August 14, 2019 9:42 AM
    Wednesday, August 14, 2019 9:41 AM