S2D with Storage Replica Log File Location; Multiple Pools with S2D?

  • Question

  • Hello,

    I am in the process of configuring two 3-node clusters that will be replicated using Storage Replica cluster-to-cluster replication. According to the TechNet documentation, S2D is a fully supported configuration for cluster-to-cluster replication.

    I have no problem setting up S2D and getting it running on both sites: Test-Cluster, New-Cluster, Enable-ClusterStorageSpacesDirect, and there you have it. I now have two clusters, each running its own hybrid (SSD+HDD) S2D array. Performance is good on each cluster.
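    For reference, the setup described above boils down to roughly the following cmdlets (node and cluster names here are placeholders, not the poster's actual names):

    ```powershell
    # Validate the hardware for S2D (placeholder node names)
    Test-Cluster -Node Node1,Node2,Node3 `
        -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

    # Create the cluster without claiming any shared storage yet
    New-Cluster -Name SR-ClusterA -Node Node1,Node2,Node3 -NoStorage

    # Enable S2D; this claims all eligible local drives into a single pool
    Enable-ClusterStorageSpacesDirect
    ```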

    Now comes the cluster-to-cluster configuration. I have the data volume that I want to replicate, and a data volume of the same size on the second cluster that is ready to receive the replication. Both volumes are carved out of the S2D arrays.

    Now comes the required log volume, which must be on an SSD or NVMe device. I have set it up using a log volume carved out of the hybrid S2D array as well, but I am looking for a way to make this log volume all-flash.

    1. Is there any way to create a second pool with S2D, so that I could make an all-SSD pool and 3-way mirror some small SSDs for the log?

    2. Can I add a few SSDs to the capacity of the S2D pool while still having SSDs assigned as "journal"? If so, could I pin a virtual disk to those SSDs?

    3. What is the recommended method for creating an all-SSD log volume for Storage Replica when S2D is enabled on both clusters in cluster-to-cluster replication?
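    For what it's worth, question 1 amounts to something like the sketch below. The pool and volume names, sizes, and selection criteria are hypothetical, and whether a second S2D pool is actually supported is exactly what's being asked:

    ```powershell
    # Select the small unpooled SSDs (hypothetical selection criteria)
    $ssds = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq SSD

    # Create a separate all-flash pool from them
    New-StoragePool -StorageSubSystemFriendlyName "*Cluster*" `
        -FriendlyName "LogPool" -PhysicalDisks $ssds

    # Carve a 3-way mirrored log volume out of the new pool
    New-Volume -StoragePoolFriendlyName "LogPool" -FriendlyName "SR-Log" `
        -FileSystem ReFS -Size 20GB `
        -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2
    ```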

    Any input would be appreciated. Thanks!

    Friday, August 4, 2017 10:02 PM


  • You don't need to worry about this with S2D; its real-time caching to your flash devices ensures that Storage Replica's log I/O needs are satisfied.


    Saturday, August 5, 2017 3:39 PM

All replies

  • Hi Matthew Monday,

    I agree with Elden Christensen; in addition, you could also refer to the article when configuring Storage Replica.


    Best Regards,


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Monday, August 7, 2017 2:14 AM
  • Thanks for the quick reply! We had discussed whether that requirement was really necessary for us, since all of our I/Os would be going to the cache device first.

    Thanks, Elden and Mary!


    Monday, August 7, 2017 1:53 PM

    I wonder about the same thing (years later, but now we have Server 2019); has anything changed?

    Apologies in advance for being cryptic or verbose.

    My setup is 2 NVMe + 16 SSD + 24 HDD per node (4 nodes right now).
    Using the one pool for log volumes would probably always mean NVMe writes and SSD reads; no problem there for me.

    However, I have a pair of 960 GB NVMe M.2 drives on a PCIe card in each node (unused right now) that I was intending to use for log volumes:
    1) I'm sure I can create a second pool; is this still not advised (with Server 2019)?
    2) If I create a second pool, am I still limited to 64 volumes max, or is that limit per pool?
       2A) This only concerns me for the future: should I just expect 32 x 64 TB data volumes (2 PB) as my maximum before I need to set up another cluster to get more space?
    3) If I do not use this pair (either reserved or pinned in some fashion within the single pool for the log volumes, or as a second pool), would there be any benefit to simply adding them as journal disks? My only concerns are that they are internal to the nodes, not front-facing hot-swap drives like the rest, and that they are small; on the other hand, they would not upset the drive-type factoring, and they have PLP. Also note that in Perfmon, Cluster Storage Hybrid Disk > Cache Miss Reads/sec flatlines at zero (but I'm currently copying about 400 TB to the cluster, and no one is really reading from it right now).
       3A) Note that my only wiggle room for expansion (without upsetting the drive-type disk-count factoring) is either to add this pair of M.2 NVMe plus another pair of U.2 NVMe and a pair of SAS SSDs, or to toss the M.2 and add a pair of U.2 NVMe and four SAS SSDs. I'm thinking the latter option is best for hot-swap and better ratios (again, if a second pool or a pinned log volume is not the way to go).
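    Option 3 above (dedicating the M.2 pair to the journal) would look roughly like the following. Selecting the disks by bus type and size is an assumption made here for illustration, and whether S2D honors a manual `-Usage Journal` on an already-enabled pool is part of what's being asked:

    ```powershell
    # Identify the internal M.2 NVMe pair (hypothetical: filter by bus type and size)
    $m2 = Get-PhysicalDisk -CanPool $true |
        Where-Object { $_.BusType -eq "NVMe" -and $_.Size -lt 1TB }

    # Add them to the existing S2D pool and mark them as journal (cache) devices
    Add-PhysicalDisk -StoragePoolFriendlyName "S2D*" -PhysicalDisks $m2
    $m2 | Set-PhysicalDisk -Usage Journal
    ```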

    And a couple of things that are unclear to me (after much reading):

    4) Assuming I create the log volumes with "-FileSystem ReFS" only on the owner node of the data volume (as opposed to my data volumes, which are created with "-FileSystem CSVFS_ReFS"), they will convert to CSV when I run New-SRPartnership, yes? So the tricky part for me (since I'll have more than 25 log volumes and can't use drive letters): do I use an access point/mount point under C:\ClusterStorage, or make my own C:\LogVols mount point?

    5) Also (sort of per the OP's point 3), when I create a log volume (assuming I go with the single pool), do I add the pair of M.2 NVMe in each node to the pool without defining them as "-Usage Journal" and use "-Size 14GB" with the New-Volume cmdlet (and perhaps S2D auto-selects them for the log volumes?), or do I skip the M.2 entirely and use "-StorageTierFriendlyNames Performance,Capacity -StorageTierSizes 14307MB,0MB", thus ensuring NVMe writes and SSD reads for the log volumes?
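    As a concrete sketch of questions 4 and 5: cluster names, resource-group names, and volume paths below are placeholders, the tier names assume the default S2D tier definitions, and the 0MB capacity tier simply mirrors the proposal above (whether it is accepted is part of the question):

    ```powershell
    # Question 5, tiered alternative: pin the log volume to the performance (SSD) tier
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Log01" -FileSystem ReFS `
        -StorageTierFriendlyNames Performance,Capacity -StorageTierSizes 14307MB,0MB

    # Question 4: cluster-to-cluster partnership referencing volumes by mount path
    New-SRPartnership -SourceComputerName "SR-ClusterA" -SourceRGName "rg01" `
        -SourceVolumeName "C:\ClusterStorage\Data01" `
        -SourceLogVolumeName "C:\ClusterStorage\Log01" `
        -DestinationComputerName "SR-ClusterB" -DestinationRGName "rg02" `
        -DestinationVolumeName "C:\ClusterStorage\Data01" `
        -DestinationLogVolumeName "C:\ClusterStorage\Log01" `
        -LogSizeInBytes 14GB
    ```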

    Any guidance will be super appreciated (Elden, Mary, or anyone else).

    "All things are possible, but not all things are permissible"

    Wednesday, August 5, 2020 2:50 PM