Adding S2D nodes or additional S2D cluster

    Question

  • Hi

    We are scaling up our Hyper-V S2D park. We already have two S2D HCI clusters, each with 6 nodes. We now have 6 additional nodes ready to be brought into production.

    So the question pops up. Add these 6 nodes to one of the two existing clusters? Or create a third cluster with 6 nodes?

    S2D supports up to 16 nodes, no issues there, although I find it riskier to have larger clusters. If there is ever an issue at the cluster level, you lose more workloads. Basically the 'do not put all your eggs in one basket' theory.
    So what is the sweet spot for the number of nodes? It probably depends on your environment. It's more a gut feeling than solid judgement. Twelve nodes sounds too risky to me, or on the border of risk anyway. Don't know why, ask my gut.

    So let's say I choose a 3rd cluster of 6 nodes. Then I get a 3rd S2D storage pool. And subsequently, this cluster will carry its own RDMA/RoCE traffic for this pool on the same 10 Gbit switches as the other 2 clusters. In short, I have 6 more nodes talking RDMA in the same two storage VLANs as the other 2 clusters.
    Is this a disadvantage?
    Should I separate the storage VLANs for each cluster?

    I'm sure all options will work. On paper. So how do I get to making the right choice?
    Storage sizing isn't a factor. We don't use hybrid volumes with dual-parity, only 3-way mirroring. More nodes in one cluster would make the storage sizing more efficient if I were using dual-parity.

    Any thoughts ?

    Greetz
    RW

    Monday, February 25, 2019 11:02 AM

All replies

  • Hi Richard,

    >>Is this a disadvantage ?

    Mixed traffic might affect performance.

    VLANs can reduce the need to deploy routers on a network to contain broadcast traffic.

    You could use the PowerShell cmdlet Get-StorageHealthReport to monitor the performance of the new S2D cluster.

    https://docs.microsoft.com/en-us/powershell/module/storage/get-storagehealthreport?view=win10-ps
    For example, IOLatencyAverage: I/O latency is defined simply as the time it takes to complete a single I/O operation.
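    A minimal sketch of pulling such a report from the cluster's storage subsystem (assuming Windows Server 2016 or later with the Storage module loaded; the "Cluster*" wildcard is an assumption based on the default "Clustered Windows Storage on ..." subsystem naming):

    ```powershell
    # Locate the clustered storage subsystem and pull one health report sample.
    Get-StorageSubSystem -FriendlyName "Cluster*" |
        Get-StorageHealthReport -Count 1

    # The report records include metrics such as IOLatencyAverage,
    # IOPSTotal and CapacityPhysicalTotal for the pool as a whole.
    ```

    Run this on any node of the S2D cluster; it requires no parameters beyond the subsystem it is piped from.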

    For consulting on such performance issues, I suggest you consider contacting our Advisory Team.

    https://support.microsoft.com/en-us/help/4051701/global-customer-service-phone-numbers
    You could also enable Virtual Machine Queue (VMQ) in the VM network adapter settings to improve performance.
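    A hedged sketch of checking and enabling VMQ from PowerShell on a Hyper-V host (the adapter name "Ethernet 2" and the VM name "VM01" are placeholders, not names from this thread):

    ```powershell
    # List physical NICs and whether VMQ is supported/enabled on each.
    Get-NetAdapterVmq

    # Enable VMQ on a physical adapter (adapter name is a placeholder).
    Enable-NetAdapterVmq -Name "Ethernet 2"

    # Give a VM's network adapter a non-zero VMQ weight so it is
    # eligible to be assigned a hardware queue (VM name is a placeholder).
    Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100
    ```

    Both cmdlet families require elevation, and the VMQ setting only takes effect on adapters bound to a Hyper-V virtual switch.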

    Appreciate your support and understanding.
    Best Regards,
    Frank

    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

    Tuesday, February 26, 2019 8:48 AM
    Moderator
  • >> Add these 6 nodes to one of the two existing clusters? Or create a third cluster with 6 nodes?

    If you aren't worried about storage efficiency (e.g. global dedupe across one big storage pool) - stick with a separate cluster. It's a better idea to have MORE fault domains.

    Cheers,

    Anton Kolomyeytsev [MVP]

    StarWind Software


    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

    Sunday, March 3, 2019 2:03 PM
  • Hi,

    Just checking on the situation with your issue.

    Best Regards,
    Frank


    Wednesday, March 6, 2019 2:30 AM
    Moderator
  • Hi,
    Just checking in to see if the information provided was helpful. Please let us know if you would like further assistance.

    Best Regards,

    Frank


    Thursday, March 7, 2019 9:07 AM
    Moderator