VMs running on Compute nodes and storage on Storage Spaces Direct

    Question

  • Hi,

    Not sure if this is the correct thread, as my question crosses two areas. As I understand it, we could build a Storage Spaces Direct cluster and present a Scale-Out File Server on top of it for VHD storage. From what I can gather, it's recommended to have RDMA-capable 10Gbps network adapters between the storage nodes so storage can synchronize quickly between them. What I'm not too sure about is the network speed between the compute nodes, which run the VMs' memory and CPU, and the storage cluster, which hosts the VHDs. Does this link also need to be 10Gbps, or could I get away with a few 1Gbps adapters in a team? What kind of traffic goes between the compute and storage clusters? I'm assuming it would need to be 10Gbps, because it's the VM which receives the request for storage IO, and my speed to the disk system would be limited by that network link. (If the storage could operate at, say, 10Gbps, but I link compute to storage using 1Gbps, am I only ever going to get 1Gbps data transfer, or is the request sent to the storage system and the transfer then happens at 10Gbps?)

    Many thanks



    • Edited by Milkientia Thursday, March 02, 2017 10:30 PM
    Wednesday, March 01, 2017 7:50 PM

Answers

  • Hi Steve

    The network between the servers that make up your Scale-Out File Server should be 10Gbps or better. This network is used for things such as writing data out in multiple copies to ensure resiliency against a node being down, data reconstruction when a drive fails, and data resynchronization when a node has been offline for servicing or similar maintenance.

    The network between the Hyper-V servers and the Scale-Out File Server dictates how fast storage IO will be for the VMs on the Hyper-V servers. Technically you can use a 1Gbps network, but you will likely find that it is too slow, unless storage IO requirements are very modest.
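    As a rough back-of-the-envelope illustration of why (raw line rate only, ignoring protocol overhead): a 1Gbps link tops out around 125 MB/s, while 10Gbps gives around 1.25 GB/s, so the slower compute-to-storage link becomes the ceiling for VM disk throughput no matter how fast the S2D pool itself is. A quick sketch of the arithmetic:

        # Theoretical best-case throughput per link speed, ignoring protocol overhead.
        foreach ($gbps in 1, 10) {
            $bytesPerSecond = $gbps * 1e9 / 8            # bits/s -> bytes/s
            $mibPerSecond   = $bytesPerSecond / 1MB      # bytes/s -> MiB/s
            "{0,2} Gbps link: about {1:N0} MiB/s best case" -f $gbps, $mibPerSecond
        }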

    You can also run the VMs on the same nodes that have the storage physically attached. The nodes should have sufficient CPU and memory resources to run both storage and the VMs and should be equipped with 10Gbps or better networking. RDMA is optional. Many of our partners offer ready-made server configurations that have been tested extensively for this configuration.
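    If you want to verify whether the adapters in your nodes are RDMA-capable before deciding, a quick check on each node looks something like this (output depends on your hardware):

        # Show per-adapter RDMA state on this node
        Get-NetAdapterRdma

        # Show which interfaces SMB considers RDMA capable
        Get-SmbClientNetworkInterface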

    Cheers

    ClausJor [MSFT]

    • Marked as answer by Milkientia Thursday, March 02, 2017 7:23 PM
    Thursday, March 02, 2017 3:41 PM

All replies

  • Hi Steve,

    >>does this link need to be also 10Gbps or could i get away with a few 1Gbps in a team?

    As far as I know, 10 Gbps is not required.

    >>what is the kind of traffic that goes between compute and storage cluster

    If you mean placing the VMs' virtual disks on the storage cluster, the traffic between the compute and storage nodes is the VMs' disk IO, carried over SMB; from the VM's point of view it is the same as a physical disk in a traditional physical computer.

    Anyway, a higher speed network would give you better performance.
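    To make that concrete, here is a minimal sketch of what generates that traffic: on a Hyper-V compute node you point the VM's files at the Scale-Out File Server share over SMB, and the VM's disk IO then travels across that network (the \\SOFS\VMs share name, memory and disk sizes below are made up for illustration):

        # Create a VM whose configuration and virtual disk live on the SOFS share.
        # Share path, memory and disk size are illustrative only.
        New-VM -Name "VM01" `
               -Generation 2 `
               -MemoryStartupBytes 4GB `
               -Path "\\SOFS\VMs" `
               -NewVHDPath "\\SOFS\VMs\VM01\VM01.vhdx" `
               -NewVHDSizeBytes 100GB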

    Best Regards,

    Leo


    Please remember to mark the replies as answers if they help.
    If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Thursday, March 02, 2017 3:06 AM
    Moderator
  • >>You can also run the VMs on the same nodes that have the storage physically attached.

    Hi Claus,

    the main benefit of S2D is the use of local storage to build the cluster, OK. Is it also supported to use external JBODs? In the case of SOFS, no cross-cabling is necessary. Is that right?

    Regards,
    Marcel


    https://www.windowspro.de/marcel-kueppers

    I write here only in private interest

    Disclaimer: This posting is provided AS IS with no warranties or guarantees, and confers no rights.

    Thursday, March 02, 2017 4:13 PM
  • Thanks for your reply, Claus, and to the others who have answered; yours was the clearest.

    Marcel, good question. I was under the impression that with S2D you could extend your storage nodes with JBODs attached directly to each storage node, without the need for cross-cabling between nodes (unlike 2012 R2, which required it). It sounds, though, like the more storage you attach to each node via JBODs, the faster the network between the storage nodes will need to be for the tasks Claus described above.

    The way I understand it, you start with individual storage nodes (let's say four) and fill the internal drive bays. If that's not enough, you connect a JBOD via a SAS expander card to each node (no need to connect that JBOD to the other nodes; it belongs to just one node, so you'd buy four of them). Then you create the S2D cluster and layer a Cluster Shared Volume on top of it. If you leave it at that and run VMs on top, this is hyper-converged; but if you want storage separate from compute, you layer a SOFS on top of the CSV and run your VMs on the compute nodes.
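    For the "storage separate from compute" variant, something like this is how I picture the build going at the PowerShell level; this is only a rough sketch with made-up node, cluster, volume and share names, so treat it as illustrative rather than a tested procedure:

        # Rough sketch with made-up names -- validate and adjust for a real deployment.
        $nodes = "S2D-N1", "S2D-N2", "S2D-N3", "S2D-N4"

        # Validate the nodes, then build the cluster without shared storage
        Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
        New-Cluster -Name "S2D-CLUS" -Node $nodes -NoStorage

        # Enable Storage Spaces Direct (claims the local drives in each node and its JBOD)
        Enable-ClusterStorageSpacesDirect

        # Carve a Cluster Shared Volume out of the S2D pool
        New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 4TB

        # Layer a Scale-Out File Server on top and share the CSV to the compute nodes
        Add-ClusterScaleOutFileServerRole -Name "SOFS"
        New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory
        New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" -FullAccess "DOMAIN\HV01$", "DOMAIN\HV02$"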

    Please correct me, anyone, if my understanding is wrong.

    One more thing, Claus - is RDMA required at any point, whether I use hyper-converged or separate storage and compute? I thought I'd read somewhere in the Microsoft guides that RDMA was required for S2D.

    thanks




    • Edited by Milkientia Thursday, March 02, 2017 10:31 PM
    Thursday, March 02, 2017 7:23 PM