Storage Spaces + Hyper-V with multiple 1 GbE NICs for storage?

  • Question

  • Hi guys!

    So I just got my private cloud hardware. I actually placed the order before summer, but due to firmware and certification issues with my desired SuperMicro JBODs, delivery was seriously delayed. So much so that I've completely forgotten my networking ideas. I need help/verification, or at least a URL (most described setups are 10 GbE nowadays)... or even a "not gonna work" :-)

    My setup is supposed to be a 3-JBOD, 2-head-node Storage Spaces/SOFS (Scale-Out File Server) cluster providing storage to a 4-node Hyper-V cluster. I didn't have the budget for a 10 GbE setup, but got a great price on a lot of 1 GbE NICs. After allocating management, Hyper-V, etc., I have 3x 1 GbE ports left on all the Hyper-V and storage servers.
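    For context, this is roughly the shape I have in mind for the storage layer, as a PowerShell sketch only (cluster, pool, and share names are placeholders, and I've skipped the format/CSV steps in between):

        # Build the two-node storage cluster from the head nodes (names/IP made up).
        New-Cluster -Name SOFS-CL -Node STOR01, STOR02 -StaticAddress 192.168.1.50

        # Pool the JBOD disks that are eligible for pooling.
        $disks = Get-PhysicalDisk -CanPool $true
        New-StoragePool -FriendlyName Pool1 `
            -StorageSubSystemFriendlyName "Clustered*" -PhysicalDisks $disks

        # Carve a mirrored space out of the pool.
        New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VMStore `
            -ResiliencySettingName Mirror -UseMaximumSize

        # ...format, add to CSV, then expose it through the SOFS role.
        Add-ClusterScaleOutFileServerRole -Name SOFS
        New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs `
            -FullAccess "DOMAIN\HV01$"   # one entry per Hyper-V host computer account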

    I think my original plan was to create three subnets and add one NIC from each server to each of them. And then I guess I imagined some kind of SMB3 magic discovering these paths between Hyper-V and storage, aggregating bandwidth and providing fault tolerance by sprinkling fairy dust. Must have been the heat...
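    Reading up, it seems the fairy dust is at least partly real: SMB Multichannel is supposed to discover all usable interfaces on its own and spread traffic across them. This is how I'd plan to verify it from a Hyper-V host (standard SMB cmdlets in 2012 R2; nothing here is specific to my setup):

        # Confirm multichannel is enabled on the client side.
        Get-SmbClientConfiguration | Select-Object EnableMultiChannel

        # NICs the SMB client considers usable - I'd expect one per storage subnet.
        Get-SmbClientNetworkInterface

        # After pushing some traffic to the SOFS share, check that SMB
        # actually opened connections over all three storage NICs.
        Get-SmbMultichannelConnection

        # And the matching server-side view on a storage node.
        Get-SmbServerNetworkInterface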

    So now I'm "replanning", and I realize I'm going to create a failover cluster at the storage level, which provides a cluster name and IP. I'm thinking the management subnet, where the domain info resides, is the appropriate home for that, but then what about the other three subnets? I don't want to flood my management subnet with storage traffic, but I do want the bandwidth and resilience. Did I make a design error, and how do I make the best of the situation?
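    In case it helps frame an answer, this is how I imagine steering the traffic with cluster network roles (network names are whatever the cluster auto-detects; the Role values for the storage subnets are exactly the part I'm unsure about):

        # Role values: 0 = none, 1 = cluster only, 3 = cluster and client.
        Get-ClusterNetwork | Format-Table Name, Role, Address

        # Cluster name/IP and AD traffic stay on management.
        (Get-ClusterNetwork "Management").Role = 3

        # Storage subnets: I *think* these also need client access (Role 3)
        # so SMB traffic from the Hyper-V hosts can use them, but correct me.
        (Get-ClusterNetwork "Storage1").Role = 3
        (Get-ClusterNetwork "Storage2").Role = 3
        (Get-ClusterNetwork "Storage3").Role = 3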

    Disclaimer: my previous experience with virtualization clusters is iSCSI SANs and 2008 R2 Hyper-V clusters. Storage Spaces is completely new to me :-)

    And due to the overlapping technologies, I struggled a bit with where to place this thread. Hope I got it right.


    Monday, October 13, 2014 5:05 PM

All replies