SOFS hardware setup advice please

    Question

  • Hello,

    I'm looking for some advice on setting up a SOFS cluster; the following is what I have in mind:

    2x HP DL380p servers running Server 2012 Datacenter and Hyper-V, with a 10Gb network card in each

    A 10Gb switch

    2x HP DL380p servers running Server 2012 Standard for the SOFS cluster, with a 10Gb network card in each, plus either FC or SAS connectivity

    An HP P2000 G3 connected to each SOFS host via FC, or an HP D2700 JBOD connected to each host via SAS

    Which would be the better option? I don't want overkill; however, I would like good performance at a decent cost.

    One more question: for the best possible resilience, would a P2000 with a D2700 attached, or just two D2700s, be OK? I don't like the idea of losing 50% of the space, but it looks like mirroring is the only option.

    Thanks

    Ross

    Wednesday, July 17, 2013 9:39 AM

All replies

  • [...]

    1) SAS has lower latency and is direct (FC will wrap SAS in FC and add a virtualization layer, injecting more latency, as there are no native FC disks these days). So if you don't already have FC in-house, go SAS.

    2) Separate SAS JBODs are of course better than a single unit: a SAS JBOD is mostly passive hardware with redundant PSUs, but it's not 100% redundant on its own. Duplication definitely makes sense, as it increases fault tolerance. You'll end up with something like the setup described here:

    http://blogs.technet.com/b/privatecloud/archive/2013/04/05/windows-server-2012-about-clustered-storage-spaces-issue.aspx

    3) The upcoming R2 does support parity with Clustered Storage Spaces, so you're not going to lose 50% (or even more, since you also need one physical disk for the quorum). See the link below, and the capacity sketch after this list:

    http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/MDC-B218#fbid=Bpho5V72AuV

    4) Make sure you understand that SoFS is not supported for generic workloads and is not a general file server replacement. Deployments with a set of big files work fine, but many frequently accessed small files will bring the cluster to its knees because of the LUN reservation lock/release overhead.

    See:

    http://technet.microsoft.com/en-us/library/hh831349.aspx

    5) Check these manuals on how to configure your SoFS on top of existing HA storage (FC, SAS or iSCSI does not matter in this case). Simply skip the StarWind configuration and look at the rest of the manuals and all the diagrams - they are the same for any underlying storage. See:

    http://www.starwindsoftware.com/sw-providing-ha-shared-storage-for-scale-out-file-servers

    http://www.starwindsoftware.com/config-ha-shared-storage-for-scale-out-file-servers-in-ws-2012

    There's also the possibility of deploying just a pair of SoFS machines and no physical shared hardware; the only thing you need is a set of Ethernet cables (ideally cross-over, no switches) between the SoFS nodes. See:

    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers

    6) I would also strongly suggest running a set of experiments, as you may find that running VMs (I'm guessing that's what you want to do with the SoFS, though I may be wrong) directly from DAS is faster than a setup with a long chain of re-routed Ethernet-based traffic...
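
    To make the capacity numbers in point 3) concrete, here is a minimal back-of-the-envelope sketch (plain Python arithmetic, not a Storage Spaces tool; the disk count and sizes are made-up examples) comparing a two-way mirror with a single-parity layout:

        # Rough capacity arithmetic for a hypothetical pool of 8 x 2 TB disks.
        # A two-way mirror keeps 2 copies of everything; single parity spends
        # roughly one column per stripe on parity (RAID 5-like efficiency).

        disks = 8
        disk_tb = 2.0
        raw_tb = disks * disk_tb

        mirror_usable = raw_tb / 2                      # 2 copies -> 50% usable
        parity_usable = raw_tb * (disks - 1) / disks    # (n-1)/n usable

        print(f"raw capacity:   {raw_tb:.1f} TB")
        print(f"two-way mirror: {mirror_usable:.1f} TB usable (50%)")
        print(f"single parity:  {parity_usable:.1f} TB usable "
              f"({100 * (disks - 1) / disks:.1f}%)")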

    Hope this helped a bit :)


    StarWind iSCSI SAN & NAS

    Wednesday, July 17, 2013 10:09 AM
  • Thanks for the detailed explanation.

    We don't have any FC in house; I just assumed it might give slightly better performance, but I guess if I daisy-chained D2700s they would be SAS anyway, so it wouldn't make much sense to use FC, not to mention the cost.

    How do DAS JBODs work? If I have a two-node Hyper-V cluster, does each node have a SAS connection to each JBOD? Does this work well, and can the VMs be backed up easily using DPM?

    We currently have a 4-node 2008 R2 Hyper-V cluster connecting to a 3-node HP P4500 G2 LeftHand Networks SAN via iSCSI. I'm looking to implement a new Server 2012 cluster with new hardware.

    I like the idea of the SOFS as we can then add other applications like SQL. Is it also possible to add a classic file server to the same cluster as the SOFS, using the same storage but a separate volume?

    Thanks for the help

    Wednesday, July 17, 2013 11:10 AM
  • [...]

    1) The general rule of thumb is: if you don't have FC in house already, don't deploy it now. Too expensive! With 10 GbE and InfiniBand available, you can hardly find a use case where FC's lower latency compensates for a 5x-10x higher deployment and lifecycle price.

    2) Yes, connect both Hyper-V nodes with SAS uplinks to every SAS JBOD (see the cabling sketch after this list).

    3) DPM deals with CSVs layered on top of Clustered Storage Spaces, so for DPM it does not matter what's providing the CSV back end. No issues.

    4) Yes, you can add a file server, but I'd recommend creating a VM, putting it on a CSV and configuring a failover cluster for your "ordinary" workload. Much easier to maintain.
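
    As a quick illustration of point 2), here is a tiny sketch (hypothetical host and enclosure names, nothing HP-specific) that simply enumerates the SAS cable runs you end up with when every Hyper-V node is uplinked to every JBOD:

        # Enumerate the SAS uplinks for a 2-host / 2-JBOD layout: every host
        # gets a cable to every enclosure, so losing one JBOD or one cable
        # never cuts a host off from all of the storage.
        from itertools import product

        hosts = ["hyperv-01", "hyperv-02"]   # hypothetical names
        jbods = ["jbod-a", "jbod-b"]         # hypothetical names

        cables = list(product(hosts, jbods))
        for host, jbod in cables:
            print(f"{host} --SAS--> {jbod}")
        print(f"total uplinks: {len(cables)}")  # 2 hosts x 2 JBODs = 4 cables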



    StarWind iSCSI SAN & NAS

    Wednesday, July 17, 2013 2:00 PM
  • [...]

    OK, great, thanks for the help. One more question: are there any recommended hardware requirements you know of for the nodes in a SOFS cluster?

    Thanks

    Wednesday, July 17, 2013 2:45 PM
  • [...]

    You're welcome!

    CPU is virtually never an issue these days, and the more RAM you throw in, the better (SMB 3.0 does effective server-side caching). The critical part of the configuration is the PCIe bus and 10 GbE / SAS gear compatibility (especially 10 GbE). Running out of PCIe lanes will limit your bandwidth, and not all PCIe hardware was created equal, so I would strongly recommend testing the hardware to see whether it can do wire speed over 10 GbE with TCP before wrapping everything into a SoFS setup. Also make sure you find a compromise between high-performance 15K rpm SAS drives and high-capacity NL-SAS: with a typical VDI workload, the whole setup runs out of IOPS long before it runs out of capacity. You may also look at flash caching and flash <-> spindle tiering coming with R2, as it can cut costs significantly and provide more usable space at the end of the day.
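
    To illustrate the "running out of lanes" point, here is a small sanity-check sketch (approximate usable per-lane figures, protocol overhead ignored) comparing PCIe slot bandwidth against 10 GbE wire speed:

        # Very rough bandwidth sanity check: can the slot feed the NIC?
        PCIE_LANE_GBPS = {        # approx. usable GB/s per lane
            "PCIe 2.0": 0.5,      # ~500 MB/s per lane
            "PCIe 3.0": 0.985,    # ~985 MB/s per lane
        }
        TEN_GBE_GBPS = 10 / 8     # 10 Gb/s line rate = 1.25 GB/s per port

        def slot_check(gen, lanes, nic_ports):
            slot = PCIE_LANE_GBPS[gen] * lanes
            need = TEN_GBE_GBPS * nic_ports
            return slot, need, slot >= need

        for gen, lanes in (("PCIe 2.0", 4), ("PCIe 2.0", 8), ("PCIe 3.0", 8)):
            slot, need, ok = slot_check(gen, lanes, nic_ports=2)  # dual-port 10 GbE NIC
            print(f"{gen} x{lanes}: {slot:.2f} GB/s slot vs {need:.2f} GB/s NIC -> "
                  f"{'OK' if ok else 'bottleneck'}")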

    Hope this helped :)



    StarWind iSCSI SAN & NAS

    Thursday, July 18, 2013 9:28 AM
  • OK thanks, I'll look into it. I may go with 2 shared DAS SAS JBODs connecting to the 2 Hyper-V cluster nodes. Looking at the costs, the initial setup of the SOFS isn't going to be cheap with the extra 10GbE switch modules and 2 extra servers.

    Cheers

    Thursday, July 18, 2013 10:07 AM
  • [...]

    Follow the links I gave. You'll get the idea of how to a) eliminate the need for SAS JBODs, SAS controllers and SAS drives at the back end by basically mirroring "el cheapo" SATA drives directly between the SoFS boxes (so you still have two hypervisor servers and two SoFS servers, but no SAS infrastructure), and b) feed virtual LUNs directly to Hyper-V (in this case you also eliminate the SoFS servers and end up with a pair, or maybe three, Hyper-V nodes providing HA storage to themselves). That's what VMware ESXi does out of the box with its VSA thing. Basically the same idea here, just the implementation is native (no guest VM spawned).

    StarWind iSCSI SAN & NAS

    Thursday, July 18, 2013 7:21 PM
  • OK, thank you, that sounds like a good idea. I have also been looking at HP VSA, which does the same thing, and also at a P2000 directly attached via SAS; however, I cannot use Storage Spaces with the P2000.

    I spoke to a storage guy who works with our supplier (he was VMware-focused, though); he wasn't familiar at all with Storage Spaces or SOFS, and he didn't think it sounded like a good idea. What's your opinion on P2000 RAID storage vs a JBOD using Storage Spaces?

    If I were to go down the HP VSA route, do you have a VSA VM per Hyper-V node, and how would it connect to the storage that is on the servers? Our current P4500 connects via iSCSI.

    Cheers!
    Ross

    Friday, July 19, 2013 5:19 PM
  • [...]

    1) HP (ex-LeftHand) VSA is by far the best idea, for a set of simple reasons: it's an ancient (~5 years old) Linux running inside a guest virtual machine and feeding storage to your cluster. First of all it's slow (because of the virtualization overhead - all I/O is routed through the VMbus driver), and second, if something goes wrong you have very little chance of fixing it (unless you're a Linux admin and have a root shell on the VSA). Also, you need to understand that you get your CSV much later than with other solutions: when the storage stack initializes, your VSA is not powered up yet. Stick with native apps (I think I gave you all the possible options) or the hypervisor vendor's blessed way (like VMware VSA if you're on the "dark" side, or Clustered Storage Spaces with the optional SoFS thing if you're still with Microsoft).

    2) Send "that guy" to the library or buy him TechEd entrance ticket. SoFS and Clustered Storage Spaces are the ways Microsoft is pushing real hard with Windows Server 2012 (R2). So it's a future of Microsoft storage for a while. If "that guy" is not familiar with the basic concepts this means he had spent too much time with VMware. IMHO.

    3) There's not much sense in the P2000, again for a set of reasons: Microsoft DOES NOT support virtualized LUNs with Clustered Storage Spaces; it needs direct access to the SAS spindles. So the only suitable P2000 model (the one with SAS uplinks) has something you're never going to use - the built-in RAID logic. iSCSI and FC are not supported with Clustered Storage Spaces and are slower at the end of the day (SAS basically goes directly to the SAS port on your motherboard, while iSCSI and FC add an extra virtualization layer, extra latency and an extra pipeline stage - no sense running "old school"). So you'll pay extra and not be able to use the features you've been charged for. What's the point? That's why the P2000 is a BAD idea: you can pay 1/3 and get more bays from Supermicro. And Supermicro will not try to rip you off selling overpriced re-badged Seagates (something HP will definitely try). Ask the HP sales rep three questions: a) Will you be able to use the RAID functionality with Clustered Storage Spaces? b) Will you be able to use FC and iSCSI with CSS? and c) Can you fill the HP enclosure with random disks and still have non-void support?

    4) See, I'm not a big fan of HP as a storage company. They make and sell printers. The storage pieces they either license (LSI?) or acquire as a whole company (LeftHand). It's hard not to lose momentum and keep everything up and running without keeping the acquired company as a stand-alone business unit (HP did not).

    5) If you'd still go with VM-run storage (there are tons of other companies doing this), then yes, you'll end up with a single VM running on every Hyper-V host. Storage is connected to the VM as a VHD(X), and the VHD(X) content is placed on the DAS you're trying to cluster and share between the nodes. I/O runs through the VMbus, the network is routed through the vSwitch, etc.


    StarWind iSCSI SAN & NAS

    Saturday, July 20, 2013 6:30 AM
  • OK cool, guessing you meant VSA is the worst idea, not the best :) I did want to stick with Clustered Storage Spaces; I'll go to HP and my supplier with those questions and see what they say. Initially I was after just a JBOD (which works with CSS) so I could use Storage Spaces. Thanks for all your information, you have been a really great help! Cheers, Ross
    Saturday, July 20, 2013 8:11 AM
  • [...]

    1) Yes, you're right. I'm sorry about this...

    2) Excellent! Would be nice if you'd share some implementation results later :) 


    StarWind iSCSI SAN & NAS

    Sunday, July 21, 2013 3:16 PM
  • Hello again,

    I have still been busy researching a solution and getting prices.

    I have almost completely ruled out HP VSA due to cost and performance. I also looked at a Quanta Computer two-node server with direct-attached storage, which looked like a good solution; however, it can only be expanded with more storage, not more nodes.

    The only options I can see as viable without going to an iSCSI hardware SAN are:

    Hyper-V hosts connected to two SAS JBODs (what is the max number of hosts you could direct-connect?)

    StarWind Native SAN (I looked into this and it sounds very good; do you know if the 3-node limit will be increased? Is Microsoft DPM able to take VM child backups?)

    The original plan: a two-node SOFS with the Hyper-V nodes accessing the shares for storage.

    Which option do you believe to be best? I will be running around 25-30 VMs.

    Thanks

    Friday, August 09, 2013 4:16 PM
  • [...]

    1) What numbers did you manage to squeeze out of HP VSA that made you so disappointed? Can you share them?

    2) I'd stay away from "cluster-in-a-box" solutions, as they are extreme vendor lock-in. You cannot replace a single component of the infrastructure; rather, you need to perform a complete hardcore forklift upgrade. Also, as you've mentioned, they only scale up rather than scale out as you'd expect.

    3) You can connect as many hosts as you have ports on the SAS JBOD. Typically there are two sets, "in" and "out", so you can connect two servers; for more you'll need an expander, etc. (see the sketch at the end of this post). Take a look at this book on SAS:

    http://www.lsi.com/about/contact/Pages/SASSANsforDummies.aspx

    It's small, clean and answers your SAS questions with pictures and numbers.

    4) Yes, V8 will do 4 nodes in RAID 0+1 (don't confuse it with 1+0, aka 10) plus 2 async replica nodes. Post-V8 will do complete scale-out with an unlimited number of nodes.

    5) DPM works with it just fine. The hardware VSS provider should help reduce the time the cluster spends in redirected mode.

    It's hard for me to recommend one, as I'm the StarWind CTO, so it would sound like I'm selling you the solution. In your place I'd get both (dual SAS JBODs + SAS disks vs. StarWind + SATA on, say, 2 or 3 Hyper-V servers) and compare performance and installation costs. You can combine either solution with SoFS just fine.
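
    As a trivial illustration of the port-count limit from point 3), here is a sketch (placeholder numbers, not the spec of any particular enclosure - check the JBOD data sheet) of how the host-facing SAS ports cap the number of direct-attached servers:

        # How many servers can you direct-attach to a JBOD without an expander?
        # Simply the host-facing SAS ports divided by the paths each server uses.
        def max_direct_hosts(host_ports, paths_per_host=1):
            return host_ports // paths_per_host

        # Placeholder numbers for illustration only:
        print(max_direct_hosts(host_ports=2))                    # 2 servers, single path each
        print(max_direct_hosts(host_ports=4, paths_per_host=2))  # 2 servers, dual path each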


    StarWind iSCSI SAN & NAS

    Sunday, August 11, 2013 8:26 PM
  • I didn't even test the HP VSA; I don't like the idea of it running in a VM, plus I was quoted £2250 per VSA! That's without support.

    Definitely staying away from cluster-in-a-box; they seem like a good idea from the marketing, but once you look into it there is always at least one point of failure.

    Cool, thanks for that link. We could run 4 nodes with 2 JBODs, which is a good plan; it just depends on cost.

    Thanks. I assume you recommend RAID 1 on the servers if using StarWind in a 2-node setup? With the 3-node setup, do you recommend RAID 0 on the servers and letting StarWind do the mirroring alone, as it is unlikely that all 3 servers will fail? Is it possible to run RAID 1+0, or would that be a waste of storage? I hope what I am asking is clear.

    I can try StarWind no problem; I don't know if we could try the JBOD solution before purchasing.

    Narrowing it down, the options to try are:

    2-4 Hyper-V hosts connecting to 2 JBODs.

    StarWind Native SAN over 2-3 hosts filled with disks.

    StarWind iSCSI SAN over 2 SOFS nodes.

    Thanks

    Monday, August 12, 2013 10:46 AM
  • Right... It will be interesting to see whether VMware does their upcoming vSAN as a VM-based or a hypervisor kernel-based solution :)

    A cluster-in-a-box can have all the components duplicated (or triplicated), so no actual SPOF. The problem is that very few of these architectures allow you to grow capacity with passive expansion nodes (I personally cannot recall any names). Say, EqualLogic can do scale-out, but to grow capacity you need to buy more active nodes (every one is a cluster-in-a-box running a proprietary OS on MIPS hardware). From this point of view, nearly-all-passive SAS JBODs look MUCH more promising.

    Yes, mirror on the underlying drives + mirror between nodes for just a pair of nodes. RAID 0+1 (mirror between nodes, with every node running a stripe) for 3 nodes and up. RAID 1+0 aka 10 is not supported for performance reasons: aggregating a single stripe from many nodes slows everything down on the inter-node communications... However, at the end of the day, doing a mirror between stripes gives away 33% or 25% of the raw capacity (3 or 4 nodes accordingly). As we don't require SAS and are fine with SATA, you'll basically be paying with capacity for IOPS, as MPIO splits requests between the nodes (performance goes up with every new node added).
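
    Here is a rough, idealized sketch (not a StarWind formula; all figures are assumptions for illustration only) of why this trades capacity for IOPS: reads get split across replicas by MPIO, while every write has to land on each replica.

        SPINDLE_IOPS = 75      # assumed random IOPS for a 7.2K SATA drive
        DISKS_PER_NODE = 8     # assumed spindles striped inside each node

        def node_iops():
            # RAID 0 stripe inside one node: IOPS add up across spindles.
            return DISKS_PER_NODE * SPINDLE_IOPS

        def cluster_read_iops(nodes):
            # Reads are split across replicas by MPIO, so they scale with node count.
            return nodes * node_iops()

        def cluster_write_iops(nodes):
            # Every write is mirrored to each replica, so writes roughly stay flat.
            return node_iops()

        for n in (2, 3, 4):
            print(f"{n} nodes: ~{cluster_read_iops(n)} read IOPS, "
                  f"~{cluster_write_iops(n)} write IOPS (idealized)")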

    I don't think it's a big deal to get the hardware, put it to the test, and then return it if it doesn't work for you the way you expect. However, I agree it's always a PITA to evaluate actual hardware solutions. That's one of the reasons software-defined storage is now starting to rule, following software-defined networking :)

    The listed options are all perfectly valid. Just make sure you run high-capacity SATA spindles with StarWind and not SAS (a waste of money; we layer more logic and cache above the platters than SAS hardware does). I'll be happy to put you in touch with our techies so they can help with installation, configuration and so on. I would love to see the numbers for both $$$ and IOPS captured and published here or on your blog. It doesn't matter whether you go with StarWind at the end of the day or not - other guys will love your efforts!

    Thanks!


    StarWind iSCSI SAN & NAS

    Tuesday, August 13, 2013 11:20 AM
  • Thanks, very helpful information! Shame we only use Hyper-V; it would be nice if Microsoft did a virtual SAN.

    What is the recommended number of network adapters in each server for StarWind? I would like to install the trial version on two servers, although it won't really be a performance-based evaluation, as I only have ancient DL380s (G4) with just 2 network ports to use for testing.

    In production I would be using G8 DL380s with 10Gb SFP for the iSCSI, and 1Gb for the heartbeat.

    Thanks

    Ross

    Tuesday, August 27, 2013 12:36 PM
  • [...]

    10 GbE for the backbone is recommended (with a pair or three Hyper-V servers you can go switch-less to cut costs; performance would also be better because IP switching adds latency). Use a pair of redundant GbE networks for the heartbeat (you can always share them with other tasks, so they don't need to be dedicated, unlike the sync aka backbone network). That's all :)
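
    A quick sketch of why switch-less only makes sense for a handful of nodes: a full mesh of direct sync links grows as n(n-1)/2, so it stays cheap at 2-3 nodes and gets out of hand quickly after that.

        # Direct (switch-less) 10 GbE sync links needed for a full mesh of n nodes.
        def mesh_links(n):
            return n * (n - 1) // 2

        for n in range(2, 7):
            print(f"{n} nodes -> {mesh_links(n)} direct links, "
                  f"{n - 1} 10 GbE ports per node")
        # 2 nodes -> 1 link, 3 nodes -> 3 links; beyond that a switch starts to pay off.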

    StarWind iSCSI SAN & NAS

    Tuesday, August 27, 2013 3:10 PM
  • OK great, for testing purposes on the old servers I'll use a switch for the iSCSI, then heartbeat and LAN on the other port.

    Thanks :)

    Tuesday, August 27, 2013 5:02 PM
  • [...]

    That should work fine :) Please share some testing numbers as soon as you have them. If you need pre-installation support, just ping the StarWind techies. "Let us run your storage infrastructure for you!" (c) StarWind. Thank you and good luck!

    StarWind iSCSI SAN & NAS

    Wednesday, August 28, 2013 7:15 PM