SSD RAID on iSCSI Storage

  • Question

  • I am looking for some opinions, ideally from those who have experience with this. We are looking into ordering 4 solid-state drives to put into a RAID. They're fairly expensive, so I am weighing which RAID configuration to use. RAID 10 has usually worked best for me in most situations, but if we do RAID 5, we gain some space and get more for our money. However, I suspect we would get better overall performance from RAID 10, since we will be hosting SQL Server on the storage and there will be a lot of I/O. We cannot do RAID 0 because we must be able to tolerate at least one drive failure. Anyway, I was wondering what experience you've all had with SSD RAID on iSCSI storage. We are going to be attaching the storage to a clustered host group running Hyper-V 2012. Thank you in advance!
    Friday, August 8, 2014 7:32 PM
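For concreteness, the space-versus-money trade-off the question describes can be worked out for a 4-drive array. This is a quick sketch; the drive size is a placeholder assumption, not something stated in the thread:

```python
# Usable-capacity comparison for the 4-drive SSD array discussed above.
# DRIVE_GB is a hypothetical drive size chosen only for illustration.

DRIVES = 4
DRIVE_GB = 960  # assumed SSD size in GB

# RAID 10: drives are paired into mirrors, so half the raw space is usable.
raid10_usable = DRIVES // 2 * DRIVE_GB

# RAID 5: one drive's worth of space is consumed by parity.
raid5_usable = (DRIVES - 1) * DRIVE_GB

print(f"RAID 10 usable: {raid10_usable} GB")  # 1920 GB
print(f"RAID 5  usable: {raid5_usable} GB")   # 2880 GB
print(f"RAID 5 gain: {raid5_usable - raid10_usable} GB "
      f"({raid5_usable / raid10_usable - 1:.0%} more)")
```

With 4 drives, RAID 5 yields 50% more usable space than RAID 10, which is exactly the "more for our money" pull the question mentions.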

Answers

    1) RAID 10, of course, and for a reason: RAID 5, like any parity-based RAID, turns small writes into a read-modify-write sequence, so even the smallest write forces extra parity reads and writes across the RAID group members. Long story short: you'll burn through your flash cells with a pile of parasitic writes, resulting in tons of extra erase-program flash operations. More here:

    Flash Architecture

    http://www.violin-memory.com/products/technology-architecture/

    "Existing RAID 5 and 6 solutions rely on Read-Modify-Write operations that are unsuited to flash."
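The small-write penalty described above can be made concrete with a simplified model (my sketch, not from the thread; it ignores controller caching and full-stripe optimizations). For a sub-stripe random write, RAID 10 writes the block to both mirrors, while RAID 5's read-modify-write path must read the old data and old parity, then write the new data and new parity:

```python
# Simplified per-logical-write I/O model for small (sub-stripe) random
# writes. Assumes no write-back caching and no full-stripe coalescing.

def raid10_ops(logical_writes: int) -> dict:
    """RAID 10: each logical write goes to both mirror copies."""
    return {"reads": 0, "writes": 2 * logical_writes}

def raid5_ops(logical_writes: int) -> dict:
    """RAID 5 read-modify-write: read old data + old parity,
    then write new data + new parity."""
    return {"reads": 2 * logical_writes, "writes": 2 * logical_writes}

n = 10_000  # hypothetical burst of small random writes
print("RAID 10:", raid10_ops(n))  # {'reads': 0, 'writes': 20000}
print("RAID 5 :", raid5_ops(n))   # {'reads': 20000, 'writes': 20000}
```

Both levels double the write volume, but RAID 5 adds two dependent reads in front of every small write, serializing the latency path and keeping the flash busier per logical write, which is the "parasitic" overhead the answer is pointing at.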

    2) I don't think hiding fast flash behind slow Ethernet is a good idea. You'll get much better performance if you use SQL Server AlwaysOn Availability Groups: I/O goes directly to the flash in each host, and only a minimal amount of writes is synchronized over the slower switched Ethernet fabric. See:

    AlwaysOn Availability Groups

    http://msdn.microsoft.com/en-us/library/ff877884.aspx

    This whole thing requires an Enterprise license, however. As an alternative, you can use FCI (Failover Cluster Instances) with a Virtual SAN running, again, on locally mounted flash. That gives similar performance and fault tolerance to AlwaysOn AGs, but it may be cheaper because there's no need for Enterprise licensing, and possibly faster because RAM is used as an L1 block cache absorbing writes. (The In-Memory feature in SQL Server 2014 will still be faster, also because of natively compiled stored procedures, but that's another story...)
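The "local flash vs. flash behind Ethernet" argument comes down to a latency budget. The figures below are my own order-of-magnitude assumptions for illustration (not measurements from the thread), but they show why putting a network hop in front of every I/O erodes the SSDs' advantage:

```python
# Illustrative latency budget for one small random read.
# All numbers are assumed, order-of-magnitude figures in microseconds.

LOCAL_FLASH_US = 100       # assumed SATA SSD read latency
NETWORK_RTT_US = 200       # assumed switched-GbE round trip incl. TCP stack
ISCSI_TARGET_US = 50       # assumed target-side iSCSI protocol overhead

local_path = LOCAL_FLASH_US
iscsi_path = LOCAL_FLASH_US + NETWORK_RTT_US + ISCSI_TARGET_US

print(f"local flash : {local_path} us")
print(f"iSCSI flash : {iscsi_path} us ({iscsi_path / local_path:.1f}x slower)")
```

Under these assumptions the network and protocol overhead dwarfs the media latency itself, which is why keeping I/O on host-local flash (AlwaysOn AG or FCI with a Virtual SAN) and sending only replication traffic over Ethernet performs better.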


    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, and uses Ethernet to mirror internally mounted SATA disks between hosts.

    Friday, August 8, 2014 9:28 PM