Storage Spaces Direct and Raid0 (instead of pass-through)

    Question

  • I am trying to test Storage Spaces Direct using Dell R720s with the PERC H710P Mini on-board controller. One limitation of the H710P RAID card is that it does not support pass-through, so I tried to define several of the on-board drives as single-disk RAID0 volumes. However, none of the RAID0 volumes are detected as acceptable disks (see below).

    Two questions:

    1) Is using individual disks presented as RAID0 LUNs out of the question currently?

    2) Is there a chance this configuration will be supported down the road?

    Here are the commands I tried so far ->

    PS C:\Windows\system32> Enable-ClusterS2D -SkipEligibilityChecks
    WARNING: No elegible DAS disks found.

    PS C:\Windows\system32> Enable-ClusterStorageSpacesDirect -Verbose
    VERBOSE: Connecting to cluster on local computer ECS-HYPERV501.
    VERBOSE: Performing the operation "Enable-ClusterStorageSpacesDirect" on target "ecs-hv-clust1.XXX.XXX".
    WARNING: No elegible DAS disks found.

    PS C:\Windows\system32> Get-PhysicalDisk | ? CanPool -eq $true

    FriendlyName    SerialNumber                     CanPool OperationalStatus HealthStatus Usage            Size
    ------------    ------------                     ------- ----------------- ------------ -----            ----
    DELL PERC H710P 00827f569b973f181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 00ce9c649dba3f181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 004a5cb79ed03f181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 00ff56e99fe43f181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 0008f142a1fa3f181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 00aa114aa20c40181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 0014136fa31f40181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 00bf03a5a43340181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 00fe4674a76240181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB
    DELL PERC H710P 00c00b4aaa9240181e00ede3e860f681 True    OK                Healthy      Auto-Select 558.38 GB

    Friday, January 01, 2016 3:23 PM

Answers

  • The storage controller must be a simple HBA. In your case it's a RAID controller, which is not going to be supported.

    For more detail on hardware, please see my blog post here: http://blogs.technet.com/b/clausjor/archive/2015/11/23/hardware-options-for-evaluating-storage-spaces-direct-in-technical-preview-4.aspx 

    • Proposed as answer by Tim Cerling [MVP] Saturday, January 02, 2016 10:52 PM
    • Marked as answer by GT-CFG-COE Saturday, January 02, 2016 11:46 PM
    Saturday, January 02, 2016 7:20 PM

All replies

  • Disks must NOT be presented as any part of a RAID set.  Disks MUST be presented as JBOD.

    . : | : . : | : . tim

    Saturday, January 02, 2016 3:15 PM
  • Thanks Tim.

    This is what I tried to do by defining each individual disk as its own Raid0 (one disk = one Raid0).

    Unfortunately the H710P RAID controller does not let me do pass-through (there are lots of complaints about this in various forums), which is why I was investigating whether or not one disk = one RAID0 would work.

    That issue does not exist with newer Dell servers (R730 + H330 or H730). I may investigate whether I can add a different RAID card and re-route the SAS cables.


    Saturday, January 02, 2016 3:59 PM
  • The storage controller must be a simple HBA. In your case it's a RAID controller, which is not going to be supported.

    For more detail on hardware, please see my blog post here: http://blogs.technet.com/b/clausjor/archive/2015/11/23/hardware-options-for-evaluating-storage-spaces-direct-in-technical-preview-4.aspx 

    • Proposed as answer by Tim Cerling [MVP] Saturday, January 02, 2016 10:52 PM
    • Marked as answer by GT-CFG-COE Saturday, January 02, 2016 11:46 PM
    Saturday, January 02, 2016 7:20 PM
  • Thanks a lot. I had come across your blog, but I was hoping maybe RAID0 would be an option. I think I was confused by the fact that the RAID0 devices were returned by the command Get-PhysicalDisk | ? CanPool -eq $true

    No problem. I can understand the logic behind a "pure" SAS + HBA model, especially when introducing a software storage bus.

    Much appreciated.

    Saturday, January 02, 2016 7:56 PM
  • "This is what I tried to do by defining each individual disk as its own Raid0 "

    RAID0 is a form of RAID.  It means two or more disks are created in a striped RAID set.  I've never heard of a single volume RAID0.  I'm surprised the RAID controller allows that.

    But, as you can see from Claus' post, the controller must return a specific bus type for Storage Spaces Direct.  This is more strict than the requirement for Standalone Storage Spaces where RAID controllers can be used, but only if they have the ability to present the disks as JBOD.


    . : | : . : | : . tim

    Saturday, January 02, 2016 10:52 PM
  • "This is what I tried to do by defining each individual disk as its own Raid0 "

    RAID0 is a form of RAID.  It means two or more disks are created in a striped RAID set.  I've never heard of a single volume RAID0.  I'm surprised the RAID controller allows that.

    The single-volume RAID0 is not something I would use in production, but it has been used previously by some folks as a work-around when dealing with a RAID controller which will not present disks as JBOD (pass-through). This is a known issue with the PERC H710 from Dell (LSI Logic OEM - the 9266-8i, I believe) where the feature is disabled.

    Will be looking at using the LSI Logic 9207-8i for testing.

    Thanks a lot to both you and Claus for taking the time to reply to my post on a weekend.

    Saturday, January 02, 2016 11:55 PM
  • The storage controller must be a simple HBA. In your case it's a RAID controller, which is not going to be supported.

    For more detail on hardware, please see my blog post here: http://blogs.technet.com/b/clausjor/archive/2015/11/23/hardware-options-for-evaluating-storage-spaces-direct-in-technical-preview-4.aspx 

    Hello guys, same problem here.

    I am trying to run S2D on a Dell R630 with a PERC H730.

    The disks are configured in non-RAID mode, but the OS recognizes them as RAID. Even if the disks are configured in HBA mode, they are still recognized as RAID. I thought newer drivers or firmware might solve it. Is there some solution to run S2D with a PERC RAID controller?

    I saw a workaround with the property below, but in TP4 it does not work for me.

    (Get-Cluster).DASModeBusTypes=0x100

    Dell R630 

    PERC H730

    Firmware Version 25.3.0.0016 
    Driver Version 6.603.6.0 
    Wednesday, March 16, 2016 10:48 AM
  • If your RAID controller cannot present the disks as JBOD, then they will not be recognized as available to S2D.  Not all RAID controllers can present the disks properly.

    Here is the key verbiage from the indicated blog -

    The devices must show with bustype as either SAS (even for SATA devices) or NVMe (you can ignore your boot and system devices). If the devices show with bustype SATA, it means they are connected via a SATA controller which is not supported. If the devices show as bustype RAID, it means it is not a “simple” HBA, but a RAID controller which is not supported. In addition the devices must show with the accurate media type as either HDD or SSD.

    In addition, all devices must have a unique disk signature. The devices must show a unique device ID for each device. If the devices show the same ID, the disk signature is not unique, which is not supported.
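    If it helps, a quick way to see how your drives are surfaced is the snippet below (a minimal sketch using the standard Storage cmdlets; the properties are the ones Get-PhysicalDisk already exposes):

    # List how each data drive is presented to the OS (ignore the boot/system disk).
    Get-PhysicalDisk |
        Sort-Object BusType |
        Select-Object FriendlyName, BusType, MediaType, CanPool, UniqueId |
        Format-Table -AutoSize

    # BusType should read SAS or NVMe (SATA devices behind a SAS HBA also report SAS).
    # RAID or SATA here is exactly what makes the disks ineligible for S2D, and
    # duplicate UniqueId values mean the disk signatures are not unique.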


    . : | : . : | : . tim

    Wednesday, March 16, 2016 11:27 AM
  • Thanks for the reply. I understand, but from my point of view it can only be a matter of drivers. The disks in the system are visible as JBOD; only the bus is detected as RAID.

    We are very interested in S2D, but we already have a bunch of servers (with PERC RAID controllers) that we would like to use to run S2D in a hyper-converged solution. If we have to buy special hardware for S2D, it loses the advantage of being a cheap solution.

    I will try Dell support...

    Wednesday, March 16, 2016 12:22 PM
  • "Thank for reply I understand  but from my point of view it can be only a matter of drivers. <snip>  I will try Dell support..."

    Yep, vendors have to provide drivers that support this.  Not all RAID controllers will be able to support it.  Dell is the only source for that support for their devices.


    . : | : . : | : . tim


    Wednesday, March 16, 2016 2:53 PM
  • I am having the exact same issue with a Dell R730xd, which came from the hardware configuration page mentioned above. I think it is a driver problem: Dell needs to fix that, and Microsoft needs to give us testers a way to do our tests without that limitation.

    Thursday, March 17, 2016 1:34 PM
  • Dell R730xd (from your hardware configuration page) supports HBA and RAID mode. Unfortunately Windows ALWAYS shows the BusType RAID which is simply wrong.
    Thursday, March 17, 2016 1:36 PM
  • "Unfortunately Windows ALWAYS shows the BusType RAID which is simply wrong."

    Microsoft is reporting what the device driver is handing to it.  This is a driver issue.  It is not something that Microsoft controls.  Not all RAID controllers correctly report.  I have systems with RAID controllers which report correctly.  I have other systems which do not report correctly.  Different drivers.
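    If you want to check which controller and driver version the disks are actually sitting behind, something like this should show it (a rough sketch; Win32_PnPSignedDriver is a standard CIM class, and SCSIADAPTER is the usual device class for storage controllers):

    # List the storage controllers and the driver each one is using.
    Get-CimInstance Win32_PnPSignedDriver |
        Where-Object { $_.DeviceClass -eq 'SCSIADAPTER' } |
        Select-Object DeviceName, DriverProviderName, DriverVersion, DriverDate |
        Format-Table -AutoSize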


    . : | : . : | : . tim

    Thursday, March 17, 2016 3:07 PM
  • After successfully testing S2D in a Hyper-V dev environment, I finally decided to try it out on physical servers.  Now I have spent 5+ hours on a Friday evening moving VMs/wiping/reconfiguring/installing/updating to find out that EVEN IF I PRESENT SINGLE-DISK RAID 0 VOLUMES to the OS on my DL360 G6 w/ SmartArray P410i controller........ I have zero hope of using S2D. :banghead:

    I want my Friday evening back.  

    Starwind anyone?  I was wondering what their business model would be after S2D... but now I know.  

    P.S. HP VSA sucks


    • Edited by Hunter DG Saturday, March 19, 2016 4:25 AM
    Saturday, March 19, 2016 4:25 AM
  • Hi Hunter,

    I don't understand your comment... why are you trying to use hardware RAID with Spaces Direct?  As Tim has outlined, that is not supported... and it is not supported to use your HBA in any form of RAID mode.

    Can you please provide more context in what you are trying to do and why?  I'm sorry to hear you had a great test experience with VMs, only to be frustrated when trying to do a real deployment on metal.  :-(

    Thanks!
    Elden

    Tuesday, March 22, 2016 2:58 AM
  • Hi Elden, thank you very much for your response! I have some older hardware (HP DL360 G6) with embedded SmartArray P410i RAID cards and 15k SAS disks. It is apparently not possible to present the individual disks directly to the OS as with an HBA/JBOD. What I CAN do (and what this thread is about, unless I am mistaken) is simply create one RAID 0 logical disk per physical disk, which in my mind is effectively the same as presenting the individual disks to the system. All RAID functionality is bypassed; I can actually REMOVE the RAID FBWC and battery.

    Unfortunately, in this situation, the single-disk RAID 0 'logical' disks are not presented to the OS as SAS or SATA, which means S2D cannot be used. This is unfortunate. I would much rather use an integrated solution such as S2D than a third-party product like the HP or StarWind VSA solutions. I was looking forward to simply buying several Server 2012>2016 licenses and ditching my HP VSA setup (in fact I don't even use it because it is not reliable).

    If you have any contribution to S2D development, please suggest a method to bypass the 'SAS or SATA' requirement, if technically feasible. I don't need it to be 'supported' even. I don't need it to be fast. I (and perhaps many others) would just love to have the OPTION available. I don't have the budget to buy new servers, and I'm not interested in buying new HBA cards. I would simply like to move beyond Hyper-V Replica DR to clustered S2D HA. I hope I have explained myself well. Thanks again!
    Tuesday, March 22, 2016 4:01 AM
  • I appreciate your desire to use Storage Spaces Direct. Storage Spaces Direct requires SAS and SATA disk devices to be connected via a simple SAS HBA. You cannot use RAID controllers in either RAID or JBOD mode, as the drives are still obfuscated by the controller and it uses a different driver stack.

    Cheers

    ClausJor [MSFT]

    Principal Program Manager Storage Spaces Direct.

    Wednesday, March 23, 2016 2:59 AM
  • Thanks for the response. I'm curious, is the restriction for a technical reason? Can you please explain in more detail as to WHY? Does ReFS require direct access? An obfuscated disk is still a disk. Or perhaps it is a restriction/business decision in order to be eligible for support from Microsoft? I was so looking forward to S2D.
    Thursday, March 24, 2016 4:57 AM
  • Yes, it is a technical issue.  S2D is written to look for SAS or SATA JBOD.  It is not uncommon for new technology to come with specific requirements, nor is it uncommon for new technology to require newer hardware.  Yes, an obfuscated disk is still a disk, but if the type of disk/controller being used is not crystal clear (and by saying it is obfuscated you are declaring that to be the case), there is no way for the technology to know that the disk will react in a manner consistent with the technology.  If S2D sees a SAS or SATA JBOD disk, it knows exactly what to expect.  If it sees a RAID0 disk coming from a RAID controller, it does not know whether the disk will respond without the RAID controller stepping in and doing something.

    . : | : . : | : . tim

    Thursday, March 24, 2016 12:07 PM
  • Thanks for the response. I am (clearly) aware that it only works with SAS/SATA. I was hoping for a description of the technical reason behind this limitation, as in WHY it is written to only work with SAS and SATA.

    To your further explanation: what exactly does S2D need to know or expect from a SAS/SATA disk (that it can't get from a RAID disk), and why? Are SAS or SMART disk attributes taken into consideration? Does it need to be able to detect when the disk cache has been flushed to disk? What else?

    More specifically, is there an insurmountable technical reason, and if so, what is it? For example: neither the lack of SMART attributes nor the lack of knowledge about the disk cache is an insurmountable problem. Desirable to know about, sure, but the developers could easily remove such requirements, or perhaps allow them to be bypassed, and the underlying function of S2D would work fine. Perhaps there would be more risk of data loss (and thus such scenarios would not be 'supported'), but the potential usage scenarios would greatly increase. Is there a technical limitation that cannot be engineered around, or is it just a business decision to help protect against poorly designed deployment scenarios (as in 'RAID should probably not be allowed because the OS won't be able to detect the volume's physical characteristics and confirm only individual drives are being presented; that is undesirable, so let's just prevent it from happening')?

    • Edited by hunterdg Friday, March 25, 2016 12:07 AM
    Thursday, March 24, 2016 11:44 PM
  • "Is there a technical limitation that cannot be engineered around, or is it it just a business decision "

    My hunch is the latter.  After all, it is a first-generation product.  Why muddy the water with interfaces that are not as 'clean' as other interfaces?  From a business standpoint, it makes a lot more sense to start with something having well-defined standard interfaces rather than something that is very dependent upon the implementer.


    . : | : . : | : . tim

    Friday, March 25, 2016 1:47 PM
  • After successfully testing S2D in a Hyper-V dev environment, I finally decided to try it out on physical servers.  Now I have spent 5+ hours on a Friday evening moving VMs/wiping/reconfiguring/installing/updating to find out that EVEN IF I PRESENT SINGLE-DISK RAID 0 VOLUMES to the OS on my DL360 G6 w/ SmartArray P410i controller........ I have zero hope of using S2D. :banghead:

    I want my Friday evening back.  

    Starwind anyone?  I was wondering what their business model would be after S2D... but now I know.  

    P.S. HP VSA sucks


    1) That's going to be fixed. Eventually. There are not so many OEM RAID controller vendors, so give them some time and they will settle down with a working solution. See, even the PERC H730 isn't on the VMware VSAN 6.2 HCL yet! So the best-selling server and best-selling controller aren't supported by the best-selling hypervisor :)

    https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144614

    PERC H730 + VSAN 6.2 = NO GO

    "The certification of the Dell PERC H730 and FD332 based storage controllers on VMware Virtual SAN 6.2  is pending due to an issue identified during testing.  "

    Software-Defined Storage is cool and really is about software, but... SDS is *DEAD IN THE WATER* without proper testing. I can only wish a lot of passion, time, and serenity to the Microsoft storage engineers! Now they are in the same shoes some guys have been in since the early 2000s.

    2) StarWind has multiple revenue streams (ready nodes aka appliances, the actual software, and services), and the % from Hyper-V software is not so big. Either way, we're now testing with S2D doing erasure coding and replication and our layer doing deduplication, log-structuring, DRAM caching, and proper uplinks like HCL-ed iSCSI / iSER / RDMA, and everything works fine. At least there's going to be a way to feed S2D-managed storage to VMware. We're complementary to Microsoft, not competitive.

    I did a good explanation some time ago, here:

    Storage Spaces Direct Vs. XXX

    https://community.spiceworks.com/topic/1445491-s2d-vs-starwind-virtual-san?page=1#entry-5551567

    3) I'd disagree! HP VSA is a very solid product. They have 7+ years of maturity, more paying customers than anybody else in this space, and remember - it was they who actually started the hyper-converged game many years ago :)

    HP VSA

    https://charbelnemnom.com/2014/06/deploying-hp-storevirtual-vsa-on-hyper-v-2012-r2-cluster-part-1-hyperv-hp-storage-storevirtual-sysctr/

    They really had to add proper erasure coding, deduplication and some SMB3 gateway VMs to their solution after 2012 release. 

    Either way I wish you good luck!

    P.S. TP4/5 S2D is getting to a point where it's actually usable!

    Storage Spaces Direct TP4 Step-by-Step Guide

    https://slog.starwindsoftware.com/microsoft-storage-spaces-direct-4-node-setup/

    Which is very cool. VMware has to catch up and add VSAN to Enterprise license instead of using it as a separate SKU.


    Cheers,

    Anton Kolomyeytsev [MVP]

    StarWind Software Chief Architect


    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.


    • Edited by VR38DETT [MVP] Friday, April 01, 2016 7:53 PM URL
    Friday, April 01, 2016 2:11 PM
  • "This is what I tried to do by defining each individual disk as its own Raid0 "

    RAID0 is a form of RAID.  It means two or more disks are created in a striped RAID set.  I've never heard of a single volume RAID0.  I'm surprised the RAID controller allows that.

    The single-volume RAID0 is not something I would use in production, but it has been used previously by some folks as a work-around when dealing with a RAID controller which will not present disks as JBOD (pass-through). This is a known issue with the PERC H710 from Dell (LSI Logic OEM - the 9266-8i, I believe) where the feature is disabled.

    Will be looking at using the LSI Logic 9207-8i for testing.

    Thanks a lot to both you and Claus for taking the time to reply to my post on a weekend.

    They aren't exactly the same. The hardware, yes, but the firmware is VERY different.


    Cheers,

    Anton Kolomyeytsev [MVP]

    StarWind Software Chief Architect


    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

    Friday, April 01, 2016 2:25 PM
  • Thank you VERY MUCH for chiming in. P.S. Nice forum handle :) I've always been partial to the VG30DETT myself.
    • Edited by Hunter DG Friday, April 01, 2016 5:36 PM
    Friday, April 01, 2016 5:33 PM
  • This is a big problem.  I'm sitting on 1.5 PB of disks here and I cannot use them, because the RAID controller refuses to pass anything other than "RAID" devices to the OS, no matter how I configure it.

    For the record it's an Avago MegaRAID 9361-8i.
    Thursday, August 04, 2016 8:51 PM
  • It is, and it will increase over the next weeks. You can still use your controller by telling the OS to accept RAID-bus disks for S2D if you set a cluster property: '(Get-Cluster).S2DBusTypes=0x100' ignores the fact that it's a RAID bus. But be warned that this is unsupported!
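    For anyone who wants to try it anyway, the sequence looks roughly like this (a sketch based only on what is described in this thread; S2DBusTypes is an undocumented cluster property and the whole thing is unsupported):

    # UNSUPPORTED workaround: tell the cluster to accept RAID-bus disks, then enable S2D.
    $cluster = Get-Cluster
    $cluster.S2DBusTypes = 0x100      # 0x100 = RAID bus type, per this thread
    $cluster.S2DBusTypes              # read it back to confirm the value stuck
    Enable-ClusterS2D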

    Regards, Michael

    Saturday, August 06, 2016 4:54 PM
  • Hello, you are right about 0x100, but it will only allow RAID disks, so maybe you want to try this:

    (Get-Cluster).S2DBusTypes=4294967295

    to select all bus types for S2D. But of course it's also unsupported!




    • Proposed as answer by OBitheJEDi Monday, August 08, 2016 1:04 PM
    • Edited by OBitheJEDi Monday, August 08, 2016 1:06 PM
    Monday, August 08, 2016 5:33 AM
  • I don't think you can mix buses - I tried yesterday with 3 servers, two RAID, one SAS. The Enable-ClusterS2D cmdlet's 'Surfacing Disks' step takes an hour!! But when you try to create a new vdisk, it fails.

    S2DBusTypes seems to be some bitmask, but until MS documents this property it will stay in the dark.
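    One possible reading - and this is only a guess, not anything documented - is that each bit corresponds to one of the Windows storage bus type numbers (RAID = 8, SAS = 10, SATA = 11, NVMe = 17), which would at least explain why 0x100 (bit 8) maps to RAID:

    # Speculative interpretation of S2DBusTypes as a bitmask over the bus type numbers.
    $raid = 1 -shl 8     # 0x100   - the value used earlier in this thread
    $sas  = 1 -shl 10    # 0x400
    $sata = 1 -shl 11    # 0x800
    $nvme = 1 -shl 17    # 0x20000
    '0x{0:X}' -f ($raid -bor $sas -bor $sata -bor $nvme)   # combine several bus types
    # 4294967295 (0xFFFFFFFF) simply sets every bit, i.e. "accept any bus type".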


    • Edited by train-IT Monday, August 08, 2016 1:29 PM
    Monday, August 08, 2016 1:27 PM
  • I don't think you can mix buses - I tried yesterday with 3 servers, two RAID, one SAS. The Enable-ClusterS2D cmdlet's 'Surfacing Disks' step takes an hour!! But when you try to create a new vdisk, it fails.

    S2DBusTypes seems to be some bitmask, but until MS documents this property it will stay in the dark.


    When I tried with (Get-Cluster).S2DBusTypes=4294967295, I could create a vdisk once, but it failed 3 times. But there is nothing more to say; it's unsupported at the moment.
    Tuesday, August 09, 2016 5:40 AM
  • Hi GT-CFG-COE,

    I'm currently testing and validating S2D in our lab environment and hit the same issue you mentioned.  I'm running a Dell R630, and Get-PhysicalDisk shows the disk BusType as RAID. Funko22 mentioned that the pre-TP4 workaround no longer works, so I went looking through the cluster properties and came across "S2DBusTypes".  Running:

    (Get-Cluster).S2DBusTypes=0x100 in place of

    (Get-Cluster).DASModeBusTypes=0x100

    has allowed me to enable S2D on my R630.

    I hope this helps. I imagine you've worked around this by now, but maybe it can help others :)

    Best regards,

    Fixx

    *****Just read the whole post and realised someone has already posted this - whoops*****

    • Edited by David (BSOL) Wednesday, August 10, 2016 3:05 PM Already covered
    Wednesday, August 10, 2016 2:58 PM
  • That can happen :)

    Anyway, I managed to create a 3-node S2D cluster with MIXED buses! Please accept that this is completely unsupported. I had a few timeouts and disk losses on the node with the SAS-attached disks, but they were connected via MPIO, which is also unsupported. I think that's the reason for the disk losses (there was no data loss because of the mirroring redundancy!).

    I tried killing one node; the rebuild is going to take a while, but it succeeds.

    If you want to try that yourself, it's important to create the vdisks with PowerShell (New-Volume); the GUI throws an error about a wrongly configured pool. It was possible to create three-way mirrored disks on just 3 nodes (my build was 14393; I don't think that was possible before?).

    Regards, Michael


    • Edited by train-IT Friday, August 12, 2016 6:58 AM
    Friday, August 12, 2016 6:57 AM
  • Once again I'm a bit late to this party, but it seems that either the problem has been corrected in Server 2016, or at least a workaround is now available.

    For comparison, I'm running multiple Dell R620s with the PERC H710P and Server 2016 Datacenter: RAID1 for the system disk, all other drives as single-disk RAID0, formatted and with no partitions. Initially I had EXACTLY the same problem as described here, and like Hunter DG I went through the same extended trial-and-error period attempting a resolution.

    Although the impression is given further down that this problem is resolved, I don't see an explicit procedure, so perhaps this will help others directed here in the hope of getting their RAID controllers to function with S2D.

    Simple (sic) fix: manual configuration of S2D. Assuming the cluster is already configured (my problem began at enabling Storage Spaces Direct), the solution from that point on is (okay, what worked for me):

    Enable-ClusterS2D -Autoconfig:0 -SkipEligibilityChecks
    $pd = Get-PhysicalDisk | ? CanPool -eq $true
    New-StoragePool -StorageSubSystemFriendlyName clus* -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisks $pd

    # This seems important and may have made the difference, but I'm NOT going back to try it again without this!
    # (400GB is the breakpoint between SSD and HDD on my systems - alter to suit)
    Get-StoragePool S2D* | Get-PhysicalDisk | where {$_.Size -lt 400GB} | Set-PhysicalDisk -MediaType SSD
    Get-StoragePool S2D* | Get-PhysicalDisk | where {$_.Size -gt 400GB} | Set-PhysicalDisk -MediaType HDD

    # Create storage tiers
    New-StorageTier -StoragePoolFriendlyName S2D -FriendlyName Cap -MediaType HDD -ResiliencySettingName Parity
    New-StorageTier -StoragePoolFriendlyName S2D -FriendlyName Per -MediaType SSD -ResiliencySettingName Mirror

    $poolname = Get-StoragePool S2D
    $perf = Get-StorageTier Per
    $capa = Get-StorageTier Cap

    New-Volume -StoragePool $poolname -FriendlyName vdisk1 -FileSystem CSV_REFS -StorageTiers $perf, $capa -StorageTierSizes 40GB, 200GB
    New-Volume -StoragePool $poolname -FriendlyName vdisk2 -FileSystem CSV_REFS -StorageTiers $perf, $capa -StorageTierSizes 40GB, 200GB
    # etc. (adjust -StorageTierSizes to suit)

    I did have some trouble with the Resiliency and PhysicalDiskRedundancy settings due to the initial number of nodes (I wasn't going to waste more time chasing a duff solution) - alter to suit your installation.

    At this point you should be able to use Failover Cluster Manager to manage your storage.
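    Before (or instead of) opening Failover Cluster Manager, a few quick PowerShell checks can confirm the pool, tiers and volumes came up as expected (a minimal sketch; the pool and tier names match the script above):

    Get-StoragePool S2D | Get-PhysicalDisk |
        Sort-Object Size | Format-Table FriendlyName, MediaType, Size -AutoSize
    Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName -AutoSize
    Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, HealthStatus, Size -AutoSize
    Get-ClusterSharedVolume | Format-Table Name, State -AutoSize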

    CAUTION: This is an unsupported workaround, no warranty explicit or implied, use at own risk. Do not use in a production environment.

    For reference please credit:-
    Dell Technical Note: Microsoft Storage Spaces Direct Deployment on Dell PowerEdge R730xd

    and
    TechNet ScriptCenter: Building Storage Spaces Direct hyper-converged Cluster using Nano Server in VMs by Michael Rueefli


    • Edited by MBoczek Monday, February 20, 2017 10:56 PM Error correction + usage
    Saturday, February 11, 2017 7:21 PM
  • "Although the impression is given further down that this problem is resolved, I don't see an explicit procedure, so perhaps this will help others directed here in the hope of getting their RAID controllers to function with S2D."

    Even though you provided a workaround that may have worked for you, the fact remains that this is a completely unsupported configuration.  You should not deploy this into any production environment.


    . : | : . : | : . tim

    Monday, February 13, 2017 2:17 PM
  • Point taken. I have edited the workaround to include a usage caution.
    Monday, February 20, 2017 11:00 PM
  • Here's what MS is missing.

    The product is designed more around large-scale redundant and distributed storage. However, there's a HUGE segment out there like me who would love nothing more than to deploy this on single servers for our lower I/O requirements like backup and archive space.

    The attraction is the tiered storage, with capabilities for NV or SSD read/write cache and a second tier of 7200rpm junk disks. With RAID, we can improve the underlying junk-disk performance somewhat, and also cache on NV disk with S2D, further improving on this. By not considering this layout, Microsoft is restricting a HUGE segment of the market who would otherwise buy the OS and use it. For instance, right now I have 4 storage servers I'm rebuilding. StarWind has been promising to fix their SSD cache option for 4 YEARS, and I've had it. I want more than anything to dump these jokers, so I loaded up a 2016 eval to test the waters. Guess what? I'm up against the same restriction as the others in this thread. It should be obvious from the number of people searching for a solution to this that the demand is high.

    I think this is a big oversight on the part of MS, not acknowledging the size of the potential deployments in this arena. So now I'm forced to go back to just using LSI CacheCade underneath an Openfiler OS, and I'll get what I get. I'm very let down, to be quite frank.

    Hopefully MS will see the potential and fix this in upcoming releases, but for now, I have to commit and move on while hanging my head in defeat.

    Monday, July 31, 2017 6:02 PM
  • Simply use Storage Spaces on a standalone system to achieve what you are describing.  Storage Spaces Direct is a specific technology designed to create a highly available solution using local storage.  Storage Spaces, introduced in Windows Server 2012, provides the tiered storage capabilities you are describing. Since S2D is a highly available solution, it does not make much sense to have a single-server installation, because that has a single point of failure.
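    For what it's worth, a standalone tiered setup along those lines is only a handful of cmdlets (a rough sketch with made-up pool/tier names and sizes; it assumes the disks already report a sensible MediaType):

    # Standalone (non-clustered) tiered Storage Spaces on a single server.
    $disks = Get-PhysicalDisk | Where-Object CanPool -eq $true
    New-StoragePool -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName Pool1 -PhysicalDisks $disks

    $ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

    # One mirrored, tiered virtual disk: small fast tier plus a large capacity tier.
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Data -ResiliencySettingName Mirror -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB -WriteCacheSize 1GB
    Get-VirtualDisk Data | Get-Disk | Initialize-Disk -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem ReFS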

    tim

    Tuesday, August 01, 2017 2:01 PM