Storage Spaces: How come a virtual disk's (thinly provisioned with mirrored resiliency) footprint on the pool can remain high even though the volume was deduplicated and all potential purgeable slabs/free space have been consolidated?

  • Question

  • How come a virtual disk's (thinly provisioned with mirrored resiliency) footprint on the pool can remain high even though the volume was deduplicated and all potential purgeable slabs/free space have been consolidated?

    Optimize-Volume reports a lot of free space. The volume was retrimmed and its slabs consolidated. On the deduplication side there are no errors (scrubbing), and the freed space was cleaned up there as well, i.e. garbage collection. So how come the footprint on the pool does not change in the Storage Spaces manager, even after free space was trimmed and consolidated by Optimize-Volume?
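    For reference, this is roughly the sequence I run (drive letter Z: as an example; all of these are in-box cmdlets):

    # Dedup maintenance first, so that chunks freed by dedup are actually released
    Start-DedupJob -Volume Z: -Type GarbageCollection
    Start-DedupJob -Volume Z: -Type Scrubbing

    # Then consolidate slabs and send the trim/unmap hints down to the Space
    Optimize-Volume -DriveLetter Z -ReTrim -SlabConsolidate -Verbose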

    BTW, I just changed some disks, but around February I dealt with the same problem and made it work: thinly provisioned mirrored storage with deduplicated volumes. But it's time to pop the question of how to reclaim that unused space on the volume and virtual disk back to the pool.




    Wednesday, September 18, 2013 1:38 PM

All replies

  • Could you share the output of your 

    optimize-volume -retrim -verbose -slabconsolidate

    and

    get-virtualdisk | fl
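
    If the full listing is too noisy, the thin-provisioning related counters can also be pulled out on their own (property names as reported by Get-VirtualDisk):

    get-virtualdisk | fl FriendlyName, ProvisioningType, ResiliencySettingName, NumberOfDataCopies, Size, AllocatedSize, FootprintOnPool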

    Thursday, September 19, 2013 12:10 AM
  • Hi. Sorry about the late reply.

    Here is the output of those two commands for volume Z.

    Please note the Storage Space is on the host, and the volume is passed through as a disk to a VM acting as a file server.

    VM environment > PowerShell:

    optimize-volume -retrim -verbose -slabconsolidate
    DriveLetter[0]: z
    DriveLetter[1]:
    VERBOSE: Invoking slab consolidation on SysMgmt (Z:)...
    VERBOSE: Slab Analysis:  0% complete...
    VERBOSE: Slab Analysis:  100% complete.
    VERBOSE: Slab Enumeration:  2% complete...
    VERBOSE: Slab Enumeration:  3% complete...
    VERBOSE: Slab Enumeration:  5% complete...
    VERBOSE: Slab Enumeration:  6% complete...
    VERBOSE: Slab Enumeration:  7% complete...
    VERBOSE: Slab Enumeration:  8% complete...
    VERBOSE: Slab Enumeration:  10% complete...
    VERBOSE: Slab Enumeration:  11% complete...
    VERBOSE: Slab Enumeration:  12% complete...
    VERBOSE: Slab Enumeration:  13% complete...
    VERBOSE: Slab Enumeration:  15% complete...
    VERBOSE: Slab Enumeration:  16% complete...
    VERBOSE: Slab Enumeration:  17% complete...
    VERBOSE: Slab Enumeration:  18% complete...
    VERBOSE: Slab Enumeration:  20% complete...
    VERBOSE: Slab Enumeration:  21% complete...
    VERBOSE: Slab Enumeration:  22% complete...
    VERBOSE: Slab Enumeration:  23% complete...
    VERBOSE: Slab Enumeration:  25% complete...
    VERBOSE: Slab Enumeration:  26% complete...
    VERBOSE: Slab Enumeration:  28% complete...
    VERBOSE: Slab Enumeration:  30% complete...
    VERBOSE: Slab Enumeration:  32% complete...
    VERBOSE: Slab Enumeration:  33% complete...
    VERBOSE: Slab Enumeration:  34% complete...
    VERBOSE: Slab Enumeration:  36% complete...
    VERBOSE: Slab Enumeration:  37% complete...
    VERBOSE: Slab Enumeration:  38% complete...
    VERBOSE: Slab Enumeration:  39% complete...
    VERBOSE: Slab Enumeration:  40% complete...
    VERBOSE: Slab Enumeration:  44% complete...
    VERBOSE: Slab Enumeration:  45% complete...
    VERBOSE: Slab Enumeration:  46% complete...
    VERBOSE: Slab Enumeration:  47% complete...
    VERBOSE: Slab Enumeration:  49% complete...
    VERBOSE: Slab Enumeration:  50% complete...
    VERBOSE: Slab Enumeration:  51% complete...
    VERBOSE: Slab Enumeration:  52% complete...
    VERBOSE: Slab Enumeration:  54% complete...
    VERBOSE: Slab Enumeration:  55% complete...
    VERBOSE: Slab Enumeration:  56% complete...
    VERBOSE: Slab Enumeration:  58% complete...
    VERBOSE: Slab Enumeration:  59% complete...
    VERBOSE: Slab Enumeration:  61% complete...
    VERBOSE: Slab Enumeration:  62% complete...
    VERBOSE: Slab Enumeration:  63% complete...
    VERBOSE: Slab Enumeration:  64% complete...
    VERBOSE: Slab Enumeration:  66% complete...
    VERBOSE: Slab Enumeration:  67% complete...
    VERBOSE: Slab Enumeration:  68% complete...
    VERBOSE: Slab Enumeration:  69% complete...
    VERBOSE: Slab Enumeration:  70% complete...
    VERBOSE: Slab Enumeration:  71% complete...
    VERBOSE: Slab Enumeration:  72% complete...
    VERBOSE: Slab Enumeration:  74% complete...
    VERBOSE: Slab Enumeration:  76% complete...
    VERBOSE: Slab Enumeration:  77% complete...
    VERBOSE: Slab Enumeration:  78% complete...
    VERBOSE: Slab Enumeration:  79% complete...
    VERBOSE: Slab Enumeration:  80% complete...
    VERBOSE: Slab Enumeration:  82% complete...
    VERBOSE: Slab Enumeration:  83% complete...
    VERBOSE: Slab Enumeration:  84% complete...
    VERBOSE: Slab Enumeration:  85% complete...
    VERBOSE: Slab Enumeration:  86% complete...
    VERBOSE: Slab Enumeration:  88% complete...
    VERBOSE: Slab Enumeration:  89% complete...
    VERBOSE: Slab Enumeration:  90% complete...
    VERBOSE: Slab Enumeration:  91% complete...
    VERBOSE: Slab Enumeration:  93% complete...
    VERBOSE: Slab Enumeration:  94% complete...
    VERBOSE: Slab Enumeration:  95% complete...
    VERBOSE: Slab Enumeration:  96% complete...
    VERBOSE: Slab Enumeration:  97% complete...
    VERBOSE: Slab Enumeration:  98% complete...
    VERBOSE: Slab Enumeration:  100% complete.
    VERBOSE: Slab Consolidation:  0% complete...
    VERBOSE: Slab Consolidation:  2% complete...
    VERBOSE: Slab Consolidation:  5% complete...
    VERBOSE: Slab Consolidation:  8% complete...
    VERBOSE: Slab Consolidation:  10% complete...
    VERBOSE: Slab Consolidation:  13% complete...
    VERBOSE: Slab Consolidation:  16% complete...
    VERBOSE: Slab Consolidation:  18% complete...
    VERBOSE: Slab Consolidation:  21% complete...
    VERBOSE: Slab Consolidation:  24% complete...
    VERBOSE: Slab Consolidation:  27% complete...
    VERBOSE: Slab Consolidation:  29% complete...
    VERBOSE: Slab Consolidation:  32% complete...
    VERBOSE: Slab Consolidation:  35% complete...
    VERBOSE: Slab Consolidation:  37% complete...
    VERBOSE: Slab Consolidation:  40% complete...
    VERBOSE: Slab Consolidation:  43% complete...
    VERBOSE: Slab Consolidation:  45% complete...
    VERBOSE: Slab Consolidation:  48% complete...
    VERBOSE: Slab Consolidation:  51% complete...
    VERBOSE: Slab Consolidation:  54% complete...
    VERBOSE: Slab Consolidation:  56% complete...
    VERBOSE: Slab Consolidation:  59% complete...
    VERBOSE: Slab Consolidation:  62% complete...
    VERBOSE: Slab Consolidation:  64% complete...
    VERBOSE: Slab Consolidation:  67% complete...
    VERBOSE: Slab Consolidation:  70% complete...
    VERBOSE: Slab Consolidation:  72% complete...
    VERBOSE: Slab Consolidation:  75% complete...
    VERBOSE: Slab Consolidation:  78% complete...
    VERBOSE: Slab Consolidation:  81% complete...
    VERBOSE: Slab Consolidation:  83% complete...
    VERBOSE: Slab Consolidation:  86% complete...
    VERBOSE: Slab Consolidation:  89% complete...
    VERBOSE: Slab Consolidation:  91% complete...
    VERBOSE: Slab Consolidation:  94% complete...
    VERBOSE: Slab Consolidation:  97% complete...
    VERBOSE: Slab Consolidation:  100% complete.
    VERBOSE: Performing pass 1:
    VERBOSE: Retrim:  0% complete...
    VERBOSE: Retrim:  6% complete...
    VERBOSE: Retrim:  16% complete...
    VERBOSE: Retrim:  21% complete...
    VERBOSE: Retrim:  23% complete...
    VERBOSE: Retrim:  24% complete...
    VERBOSE: Retrim:  26% complete...
    VERBOSE: Retrim:  27% complete...
    VERBOSE: Retrim:  36% complete...
    VERBOSE: Retrim:  40% complete...
    VERBOSE: Retrim:  41% complete...
    VERBOSE: Retrim:  42% complete...
    VERBOSE: Retrim:  44% complete...
    VERBOSE: Retrim:  46% complete...
    VERBOSE: Retrim:  47% complete...
    VERBOSE: Retrim:  49% complete...
    VERBOSE: Retrim:  50% complete...
    VERBOSE: Retrim:  51% complete...
    VERBOSE: Retrim:  52% complete...
    VERBOSE: Retrim:  53% complete...
    VERBOSE: Retrim:  54% complete...
    VERBOSE: Retrim:  55% complete...
    VERBOSE: Retrim:  57% complete...
    VERBOSE: Retrim:  58% complete...
    VERBOSE: Retrim:  59% complete...
    VERBOSE: Retrim:  60% complete...
    VERBOSE: Retrim:  62% complete...
    VERBOSE: Retrim:  63% complete...
    VERBOSE: Retrim:  64% complete...
    VERBOSE: Retrim:  65% complete...
    VERBOSE: Retrim:  66% complete...
    VERBOSE: Retrim:  67% complete...
    VERBOSE: Retrim:  68% complete...
    VERBOSE: Retrim:  71% complete...
    VERBOSE: Retrim:  72% complete...
    VERBOSE: Retrim:  73% complete...
    VERBOSE: Retrim:  74% complete...
    VERBOSE: Retrim:  75% complete...
    VERBOSE: Retrim:  76% complete...
    VERBOSE: Retrim:  77% complete...
    VERBOSE: Retrim:  93% complete...
    VERBOSE: Retrim:  95% complete...
    VERBOSE: Retrim:  97% complete...
    VERBOSE: Retrim:  98% complete...
    VERBOSE: Retrim:  99% complete...
    VERBOSE: Retrim:  100% complete.
    VERBOSE:
    Post Defragmentation Report:
    VERBOSE:
     Volume Information:
    VERBOSE:   Volume size                 = 969,87 GB
    VERBOSE:   Cluster size                = 4 KB
    VERBOSE:   Used space                  = 392,06 GB
    VERBOSE:   Free space                  = 577,80 GB
    VERBOSE:
     Allocation Units:
    VERBOSE:   Slab count                  = 3879
    VERBOSE:   Slab size                   = 256 MB
    VERBOSE:   Slab alignment              = 127,00 MB
    VERBOSE:   In-use slabs                = 1813
    VERBOSE:
     Slab Consolidation:
    VERBOSE:   Space efficiency            = 86%
    VERBOSE:   Potential purgable slabs    = 46
    VERBOSE:   Slabs pinned unmovable      = 13
    VERBOSE:   Successfully purged slabs   = 41
    VERBOSE:   Recovered space             = 10,25 GB
    VERBOSE:
     Retrim:
    VERBOSE:   Backed allocations          = 3313
    VERBOSE:   Allocations trimmed         = 1505
    VERBOSE:   Total space trimmed         = 376,24 GB

    Host environment > PowerShell:

    get-virtualdisk | fl

    ... (here is the relevant output for the virtual disk backing the volume above) ...

    ObjectId                          : {89744da1-1389-11e3-9464-0015172eea2f}
    PassThroughClass                  :
    PassThroughIds                    :
    PassThroughNamespace              :
    PassThroughServer                 :
    UniqueId                          : A14D74898913E31194640015172EEA2F
    Access                            : Read/Write
    AllocatedSize                     : 889595101184
    DetachedReason                    : None
    FootprintOnPool                   : 1779190202368
    FriendlyName                      : Workspaces: SysMgmt
    HealthStatus                      : Healthy
    Interleave                        : 262144
    IsDeduplicationEnabled            : False
    IsEnclosureAware                  : False
    IsManualAttach                    : False
    IsSnapshot                        : False
    LogicalSectorSize                 : 512
    Name                              :
    NameFormat                        :
    NumberOfAvailableCopies           : 0
    NumberOfColumns                   : 1
    NumberOfDataCopies                : 2
    OperationalStatus                 : OK
    OtherOperationalStatusDescription :
    OtherUsageDescription             :
    ParityLayout                      :
    PhysicalDiskRedundancy            : 1
    PhysicalSectorSize                : 4096
    ProvisioningType                  : Thin
    RequestNoSinglePointOfFailure     : True
    ResiliencySettingName             : Mirror
    Size                              : 1041529569280
    UniqueIdFormat                    : Vendor Specific
    UniqueIdFormatDescription         :
    Usage                             : Other
    PSComputerName                    :

    ...
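
    One relation that does add up: FootprintOnPool is exactly twice AllocatedSize, which matches the two data copies of the mirror. A quick sanity check in PowerShell, using the numbers from the output above:

    1779190202368 / 889595101184   # FootprintOnPool / AllocatedSize = 2, i.e. NumberOfDataCopies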

    BTW

    Screenshots from the storage manager are below (with the extra info beyond volume Z erased).

    Fig: VM

    Fig: Host






    Thursday, October 3, 2013 1:59 AM
  • Just tried this again, although I had already done it before while trying to figure this out ...

    start-dedupjob -volume z: -type garbagecollection

    Now, after that, running optimize-volume -retrim -verbose -slabconsolidate reports that slab consolidation was skipped because there were few evictable slabs.

    optimize-volume -retrim -verbose -slabconsolidate

    cmdlet Optimize-Volume at command pipeline position 1
    Supply values for the following parameters:
    DriveLetter[0]: z
    DriveLetter[1]:
    VERBOSE: Invoking slab consolidation on SysMgmt (Z:)...
    VERBOSE: Slab Analysis:  0% complete...
    VERBOSE: Slab Analysis:  100% complete.
    VERBOSE: Performing pass 1:
    VERBOSE: Retrim:  22% complete...
    VERBOSE:
     Slab consolidation was skipped because there were few evictable slabs.
    VERBOSE: Retrim:  24% complete...
    VERBOSE: Retrim:  25% complete...
    VERBOSE: Retrim:  26% complete...
    VERBOSE: Retrim:  27% complete...
    VERBOSE: Retrim:  29% complete...
    VERBOSE: Retrim:  32% complete...
    VERBOSE: Retrim:  36% complete...
    VERBOSE: Retrim:  39% complete...
    VERBOSE: Retrim:  41% complete...
    VERBOSE: Retrim:  42% complete...
    VERBOSE: Retrim:  43% complete...
    VERBOSE: Retrim:  46% complete...
    VERBOSE: Retrim:  47% complete...
    VERBOSE: Retrim:  48% complete...
    VERBOSE: Retrim:  49% complete...
    VERBOSE: Retrim:  50% complete...
    VERBOSE: Retrim:  51% complete...
    VERBOSE: Retrim:  52% complete...
    VERBOSE: Retrim:  53% complete...
    VERBOSE: Retrim:  54% complete...
    VERBOSE: Retrim:  55% complete...
    VERBOSE: Retrim:  56% complete...
    VERBOSE: Retrim:  57% complete...
    VERBOSE: Retrim:  58% complete...
    VERBOSE: Retrim:  59% complete...
    VERBOSE: Retrim:  61% complete...
    VERBOSE: Retrim:  62% complete...
    VERBOSE: Retrim:  63% complete...
    VERBOSE: Retrim:  64% complete...
    VERBOSE: Retrim:  65% complete...
    VERBOSE: Retrim:  66% complete...
    VERBOSE: Retrim:  67% complete...
    VERBOSE: Retrim:  68% complete...
    VERBOSE: Retrim:  69% complete...
    VERBOSE: Retrim:  70% complete...
    VERBOSE: Retrim:  71% complete...
    VERBOSE: Retrim:  72% complete...
    VERBOSE: Retrim:  73% complete...
    VERBOSE: Retrim:  74% complete...
    VERBOSE: Retrim:  75% complete...
    VERBOSE: Retrim:  76% complete...
    VERBOSE: Retrim:  77% complete...
    VERBOSE: Retrim:  78% complete...
    VERBOSE: Retrim:  94% complete...
    VERBOSE: Retrim:  95% complete...
    VERBOSE: Retrim:  97% complete...
    VERBOSE: Retrim:  98% complete...
    VERBOSE: Retrim:  99% complete...
    VERBOSE: Retrim:  100% complete.
    VERBOSE:
    Post Defragmentation Report:
    VERBOSE:
     Volume Information:
    VERBOSE:   Volume size                 = 969,87 GB
    VERBOSE:   Cluster size                = 4 KB
    VERBOSE:   Used space                  = 392,49 GB
    VERBOSE:   Free space                  = 577,37 GB
    VERBOSE:
     Allocation Units:
    VERBOSE:   Slab count                  = 3879
    VERBOSE:   Slab size                   = 256 MB
    VERBOSE:   Slab alignment              = 127,00 MB
    VERBOSE:   In-use slabs                = 1812
    VERBOSE:
     Slab Consolidation:
    VERBOSE:   Space efficiency            = 86%
    VERBOSE:   Potential purgable slabs    = 17
    VERBOSE:   Slabs pinned unmovable      = 0
    VERBOSE:   Successfully purged slabs   = 0
    VERBOSE:   Recovered space             = 0 bytes
    VERBOSE:
     Retrim:
    VERBOSE:   Backed allocations          = 3313
    VERBOSE:   Allocations trimmed         = 1506
    VERBOSE:   Total space trimmed         = 376,49 GB
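
    For what it's worth, the dedup side can be sanity-checked after the garbage collection run with the standard dedup cmdlets (look at the free/saved space figures and the last garbage collection fields):

    Get-DedupJob                       # should come back empty once the GarbageCollection job has finished
    Get-DedupStatus -Volume Z: | fl    # FreeSpace, SavedSpace and the LastGarbageCollection* fields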




    Thursday, October 3, 2013 2:05 AM
  • So I guess these are my points and questions:

    1) Why does the FootprintOnPool remain the same?

    In the first run, optimize-volume -retrim -verbose -slabconsolidate reported at the very end of its output:

    VERBOSE:   Allocations trimmed         = 1505
    VERBOSE:   Total space trimmed         = 376,24 GB

    However, no space was reclaimed on the pool? The FootprintOnPool remains the same? How come? (A back-of-the-envelope comparison of the numbers is at the end of this post.)

    2) What does "there were few evictable slabs" mean?

    So, like me, you would probably go and run the dedup garbage collection, to make sure that the deduplicated and now potentially free space really was released and is standing by for trimming. Just to make sure.

    So now you run optimize-volume -retrim -verbose -slabconsolidate a second time, and this time, after the dedup garbage collection has been rerun on that volume, the command comes out with something different, i.e. "there were few evictable slabs"?

    Two different results from doing almost the same thing, optimizing volume Z? Or were they two genuinely different runs, with the first run completing correctly and retrimming space, although the storage manager does not reflect any change to the allocated space?

    Anyway, what can the explanation be for "there were few evictable slabs"? What does that statement mean, more or less precisely (practically speaking)?

    :)
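
    A back-of-the-envelope comparison of the two views, using the 256 MB slab size and the numbers from the posts above:

    1813 * 256MB / 1GB     # in-use slabs (NTFS view inside the VM) = ~453 GB actually needed by the file system
    889595101184 / 256MB   # AllocatedSize on the pool = 3314 slabs, matching "Backed allocations = 3313"
    1505 * 256MB / 1GB     # "Allocations trimmed = 1505" = ~376 GB, i.e. the "Total space trimmed = 376,24 GB"

    So the trim hints cover pretty much exactly the gap between what NTFS needs and what the Space has backed - but AllocatedSize (and with it FootprintOnPool = 2 x AllocatedSize for the mirror) does not shrink, as if the unmap never reaches the pool.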





    Thursday, October 3, 2013 2:20 AM
  • While waiting to see if you have a solution for my problem here (hopefully you post back :o) ), I just dug out a note from February 14th, 2013, from when I was testing Storage Spaces. I'll try to follow the statements I made back then in the meantime, although I will have a hard time interpreting the precise actions in that note now, since I did not write down the order of the actions I took :o( . There was a blogger writing about reclaiming space at the time who had retrieved some points from you, and I was trying to follow that when I had this same problem, which I am now trying to re-research in all these posts.

    Anyway, as already indicated in my first post above:

    BTW, I just changed some disks, but around February I dealt with the same problem and made it work: thinly provisioned mirrored storage with deduplicated volumes. But it's time to pop the question of how to reclaim that unused space on the volume and virtual disk back to the pool.

    and

    Please note the Storage Space is on the host, and the volume is passed through as a disk to a VM acting as a file server.

    ... I already had this working after February 2013, but have now switched some disks.

    (Update: consider the text above struck out - see point 2) in a post below.)

    I have a note, apparently never posted as feedback to MS, from the roughly 1½ weeks I spent testing Storage Spaces in Windows. It's dated February 14th, 2013 and titled: "Several interactions between Storage Spaces and Disk Manager mess up - Extending a SS virtual disk in Disk Manager can mess up reclaiming slabs in SS".

    At the top of that note I wrote:

    Shortened:
    Baseline experience here:
    Shrinking and re-extending a volume in Disk Manager (created on a SS virtual disk), and then defragmenting, makes the reclaim work in the sense that the allocated space for that virtual disk in SS now matches the disk use reported by optimize-volume.
    But not in terms of slab use - there is an inconsistency between 57 GB of disk use and consolidated slabs at 332 x 256 MB, i.e. around 83 GB. Still strange to me: optimize-volume still reports slab use of around 332 x 256 MB = ~83 GB, while the allocated space for the virtual disk in SS is just 61,5 GB now.

    Somehow it is all a very inconsistent experience. It would be nice to have some equivalence, i.e. to be able to relate those numbers across the system.

    When those numbers do not match, i.e. when a disk drive reports different sizes and the math does not "seem" to add up, you start worrying.

    You (Microsoft) should post a paper or something that does the math with equivalence relations, so we can check whether anything is not working correctly.






    Friday, October 4, 2013 3:00 AM
  • Can you check if you have any volume shadow copies that are consuming space on this volume?

    vssadmin list shadows

    The trimmed allocations are from a file system perspective, and shadow copies underneath could prevent the slab from being unmapped, which could be the reason why your "footprint on pool" is not reducing despite the optimize.

    Here is another thread where volume shadow copies were accounting for the "hidden capacity"
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/6546480e-54d7-4b73-b89c-d6025423f998/storage-spaces-queryissue-using-server-2012-essentails?forum=winserveressentials#70532665-ab2e-487f-aa5a-c606be15ce31
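
    If it helps, both the shadow copies and the space reserved for them can be scoped to the volume in question (standard vssadmin switches; substitute the actual drive letter):

    vssadmin list shadows /for=Z:
    vssadmin list shadowstorage /for=Z: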

    Tuesday, October 8, 2013 12:58 AM
  • Just issued the command vssadmin list shadows and also rechecked using the UI, i.e. Computer Management > Shares > All Tasks > Configure Shadow Copies.

    Shadow copies are not enabled on the volume in question. On that VM, shadow copies are in fact only enabled on two volumes, one of which is the system volume.

    BTW, the problem does not seem to be on the VM, since the free space is actually listed correctly there. It seems to be on the host (where the storage pool for those virtual disks used as pass-through disks in the VM resides). On the host, the space remains consumed by the (thinly provisioned) pool. But on the VM, i.e. where shadow copies could have been configured, the space is listed as free.

    Thursday, October 10, 2013 10:34 PM
  • I have a practical solution that will unmap the slabs and can work for me for now - but there is still the problem that the slabs are not unmapped automatically.

    ...

    I have been testing a bit more. It seems like reclaiming the dead space on the back-end (the host and storage provider) does not work - only on the front-end (the VM) do things work.

    It's like the unmap hints for slabs do not reach the back-end, i.e. the host providing the HDD storage? (If that is what is happening when using hard drives rather than VHDX?)

    1) Moving a volume to the back-end works as a temporary solution; afterwards you can move it back. What makes this temporary solution acceptable in my case is the practical way dedup and thin provisioning are used: practically speaking, this is a low-frequency event.

    So it works "manually": if I remove the drive from the VM and bring it online on the host (i.e. Host > Hyper-V > MyVM > MyDisk > Remove, then Host > Disk Management > MyDisk > Online) and run optimize-volume there (PowerShell > Optimize-Volume -DriveLetter MyDisk -ReTrim -SlabConsolidate -Verbose), then space is reclaimed on the storage pool in the back-end. Then I can simply, as a practical solution (***), do Host > Disk Management > MyDisk > Offline, then Host > Hyper-V > MyVM > MyDisk > (re)Add. (A rough PowerShell sketch of this round trip is at the end of this post.)

    In the VM, PowerShell > Optimize-Volume -DriveLetter MyDisk -ReTrim -SlabConsolidate -Verbose would just say this thing about too few evictable slabs? And on the back-end the footprint remained?

    Regarding the reference to a practical solution (***): during the almost one year I've been using Storage Spaces and dedup, I've learned this is only something I do to initially (re)pack a volume, i.e. maybe once a year or on storage migrations - in this case because I was switching to a set of fewer but bigger disks. So you are not wasting time on deduplication or optimize-drives all year round. You could probably live with that, if you remember what to do roughly once a year, since there are problems with the automatics. The point is that you tend not to remember; I just had to spend some time over a few weeks to repack the volumes again. BTW, I did not come up with this myself, but I also consider it to be my own experience from using dedup for about a year, after having reflected on this article: http://www.techrepublic.com/blog/datacenter/dont-waste-time-reclaiming-space-on-thin-provisioned-vms/4718

    But this works - as a temp solution.

    2) Is (dead) space reclamation not supposed to work on the back-end if we're using plain HDDs, rather than SSDs, RAID or some other storage array?

    Maybe I just did 1) the last time as well - no fiddling with shrinking, extending, defragging or whatever to try to trigger the slab unmaps (unmapping slabs does not seem to work perfectly when using hard disk drives directly in the storage pool, i.e. not VHD(X), RAID or anything else).

    I tried looking into what it means to do thick provisioning on the front-end, i.e. the VM (guest), and thin provisioning on the host (providing the storage device). In this new case I am really not using VHDX, Intel RST or similar, but trying to gain from the flexibility of running directly on the hard drives - i.e. simple, i.e. mirror - and without the risk of potential VHDX corruption. Sort of pure Storage Spaces.

    So maybe I was using Intel RST initially, and Intel provided the hardware device requirements (http://technet.microsoft.com/en-us/library/jj674351.aspx), i.e. I am wondering if this is about the hard disks themselves (when not using a layer of RAID in front of the hard drives, which I did use before) - i.e. whether that is why the unmapping of dead space is not working on the pool in the back-end now?
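
    For reference, a rough PowerShell sketch of the round trip from point 1) above. The VM name, controller location, disk number and host-side drive letter are placeholders and depend on the setup:

    # On the host: detach the pass-through disk from the VM and bring it online
    Remove-VMHardDiskDrive -VMName MyVM -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
    Set-Disk -Number 5 -IsOffline $false

    # Run the optimize on the host so the unmap hints go straight to the pool
    Optimize-Volume -DriveLetter Z -ReTrim -SlabConsolidate -Verbose

    # Offline the disk again and hand it back to the VM as a pass-through disk
    Set-Disk -Number 5 -IsOffline $true
    Add-VMHardDiskDrive -VMName MyVM -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -DiskNumber 5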






    Friday, October 11, 2013 3:19 PM