Storage Spaces Dilemma - what to do to an existing pool to increase performance

  • Question

  • Dear All

    I have the following dilemma.

I have a 24TB pool running single parity on WSE 2016. It comprises circa 12 JBOD HDDs. The pool has been doing its job for a couple of years. I migrated from WSE 2012, and before that from Home Server 2011 and Home Server.

I have been reading how the addition of two SSDs as journal drives enhances write speeds - which haven't been great, but acceptable. My server has 16GB of RAM, and when transferring large files, say movies up to 12GB, RAM was used as a write cache. However, considering I haven't been doing anything with the server for years, I decided to entertain the idea of installing two additional SSDs and assigning them as dedicated journal drives.

I followed an example found on the DataON blog, with some additional PowerShell commands to assign disks as SSDs.

The odd thing started happening when I completed the process. When I started moving a large file (a 22GB 4K movie) from my PC to the server, the speed was a solid 1Gbps until the copying got stuck at 44%. I checked from the server side and can confirm that the RAM was utilised as a cache, but not the SSDs. What was worse, the copying never finished; it just timed out. I tried a couple of times, with no success.

After extended research, I found that the maximum cache space utilised by an existing pool when adding an SSD is 1GB. You cannot increase the cache size for an existing pool. This is the case for me, as both of my newly installed SSDs showed 0.6% utilisation in the Storage Spaces GUI.

To the question:

    What to do?

- Do I start detaching disks from my existing pool and create a new pool with the SSDs, gradually moving data across?

- Do I forget about incorporating the SSDs at all? One could say, for home use, why bother.

- I would love to type some magic commands into PowerShell that would see my SSDs utilised a bit more (100GB is the limit for the cache).

I had some bad experiences with moving data over, and when I think of at least a week or two of moving data, I just lose hope. I will not last worrying that long. Is there a more convenient way of addressing my issue?

Looking forward to any tip or advice that could see my server performing a bit faster and those new SSDs paying for themselves.


    Monday, March 11, 2019 1:01 PM


All replies

  • Hi,

    This is a quick note to let you know that I am currently performing research on this issue and will get back to you as soon as possible. I appreciate your patience.

    If you have any updates during this process, please feel free to let me know.

    Best regards,


Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact

    Tuesday, March 12, 2019 1:09 PM
• Michael, thank you for looking into this.

While I am waiting, I decided to offload a couple of disks to create a new pool with parity. I assigned two SSDs as journal drives.

I noted only 1GB is being reported as used. I know that this space is the write-back cache (WBC).

I started moving data from the old storage pool to the new one (with journal drives) and noticed a brief increase in write speeds, but after a moment it went back to normal.

Now the question: am I confusing the WBC with journal drive functions?

If only a 1GB WBC is being used out of a 120GB SSD, what happens with the remaining 119GB? Does it just sit there doing nothing?

Or is the WBC being utilised for parity functions and the 119GB used as a write buffer?

If the latter is the case, why don't I see any increase in write speed while the buffer is being filled?

    thank you

    Tuesday, March 12, 2019 11:32 PM
  • Hi,

    Thanks for your detailed information.

From your post, your Storage Spaces Direct (S2D) setup is a hybrid deployment, which aims to balance performance and capacity (or to maximize capacity) and includes rotational hard disk drives (HDDs).

    When caching for hard disk drives (HDDs) in the hybrid deployment, both reads and writes are cached, to provide flash-like latency (often ~10x better) for both. The read cache stores recently and frequently read data for fast access and to minimize random traffic to the HDDs.


Cache drives | Capacity drives | Cache behavior (default)
-------------|-----------------|-------------------------
SSD          | HDD             | Read + Write

    The cache should be sized to accommodate the working set (the data being actively read or written at any given time) of your applications and workloads.

    This is especially important in hybrid deployments with hard disk drives. If the active working set exceeds the size of the cache, or if the active working set drifts too quickly, read cache misses will increase and writes will need to be de-staged more aggressively, hurting overall performance.

    You can use the built-in Performance Monitor (PerfMon.exe) utility in Windows to inspect the rate of cache misses. Specifically, you can compare the Cache Miss Reads/sec from the Cluster Storage Hybrid Disk counter set to the overall read IOPS of your deployment. Each "Hybrid Disk" corresponds to one capacity drive.
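If it helps, here is a sketch of pulling that counter with PowerShell (the counter set only exists on S2D cluster nodes; instance names depend on your deployment):

```powershell
# Sample hybrid-disk cache misses for 60 seconds (12 samples, 5s apart).
# Compare the values against the overall read IOPS of your deployment.
Get-Counter -Counter "\Cluster Storage Hybrid Disk(*)\Cache Miss Reads/sec" `
            -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples | Select-Object InstanceName, CookedValue
    }
```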

We can also set the cache behavior to suit the workload. To override the default behavior, use the Set-ClusterS2D cmdlet and its -CacheModeSSD and -CacheModeHDD parameters. The CacheModeSSD parameter sets the cache behavior when caching for solid-state drives. The CacheModeHDD parameter sets the cache behavior when caching for hard disk drives. This can be done at any time after Storage Spaces Direct is enabled.
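For example (a sketch; run from an elevated prompt on an S2D node - Set-ClusterS2D is an alias for Set-ClusterStorageSpacesDirect):

```powershell
# Inspect the current cache configuration.
Get-ClusterStorageSpacesDirect

# Cache only writes for the HDD capacity drives
# (the default for SSD + HDD is read + write caching).
Set-ClusterStorageSpacesDirect -CacheModeHDD WriteOnly
```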

    Reference link:

Hope the above information helps. If you have any question or concern, please feel free to let me know.

    Best regards,



    Wednesday, March 13, 2019 5:37 AM
  • Hi,

Additionally, for the question:

>>> what happens with the remaining 119GB? Does it just sit there doing nothing? Or is the WBC utilised for parity functions and the 119GB used as a write buffer?

I think the remaining 119GB is used as read/write caching for the hard disk drives, as mentioned above.

    Please feel free to let me know if you need further assistance.

    Best regards,



    Wednesday, March 13, 2019 7:48 AM
  • Hi Michael

I came across the article you mentioned; however, I thought it would not be applicable to my case, as I only have a single server (isn't S2D applicable only with a minimum of two servers?).

Since I use the server as an archive repository (mostly photos and 4K videos), I thought I didn't need a read cache; hence my attention was drawn to the journal drive solution.

My situation is a single server with 12 HDDs in parity and two SSDs.

Please forgive my lack of knowledge here, but I think I have been bypassing virtual disk creation - this may be the cause of the journal drives not being utilised.

    Here are the steps I use to create my pool.

    1. I created the pool using GUI - parity space

    2. I added the journal drives using PowerShell

- Get-PhysicalDisk -CanPool $True

- $PDToAdd = Get-PhysicalDisk -CanPool $True

- Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $PDToAdd -Usage Journal

3. As the drive was already created and the letter assigned, I skipped the step of creating a virtual disk on the pool. I considered it unnecessary, as I use the pool for a Windows Server Essentials folder and I was adding disks as and when required to expand the pool.

Could this be the reason? Why the need for a virtual disk sitting on the pool? Will I be able to easily expand the VD capacity?
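On the expansion point, from what I can tell a thinly provisioned virtual disk can be grown later. Something like this is my understanding (a sketch only - friendly names and sizes are placeholders):

```powershell
# Grow the virtual disk, then extend the partition into the new space.
Resize-VirtualDisk -FriendlyName "VirtualDisk1" -Size 60GB

$partition = Get-VirtualDisk -FriendlyName "VirtualDisk1" |
    Get-Disk | Get-Partition | Where-Object Type -eq "Basic"
$size = ($partition | Get-PartitionSupportedSize).SizeMax
$partition | Resize-Partition -Size $size
```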

    Thank you and sorry for being a noob.



    • Edited by AndrzejWSE Wednesday, March 13, 2019 11:45 AM
    Wednesday, March 13, 2019 11:39 AM
• So, assuming that a VD is essential, why the need to set redundancy on the VD when the pool has inherent redundancy (parity)?

If the pool has redundancy set, presumably the VD can be set up without any redundancy?

And again, the function of the journal drive and the assigned WBC comes to mind.

If I assign two 120GB SSDs as journal drives, they are added to the pool with a 1GB WBC. I presume the WBC is used to maintain parity, and the remaining 119GB would be used as a write buffer?

    Wednesday, March 13, 2019 11:57 AM
• Hi, it's me again.

I think I got to the bottom of things while trying to plug a gap in my knowledge.

I found a great tutorial on deploying Storage Spaces on a stand-alone server, which may help other enthusiasts like me. Google: deploy standalone storage spaces.

After reading this article, I realised that when a Storage Space is created using the GUI (accessed from Control Panel - System - Storage Spaces), the virtual disk is created in the background (I presume the commands are scripted), and then, when I try to add a journal drive to a pool which already contains a VD, the journal functionality does not work.

This was confirmed by a number of articles (which I can't paste here).

The solution would be to follow pool creation in the Server Manager GUI, or to use the following PowerShell commands:

    1. Gather the disks

    Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

    2. Create Storage pool from disks (excluding the SSDs)

New-StoragePool -FriendlyName StoragePool1 -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk PhysicalDisk1, PhysicalDisk2, PhysicalDisk3, PhysicalDisk4)

3. Add SSDs as journal drives to the pool

    Get-PhysicalDisk -CanPool $True

$PDToAdd = Get-PhysicalDisk -CanPool $True

    Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $PDToAdd -Usage Journal

4. Check disk assignment

    Get-PhysicalDisk | Select-Object FriendlyName, MediaType,  Size

5. Assign disk types (HDDs and SSDs)

Set-PhysicalDisk -FriendlyName PhysicalDisk6 -MediaType SSD

Set-PhysicalDisk -FriendlyName PhysicalDisk6 -MediaType HDD

6. Create the virtual disk - this is where you assign redundancy: Parity (or Mirror)

New-VirtualDisk -StoragePoolFriendlyName StoragePool1 -FriendlyName VirtualDisk1 -Size (50GB) -ProvisioningType Thin

7. You can also specify the WBC size to fill the SSDs; however, I haven't yet determined whether to increase the WBC from the recommended 1GB to whatever the SSD disks hold.

    Get-StoragePool "Bulk Storage" | New-VirtualDisk  -FriendlyName "Bulk Storage" -ResiliencySettingName Parity -UseMaximumSize -ProvisioningType Fixed -PhysicalDiskRedundancy 2 -WriteCacheSize (318901321728) -NumberOfColumns 7
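To check what was actually applied after creation (a sketch; I assume the cache size shows up on the virtual disk object):

```powershell
# Confirm the write-back cache size and column count of the new space.
# WriteCacheSize is reported in bytes; 1GB shows as 1073741824.
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns, WriteCacheSize
```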

    I will give it a shot tonight and report back if any of the above is not true and is the fruit of my ignorance.


    Wednesday, March 13, 2019 1:44 PM
  • Hi,

    Thanks for your detailed update.

Based on the reference, for the WBC to work properly, a storage space with two-way mirroring or single parity must have at least two SSDs, and a storage space with three-way mirroring or dual parity must have at least three SSDs.

Here are the references discussing Storage Spaces; hope this helps.

Meanwhile, I'll stand by. If you have any questions or updates, please feel free to let me know.

    Highly appreciate your effort and time. 

    Best regards,



    • Marked as answer by AndrzejWSE Thursday, March 14, 2019 8:18 PM
    Thursday, March 14, 2019 9:44 AM