Reallocate Replica Volume Without Losing Recovery Points - DPM 2010

  • Question

  • I have a few grossly overallocated replica volumes.  Basically, what happened is that we have a few SQL DBs that grew extraordinarily large due to mismanagement, DPM autogrew the replica and recovery point volumes, and then the DBs were fixed and shrunk, and now DPM is overallocated.  I can shrink the recovery point volumes.  It took about a month for everything to churn, but they eventually shrank down to normal levels.  Awesome!  But the replica volumes are still way overallocated.

    I know that DPM 2010 can't shrink Replica Volumes and you shouldn't try it from DiskPart either.  I understand that and I'm okay with that.  I've reached a sort of zen state now in how wasteful DPM can be of disk space.  While I'm baffled this isn't being addressed in DPM 2012, I accept it for what it is, because DPM is so wonderful in many other ways.

    So I thought, well, if I stop protection on the member and don't delete the recovery points, I can create a new properly allocated set of volumes and then just purge the old recovery points after they expire.  Genius!  Sure, for a few weeks I'm "double allocated" but that's okay.  Zen and all that.  In the end, I'll get my smaller replica volume.

    The only problem is that DPM refuses to let me allocate new volumes!  I've tried different protection groups, different retention periods, anything!  It always insists that I must reuse the existing volumes.  I don't want to delete the old volumes, because I want to maintain history so I can restore from the old volumes until they expire.

    This is just a limitation in the UI, right?  Please tell me there's a way to do this from the command line... :-)

    Tuesday, March 20, 2012 1:30 PM

All replies

  • No answer from me, and no thread hijacking intended, but my question is about the same thing, so I'll watch this thread and let you know if I get any feedback.


    Tuesday, March 20, 2012 2:30 PM
  • Did you delete your question?  I was curious to read it, but the link in your reply isn't working.
    Tuesday, March 20, 2012 5:19 PM
  • Sorry dude, I moved my question to the Storage Pool subforum since it seemed more relevant (the topic is about changing a disk from SCSI to MPIO type).

    Not the same question, but the answer (if any) will probably boil down to the same thing.

    If I understand your problem correctly, that is. :)


    Tuesday, March 20, 2012 9:05 PM
  • As was so helpfully explained in my thread of a similar nature, a volume will exist until all active items on it are deleted entirely. I'm pretty much in a similar boat with my SQL/DPM environment. I'm in the process of preparing a new DPM server, which is gonna be built right the first time. But for now, I'm stuck with a ton of volumes with 1 TB+ of space going unused. It's a shame DPM doesn't do a better job with replica size management. I guess MS only anticipated SQL DBs getting larger and never being deleted...
    Tuesday, March 20, 2012 11:27 PM
  • I don't think any large, long-term DPM deployment can possibly be "built right" to avoid this problem.  When you protect thousands of databases across dozens of servers, some of them will bloat for reasons completely out of your control.  With no way to effectively reduce the size -- even, apparently, by doing a copy-and-delete-later, which is what I'm asking about in this post -- it makes DPM incredibly space-inefficient!  The common refrain is "drives are cheap."  Yes, they are.  But you know what isn't cheap?  The power required to keep the drives spinning!  That's expensive.
    Wednesday, March 28, 2012 5:14 PM
  • Hi Timothy,

    I always run into the same issue. Way too much disk space gets allocated for replica volumes...
    What I do about it is shrink the volumes "under the hood" in Server Manager's Disk Management. It works quite well; you just have to make sure that you only shrink in multiples of 10 MB. Otherwise DPM won't be able to grow the volume anymore. Unfortunately, this is also not a long-term solution, as DPM grows the volume again to 1.5 × the data size when it needs to expand it.

    So we run in a circle :-)

    Just wanted to share an idea for keeping disk space usage a little lower.
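    The 10 MB rule described above can be sketched as a small calculation. This is a sketch based on the post's observations only, not official DPM guidance; the function names and the 1.5× factor are illustrative:

    ```python
    MB = 1024 * 1024

    def aligned_shrink(requested_bytes: int, alignment_mb: int = 10) -> int:
        """Round a requested shrink amount DOWN to a 10 MB boundary,
        so DPM can still grow the volume afterwards (per the post above)."""
        step = alignment_mb * MB
        return (requested_bytes // step) * step

    def dpm_regrow_target(data_bytes: int, factor: float = 1.5) -> int:
        """Size DPM reportedly re-expands a replica volume to: ~1.5x the data size."""
        return int(data_bytes * factor)

    # Example: asking to shrink by 123.7 MB actually shrinks by 120 MB.
    print(aligned_shrink(int(123.7 * MB)) // MB)  # -> 120
    ```

    The catch, as noted above, is that the alignment only buys time: the next autogrow puts the volume right back at 1.5 × the data size.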

    Best regards

    Mario Schaupp

    Friday, April 13, 2012 8:29 AM
  • Based on a different TechNet article I read a long time ago, I thought there was a risk of messing up the replica by shrinking the replica volume using disk manager.  I thought some sort of VSS voodoo could cause issues.  Was that false?

    I'm going to have to put your method to the test!  So the only major hurdle is just making sure the volumes are 10MB aligned.  I can do that...

    Friday, April 13, 2012 2:54 PM
  • While it would initially consume more storage, you might be able to use the MigrateDatasourceDataFromDPM script to move the replica and recovery point volumes to a smaller custom replica volume. I haven't tested whether this works, but it might be worth looking into.
    Tuesday, April 17, 2012 7:11 AM
  • @Danny - I tried that with disk-to-disk, but it didn't work because I was never given an opportunity to size the new volumes.  I did not try moving from a DPM volume to a custom volume.  Interesting idea.  It would take two whole passes of my retention period to complete, because I would have to wait for the initial volume's recovery points to expire, then wait for the custom volumes' points to expire after migrating back, but it might work.

    Currently I'm testing manually shrinking volumes in Disk Management.  The shrinking part was easy.  However, my first restore after shrinking failed halfway through with a nondescript error.  I'm trying again.  If it fails again, I'll be left wondering whether the deallocated part of the drive held some information it needed that's now missing.  My next step will be a consistency check.  If it still fails after that, I'll have to conclude that manually shrinking is a bad idea.  I will update this post with what I find after I perform all those tests.

    Tuesday, April 17, 2012 5:40 PM
  • Here is the error I get when doing the restore after shrinking.  It looks like shrinking a replica volume really IS a bad idea (I had my doubts since nobody could tell me why exactly, but this must be why).

    DPM encountered an error while performing an operation for \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1881\918888c3-ebf3-41d1-a877-856694f6c813\Full\E-Vol\SQL\Test\Test.MDF on dpm1.contoso.local (ID 2033 Details: The parameter is incorrect (0x80070057))

    I'm going to see if a consistency check brings it back to life.  My guess is that it will, but at the cost of the existing recovery points.  That's fine for this test, but not good for production! :-)

    EDIT: Consistency check succeeded, but I still couldn't recover from my test DB even after that.  I tested recovery from another DB and it worked fine.  So, I would suggest NOT shrinking the replica partition manually!
    Tuesday, April 17, 2012 8:47 PM
  • I've worked around the waste of space in two ways.

    1. Run DPM in a VM. 

    2. Assign disks to the pool by attaching VHD/VHDX files to the SCSI interface.  I usually add them 2 TB at a time as dynamically expanding disks.

    Yes, by using dynamic disks I reduce my I/O throughput and introduce fragmentation.  But I can always defrag the underlying VHD with contig if I shut DPM down, and I honestly never have throughput problems on my DPM machines.  Plus, I can shrink a dynamic disk, so even though the allocated partitions are huge, the underlying VHD/VHDX only uses up "real" space.

    This keeps me from micro-managing the partitions in DPM while keeping the underlying storage in some sort of reasonable order.  I was able to shrink 15TB of preallocated space down to about 8TB of real space using this method.
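    For a sense of scale, the 15 TB to 8 TB reduction above works out to roughly a 47% saving in real disk usage (just arithmetic on the poster's numbers, nothing DPM-specific):

    ```python
    # Thin-provisioning win described above: DPM's volumes stay preallocated,
    # but the dynamic VHD/VHDX files only occupy what is actually written.
    allocated_tb = 15.0  # space the preallocated DPM volumes claim
    actual_tb = 8.0      # space the dynamic VHD/VHDX files really occupy
    savings = 1 - actual_tb / allocated_tb
    print(f"Real disk usage reduced by {savings:.0%}")  # -> "Real disk usage reduced by 47%"
    ```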

    This approach also prepares me for the day (read: fingers crossed) when Microsoft supports real-time data deduplication on active VHDs in Hyper-V!  OK, maybe that won't happen.

    And yes, I'm sure there will be many who disagree with my approach, but it has worked for me and keeps me from spending inordinate amounts of time on these partition management problems.


    Tuesday, January 29, 2013 9:16 PM