DPM 2012 R2 - Consistency Check eating up disk space on Replica volume

  • Question

  • I am backing up a file server with DPM.  There is currently 23.7TB of data being backed up.  The replica volume size was 26TB.  I ran into a scenario where the volume needed a consistency check.  I started the check and let it run.  A day later, the check failed, saying it had run out of disk space.  It turns out the replica volume had completely filled up.  So I added an additional 2TB of space and kicked the job off again (roughly the steps scripted in the sketch after the questions below).  After another day, the job failed and the volume was full again.  I've run consistency checks in the past and they have never needed this much space to finish; in fact, they never appeared to need any additional space at all.  I have several questions:

    1.  Is something not working as it's supposed to?

    2. How much space will DPM need to successfully complete the consistency check?

    3. Why is the consistency check taking up so much space?
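    For reference, here is roughly what I did to grow the replica and rerun the check, scripted in the DPM Management Shell. This is only a sketch: the server, protection group, and datasource names are placeholders for ours, and I am assuming the standard DPM 2012 R2 cmdlets and the datasource's ReplicaSize property.

        # Sketch of the resize-and-recheck steps (DPM 2012 R2 Management Shell).
        # "DPMSERVER", "FileServerPG", and "D:\" are placeholders.
        $pg = Get-DPMProtectionGroup -DPMServerName "DPMSERVER" |
              Where-Object { $_.FriendlyName -eq "FileServerPG" }
        $ds = Get-DPMDatasource -ProtectionGroup $pg |
              Where-Object { $_.Name -eq "D:\" }

        # Disk allocation changes go through a modifiable copy of the
        # protection group; grow the replica area by 2TB.
        $mpg = Get-DPMModifiableProtectionGroup $pg
        Set-DPMDatasourceDiskAllocation -Datasource $ds -ProtectionGroup $mpg `
            -Manual -ReplicaArea ($ds.ReplicaSize + 2TB)
        Set-DPMProtectionGroup $mpg

        # Kick the consistency check off again.
        Start-DPMDatasourceConsistencyCheck -Datasource $ds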


    • Edited by Caleb44 Monday, June 6, 2016 5:49 PM
    Monday, June 6, 2016 5:47 PM

All replies

  • The only thing I can think of would be one or more of the following:

    A) A bunch of new data was added and some was deleted on the protected server - the CC may be bringing over the newly created data before removing the deleted data from the replica.

    B) Files were compressed at one time and are no longer, so DPM needs to uncompress them on the replica.

    C) Files were deduped at one time and are not deduped now, so DPM needs to un-dedup them on the replica.

    We had a scenario where DPM 2012 R2 running on Windows Server 2012 was protecting a Windows Server 2012 R2 dedup volume, and because of incompatibilities in dedup between Windows Server 2012 and Windows Server 2012 R2, the DPM server was unable to restore files.  We introduced a fix in DPM 2012 R2 UR5 that converts the deduped replica volume to native, un-deduped files on the replica so that files can be restored.  That conversion requires more replica volume space.
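    To rule out B) and C) quickly, something like the following on the protected file server should tell you (a sketch: Get-DedupVolume only exists when the Data Deduplication feature is installed, and "D:" is a placeholder for the protected volume).

        # Was/is dedup enabled on the protected volume?
        Get-DedupVolume -Volume "D:" | Format-List Volume, Enabled, SavedSpace

        # Are any NTFS-compressed files still in the data set? A full
        # recursive scan of a volume this size will take a while.
        Get-ChildItem -Path "D:\" -Recurse -Force -ErrorAction SilentlyContinue |
            Where-Object { $_.Attributes -band [IO.FileAttributes]::Compressed } |
            Select-Object -First 10 FullName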


    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

    Wednesday, June 8, 2016 9:03 PM
    Moderator
  • A) A bunch of new data was added and some was deleted on the protected server - the CC may be bringing over the newly created data before removing the deleted data from the replica.

    Thanks for your response, Mike.  B and C do not apply to us; it's an NTFS volume and we do not compress our data in any way.

    As for A, I can verify that the addition of data was minimal, less than 100GB.  Some may have been deleted, but that would be extremely minimal, maybe 5GB.  The only event that occurred before the CC was required was that the NTFS permissions were modified on the entire data set.  Could this somehow be the cause?

    Wednesday, June 8, 2016 9:15 PM
  • Hi,

    NTFS metadata (file names, attributes, security) is all part of the NTFS master file table (MFT), and those changes are tracked by DPM the same as user file data.  Those changes would come over and be applied to the MFT on the replica, but they would be overwrites and at most should only cause more recovery point volume space usage.
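    To put a very rough number on it: if the permission change rewrote the MFT record of every file, the changed data DPM has to transfer is on the order of the file count times the record size. A back-of-envelope sketch, assuming the default 1KB NTFS file record size and that every record was touched:

        # Rough estimate of the churn a recursive ACL change generates.
        # Assumes the default 1KB NTFS file record size and that every
        # file's record was rewritten by the permission change.
        $root = "D:\"   # placeholder for the protected data set
        $count = (Get-ChildItem -Path $root -Recurse -Force -ErrorAction SilentlyContinue |
                  Measure-Object).Count
        "{0:N0} files -> ~{1:N1} GB of metadata churn" -f $count, (($count * 1KB) / 1GB)

    Even tens of millions of files only works out to tens of gigabytes by this math, so metadata overwrites alone should not come close to filling the extra 2TB of replica space.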



    Wednesday, June 8, 2016 9:30 PM
    Moderator
  • So what else do you think is causing this?
    • Edited by Caleb44 Wednesday, June 8, 2016 9:40 PM
    Wednesday, June 8, 2016 9:40 PM