Protecting multiple folders on a drive: they are treated as one drive during a consistency check

  • Question

  • Hello:

    We are very close to upgrading our backup system to DPM 2010 but need to get around an obstacle first. I was hoping that you all might have some suggestions.

    We currently have one protection group backing up a drive with approximately 1 TB of data on it. The problem is that when the data on the server doesn't match what is in DPM, it triggers a consistency check (c-check). These c-checks take anywhere from 3 to 7 days to finish (it's very magical), so we usually just end up deleting all the data on that volume and starting from scratch, losing all 5 days of retention points.

    Well, we are about to start keeping a 30-day retention period for that drive and absolutely can't keep resolving the c-check issue using the method above. To remedy the issue, I just tried backing up the individual folders on that drive, but when a c-check is needed again, all the folders are treated as one drive. So instead of running a c-check on just the offending folder, it runs on the entire drive.

    Do any of you have suggestions other than putting all those folders onto their own drives? We talked about putting the folders on their own mount points (on our SAN storage) but fear that DPM will still see them as being on the drive, rendering our solution pointless. Let me know what you all think.

    Thanks

    Chris

     

    Friday, April 1, 2011 7:55 PM

Answers

  • I did not understand why you would lose 30 days of backup.

    While a CC is in progress (which is up to 7 days for you), you will not have recovery points for those days.
    All recovery points taken prior to the CC will be maintained.

    You can separate the folders onto multiple volumes and, for usage purposes, mount those volumes under mount points on the original single volume, like C:\Folder1, C:\Folder2, C:\Folder3, etc., so the applications still see the same paths.

    If C:\Folder1 and C:\Folder2 are mount points pointing to separate volumes, you can protect them (and run a CC on them) individually.
    C:\ will also be independent of these two mounted folders.
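
    If it helps to double-check the layout before reconfiguring, here is a minimal sketch (my own illustration, not part of DPM) that uses Python's `os.path.ismount` to confirm each folder is actually the root of its own volume rather than an ordinary directory on the parent drive; the paths are hypothetical examples from the setup described above:

```python
import os

def check_mount_points(paths):
    """Report whether each path is the root of its own volume (a mount
    point) or just an ordinary folder on the parent drive."""
    results = {}
    for path in paths:
        # os.path.ismount is True only when the path is a volume mount
        # point, which is what lets DPM protect the folder independently.
        results[path] = os.path.ismount(path)
    return results

# Hypothetical usage for the layout above:
# check_mount_points([r"C:\Folder1", r"C:\Folder2", r"C:\Folder3"])
```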

    Thanks,
    Arun


    Thursday, April 7, 2011 9:05 AM

All replies

  • Yes, a c-check will run on all folders of that drive.

    In a normal scenario, the c-check shouldn't be needed often. It runs when a normal express full backup fails (or when you schedule one from the UI).

    In your case it looks like the express full backup is failing often. Check the failed express full jobs in the Monitoring view and let us know the error reported.

    Thanks,
    Arun.

    Tuesday, April 5, 2011 7:53 AM
  • It doesn't happen often, but when it does, it's a catastrophe. So I am trying to find a way to do a c-check on just the folders and not the entire drive. Is there a way to get around that? I can't afford to lose 30 days' worth of disk backups for an entire drive because c-checks take so long to complete.

    Tuesday, April 5, 2011 5:34 PM
  • Ok, that helps clear things up a little.

    I was stating that we would lose 30 days of recovery points because, by the time the c-check would finish, it was faster to just delete the volume and start over. If the c-check runs for 7 days or more, then that is 7 days' worth of backups that we are not getting. This kind of defeats the purpose of having backups in the first place.

    We thought that by breaking up the large drives into multiple drives or mount points, it would save us time on performing the c-checks. So instead of doing a c-check on a 1 TB drive, we would be able to do a c-check on a 200 GB drive or mount point. I hope that makes sense.

    I just wanted to be sure before reconfiguring 9 TB of data that the c-check wasn't going to treat the mount points as one big drive. I am assuming that because the data is on different volumes, DPM will see that and know that it is not part of the original drive.

    Thanks

    Thursday, April 7, 2011 6:24 PM
  • A consistency check doesn't transfer the complete data whenever anything on the production server changes. It compares the data on the replica and the production server and transfers only the difference, though it does have to read the changed data to figure out what changed.

    So a consistency check of 1 TB of data shouldn't take 7 days if your churn is low and the production server and DPM disks are healthy, i.e. providing good read/write throughput. A few questions here:

    1. Is there anything specific in your configuration that is making it slow, like too much churn, data transfer over a WAN, etc.?

    2. Is there any other application running on the DPM server?

    3. Are there too many small files in your data source?

    4. Does your DPM server have adequate RAM?

    5. How frequently do you run synchronization?

     

    Putting them on separate volumes will definitely help, as you can then run the checks in parallel.
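
    As a rough sanity check on the timing (a back-of-the-envelope sketch, not a DPM formula; the throughput number is an assumption), a consistency check has to read the data at least once on each side, so its floor is roughly data size divided by effective read throughput:

```python
def cc_hours(data_bytes, read_mb_per_s):
    """Rough lower bound on consistency-check duration: both the replica
    and the production copy must be read once, so time scales with data
    size over effective read throughput."""
    seconds = data_bytes / (read_mb_per_s * 1024 * 1024)
    return seconds / 3600

one_tb = 1 * 1024**4  # 1 TiB in bytes

# At an assumed healthy 50 MB/s effective read rate, 1 TiB takes roughly
# 6 hours, not days:
print(round(cc_hours(one_tb, 50), 1))  # prints 5.8
```

    Splitting the drive into five 200 GB volumes and checking them in parallel would cut the wall-clock time further, which is the point of the suggestion above.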


    Thanks, Surendra Singh [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.
    Tuesday, April 26, 2011 11:23 AM
  • If TCP Chimney Offload is enabled, it can sometimes cause consistency checks to run extremely slowly.

    Can you try disabling it and see if it improves the backup speed? Follow the link below to learn more about TCP Chimney Offload.

    http://support.microsoft.com/kb/951037

    Are you using bandwidth throttling? The link below suggests that enabling it and setting it to a very high value improves backup speed. It is for DPM 2007 but applies to DPM 2010 as well.

    http://scug.be/blogs/scdpm/archive/2009/12/22/system-center-data-protection-manager-2007-tcp-chimney-and-bandwidth-throttling.aspx

     


    Thanks, Surendra Singh [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.
    Saturday, April 30, 2011 3:41 AM
  • I just checked, and TCP Chimney Offload is not enabled.

    In any case, I think the ultimate question was answered by determining whether DPM 2010 treats mount points as part of the drive. If the data does become inconsistent, it's not going to affect the entire drive now.

    I also discovered that we did not have scheduled c-checks running, so it would sometimes be weeks or even months before inconsistencies were caught. With the new hardware and everything configured correctly, performance is much better.

    Thanks everyone!

    Chris

     

    Monday, May 2, 2011 10:27 PM