Consistency checks running every day at 8:15pm for file data

  • Question

  • We have a very large data source, over 1TB, that gets backed up to DPM 2010.

    We recently troubleshot backups that were failing constantly due to a VSS catastrophic failure, and resolved that issue by manually creating a VSS shadow storage location on the protected file server using vssadmin commands.
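
    For anyone hitting the same error: the shadow storage association was set up with standard vssadmin commands, along the lines of the sketch below (run from an elevated prompt on the file server; the drive letters and /maxsize value here are placeholders, not our actual settings).

        # list any existing shadow storage associations on the file server
        vssadmin list shadowstorage

        # create a shadow storage association for the protected volume (D:) on another volume (E:)
        # D:, E:, and the /maxsize value are illustrative placeholders
        vssadmin add shadowstorage /for=D: /on=E: /maxsize=10GB

        # if an association already exists but is too small, resize it instead
        vssadmin resize shadowstorage /for=D: /on=E: /maxsize=10GB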

    We are now getting recovery points created at 6pm each day, but at 8:15pm each night a consistency check runs for ~17 hours, and we aren't sure why. We already increased the disk allocation for the file change log on the file server from 300 MB to 900 MB.

    Where can we get more information about why the consistency check is running?

    It is *not* scheduled to run each day, but the protection group (PG) is configured to run a consistency check when the replica becomes inconsistent.

    Thanks


    Ben Pahl

    Tuesday, December 4, 2012 9:50 PM

All replies

  • Hi,

    Look for a failed synchronization job for that data source; that is basically the only way a replica volume can get marked inconsistent. You can create a custom filter under the Jobs tab and filter for just failed jobs for that data source. Once you find the failed synchronization job, look at the details and troubleshoot that cause.


    Regards, Mike J. [MSFT]

    Wednesday, December 5, 2012 3:34 AM
    Moderator
  • Mike,

    Thanks for the advice; it proved fruitful.

    At 8pm each day we are seeing a failed sync job that reports a journal wrap error. We've actually seen this before and resolved it by increasing the disk allocation for the file server.

    In this case, we've already bumped it from 300 MB to 950 MB, and today, after seeing that error, we bumped it again to 1600 MB.

    What is the highest we should set this value to?  Are there problems associated with having this value set higher?  Is this symptomatic of some other problem on our file server?

    The file share itself is over 1TB, so we aren't too surprised the change log exceeds the 300 MB default.
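
    For reference, the actual size of the NTFS change (USN) journal on the protected volume can be checked directly on the file server with fsutil; a quick sketch (X: below is a placeholder for the protected volume):

        # query the NTFS change (USN) journal on the protected volume; X: is a placeholder
        fsutil usn queryjournal X:
        # the "Maximum Size" and "Allocation Delta" lines show how large the journal is allowed to grow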

    Thanks,


    Ben Pahl

    Wednesday, December 5, 2012 6:36 PM
  • Hi,

    You have an offending application (most likely anti-virus) that is either touching lots of files or constantly updating a single file or set of files, which is causing the USN journal to wrap between DPM synchronization jobs.

    To see which files may be causing the USN journal to wrap, perform the following steps:

    1) Download the DPM 2010 diagnostic tool and install it on the protected server.

    http://www.microsoft.com/en-us/download/details.aspx?id=9462

    2) After it's installed, open an administrative command prompt and cd to: C:\Windows\MPSReports\DPM\bin

    3) Run dumpusn.exe X: -e -o C:\temp\usnlog.txt (where X: is the protected volume's drive letter).

    4) Press CTRL+C to stop it; otherwise the file will be too large to open in Notepad.

    5) Open the file and see if there is a common set of files that are constantly being updated (a rough way to tally them is sketched below). If you find some, you can use Resource Monitor to see which application or process is responsible, then update or uninstall that application.
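
    If the log is too large to eyeball, a rough tally of the most frequently updated paths can be made with PowerShell. This is only a sketch: it assumes dumpusn.exe writes one record per line with the file path as the last whitespace-delimited field, so adjust the parsing to the actual output format.

        # count how often each path appears in the dumpusn output and show the top 20
        # assumes one record per line with the file path in the last whitespace-delimited field
        Get-Content C:\temp\usnlog.txt |
            ForEach-Object { ($_ -split '\s+')[-1] } |
            Group-Object |
            Sort-Object Count -Descending |
            Select-Object -First 20 Count, Name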


    Regards, Mike J. [MSFT]

    Wednesday, December 5, 2012 8:18 PM
    Moderator