DPM 2012R2 Backup to Azure for drives larger than 825GB

  • Question

  • Hi,

    I'm having an issue where I cannot get our primary data server to create an online backup to Azure. We're running the latest version of DPM (4.2.1312.0) and the agents have been updated accordingly (the data server has the most current agent, and I've downloaded and installed the most current Azure agent from the portal -- 2.0.8707.0).

    What I've done:

    • Removed and re-added the server to the protection group in DPM
    • Bounced both servers a number of times
    • Re-registered the DPM server to Azure
    • Fully patched the DPM server

    Local recovery points from the host server to DPM work fine, but all attempts to kick off the DPM-->Azure backup ultimately result in the following:

    Description: Online recovery point creation jobs for <Volume> on <Server> have failed 1 times. (ID 3188)
    The backup operation failed because the maximum allowed data source size of 825 GB was exceeded. (ID 100073)

    The drive in question holds a little under 1TB of data in total.

    Has anyone out there had success with this, and if so, did it require any particular steps to enable? Based on my reading it should have been a case of "apply the update and job done", but that doesn't appear to be the case.

    Thanks,

    Nick.

    Tuesday, June 9, 2015 4:54 AM


All replies

  • Hi,

    While an online backup is in progress, can you open Disk Management and check the size of the virtual disk that gets mounted, keeping an eye on its free space until the error occurs?
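
    If it helps, a minimal sketch of one way to watch that free space without sitting in Disk Management, in Python; the mount path below is an assumption, so point it at whatever drive letter or folder the backup VHD actually gets mounted under on your server:

        import shutil
        import time

        # Assumed mount point of the VHD that gets mounted during the online
        # backup; replace with the path shown in Disk Management.
        MOUNT_PATH = "E:\\"

        # Poll the volume every 30 seconds and print total/free space, so you
        # can see whether it actually runs out before the job fails.
        while True:
            try:
                usage = shutil.disk_usage(MOUNT_PATH)
                print("total=%.1f GiB  free=%.1f GiB"
                      % (usage.total / 2**30, usage.free / 2**30))
            except OSError:
                print("volume not mounted yet")
            time.sleep(30)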


    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

    Friday, June 12, 2015 7:13 AM
    Moderator
  • Hi,

    This does appear to be the likely cause. I can see the VHD gets mounted on a 1.7TB drive, grows towards that limit, and then the job fails. For reference, the current size of the replica is 1.693TB, which is close to, but not more than, 1.7TB.

    My understanding was that the Azure backups required a Scratch directory equal in size to 5-10% of the total data to be backed up. Our total storage to the cloud is somewhere in the 7TB range, which would put the required Scratch storage at around 700GB (much less than 1.7TB). Why is DPM attempting to reserve the entire replica size (or more) for the backup to run? And why then would it fail given that the replica size does not exceed the VHD size -- is there overhead involved?
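
    For what it's worth, here is the arithmetic I'm working from, as a small sketch; the 10% figure is my reading of the published scratch-space guidance, not something DPM reports:

        # Rough scratch-space estimate, assuming the documented "5-10% of the
        # data to be backed up" rule of thumb for the Azure Backup scratch folder.
        def required_scratch_gb(data_gb, ratio=0.10):
            return data_gb * ratio

        total_cloud_data_gb = 7 * 1024                    # ~7TB protected to Azure
        print(required_scratch_gb(total_cloud_data_gb))   # ~716.8GB, roughly the 700GB above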

    Nick.

    Tuesday, June 16, 2015 2:15 AM
  • I have a 10-week-old support case for nearly the same issue. Online recovery point creation fails for a large volume. We're not running out of VHD/scratch space. Error 3188 and "An unexpected error occurred while the job was running. (ID 104)"
    Wednesday, June 17, 2015 7:06 PM
  • Hi Nick,

    Search the C:\Program Files\Microsoft Azure Recovery Services Agent\Temp\cbengine*.errlog for this error:

        Failed: Hr: = [0x8004240f] Error in extending the volume

    This would indicate we're trying to back up too much data.
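
    If it is easier than eyeballing the logs, a rough sketch that scans the errlog files for that HRESULT (the directory is the default agent path above; adjust it if the agent is installed elsewhere, and pass encoding="utf-16" to open() if the lines come out garbled):

        import glob
        import os

        LOG_DIR = r"C:\Program Files\Microsoft Azure Recovery Services Agent\Temp"
        PATTERN = "0x8004240f"   # "Error in extending the volume"

        # Scan every cbengine*.errlog and print any line containing the HRESULT.
        for path in glob.glob(os.path.join(LOG_DIR, "cbengine*.errlog")):
            with open(path, errors="ignore") as log:
                for lineno, line in enumerate(log, 1):
                    if PATTERN in line:
                        print("%s:%d: %s" % (os.path.basename(path), lineno, line.rstrip()))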


    Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

    Tuesday, June 23, 2015 5:39 PM
    Moderator
  • Hi Mike,

    I can see that error in the log:

    Failed: Hr: = [0x8004240f] Error in extending the volume

    Failed: Hr: = [0x8004240f] Resizing failed 552026112 3998679040 

    I can also see the following error:

    14A8 1FC4 07/11 18:15:05.453 79 WcfClient.cs(982) D1FB4A1C-DEE5-4A96-907C-7CD8D9731EE4 FATAL Failed to make web service call | Params: {Exception:  = System.ServiceModel.FaultException`1[Microsoft.Internal.CloudBackup.Common.FailureModeling.CloudServiceFault]: Internal Service Error (Fault Detail is equal to ErrorCode = CloudAsyncWorkSubmitted, DetailedErrorCode = 0, DetailedErrorSource = None/None, Message = 
    14A8 1FC4 07/11 18:15:05.453 79 WcfClient.cs(982) D1FB4A1C-DEE5-4A96-907C-7CD8D9731EE4 FATAL ).}

    None of this makes much sense to me. The scratch directory requirement for Azure backups is listed as 10% of the total data to be backed up. On a 2TB VHD that would be 200GB. I've got a little less than 1TB available for scratch, almost five times more than required.
    Adding to this, one of the drives that fails is only 500GB in size and should be comfortably within the limit, yet it fails as well.

    How do I move forward in troubleshooting this? What are the recommended actions?

     

    Nick.


    • Edited by elnickos Monday, July 13, 2015 2:11 AM
    Monday, July 13, 2015 1:42 AM
  • Watching it more closely,

    I can see the VHDs that get created locally during the online backup are no longer running out of space either; I'm not sure when that started happening. I just watched the three failing jobs start and correctly create and mount their VHDs. The VHDs grew in allocated size, but never to a point that exceeded the available space. Based on this, I am even more confused as to why I am receiving disk space errors.

    For the record, the three VHD files that are created are: 1.061TB (638.97GB free), 1.688TB (11.86GB free), 446.94GB (1.253TB free).

    One is close to the limit, but it is not growing beyond that point (as in, I do not see free space go to 0). Certainly the last backup is well within range.

    Nick.

    Monday, July 13, 2015 3:07 AM
  • If you have Premier Support, you might want the support engineer to reference case REG:115042012654754. Our issue with creating large online backups has been resolved.


    Monday, July 13, 2015 3:21 PM
  • Can you provide any information on that? Was the resolution found on your side or on the Azure side?
    Thursday, July 16, 2015 12:38 AM
  • Hi Elnickos,

    The issue found in 115042012654754 was due to the FileCatalog update process in Azure. The user data was making it to Azure, but the associated catalog was timing out. After a timeout period of inactivity, the cbengine service crashed. An Azure service-side fix was deployed to improve the speed of the FileCatalog process. No action was required on the client side after the Azure service was updated.

    It seems you are facing a different issue; I suggest you open a support case so additional troubleshooting can be performed.


    Regards, Mike J. [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights.

    • Marked as answer by elnickos Friday, August 21, 2015 1:21 AM
    • Unmarked as answer by elnickos Friday, August 21, 2015 1:21 AM
    Thursday, July 16, 2015 2:18 PM
    Moderator
  • If anyone is interested, I discovered that this issue was linked to another one that I had going on.
    I currently have a support case open, and any details will be provided in the following thread:

    https://social.technet.microsoft.com/Forums/en-US/b4654f77-2f66-45bb-9669-5872097b9a1c/dpm-2012r2-online-backup-error-id-3188-and-34504?forum=dpmfilebackup

    Nick.


    • Edited by elnickos Friday, August 21, 2015 1:20 AM
    • Marked as answer by elnickos Friday, August 21, 2015 1:21 AM
    Friday, August 21, 2015 1:20 AM