Architecture Query with DPM 2012 RRS feed

  • Question

  • We are looking into DPM 2012 as our enterprise backup solution, to replace Symantec Backup Exec 2012.

    Our enterprise data looks like this:

    DataCenter Site (20/20 MB) - data to backup - 4.9 TB

    Regional Site 1 (2/2MB) - data to backup - 530 GB

    Regional Site 2 (2/2MB) - data to backup - 471 GB

    Regional Site 3 (2/2MB) - data to backup - 204 GB

    Regional Site 4 (1/1MB) - data to backup - 60 GB

    Regional Site 5 (4/4MB) - data to backup - 207 GB

    Regional Site 6 (4/4MB) - data to backup - 965 GB

    Regional Site 7 (4/4MB) - data to backup - 2 TB

    Regional Site 8 (8/8MB) - data to backup - 2.3 TB

    Regional Site 9 (4/4MB) - data to backup - 1 TB

    Regional Site 10 (8/8MB) - data to backup - 1.3 TB

    I am not concerned about how to back up the datacenter, but I am concerned about the regional sites. There is a large amount of data to back up, and we want it all deduplicated to the datacenter, to disk and then to tape. Can DPM 2012 do this? What are your recommendations on how to achieve this? How would other enterprises back up this data with this architecture?
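    For context, here is a rough back-of-the-envelope estimate of how long the initial backup would take to push over each of these links. This is only a sketch: it assumes the link speeds above are symmetric megabits per second and that the full link is dedicated to backup traffic, both of which are optimistic simplifications.

```python
# Rough estimate of initial full-backup transfer time over each site's WAN link.
# Assumes link speeds are in Mb/s (megabits/second) and fully available to
# backup traffic -- both assumptions, not facts from the thread.

SITES = {
    # site: (link Mb/s, data GB)
    "Regional Site 1": (2, 530),
    "Regional Site 4": (1, 60),
    "Regional Site 8": (8, 2300),
}

def seed_days(link_mbps: float, data_gb: float) -> float:
    """Days to push data_gb across a link of link_mbps megabits/second."""
    seconds = (data_gb * 8 * 1000) / link_mbps  # GB -> megabits, then / Mb/s
    return seconds / 86_400

for site, (mbps, gb) in SITES.items():
    print(f"{site}: ~{seed_days(mbps, gb):.1f} days")
```

    Even under these generous assumptions, the 2.3 TB site on an 8 Mb/s link needs nearly a month of continuous transfer, which is why the initial seeding strategy matters so much here.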

    Monday, January 7, 2013 5:29 AM

All replies

  • Hi,

    You can look into Crunch for deduplication.

    Also look into RiverBed as a way to optimize DPM traffic over the WAN.


    Friday, May 10, 2013 5:36 AM
  • To answer your questions -

    DPM 2012 SP1 doesn't perform any deduplication on its own.

    It is, however, aware of Windows Server 2012 deduplicated volumes, and will store disk-based backups in the same optimized format.

    However, it will rehydrate the data when writing it off to tape. You will only see savings for disk-based backups, when the data is already deduplicated and only while it is stored to disk. So the simple mitigation here is to not write as many full replicas off to tape.

    So, no, DPM can't do exactly what you are looking to achieve. As to the low-bandwidth concerns: getting your initial replicas built can be done in two ways.

    First, by just waiting it out and letting the replica build across your WAN connection; this takes roughly as long as a full backup would.

    Second, it is possible to manually provide replica volumes. If you are not familiar with DPM at all, it's a bit of a pain, but you can essentially copy the data to an external drive, ship it, and copy it to manually created volumes in your storage pool. Pick those custom volumes when you create the protection group, and DPM will do a consistency check of the data.
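    To see why the ship-a-drive approach is usually worth the hassle, a quick comparison of the two seeding options helps. The copy speed and transit time below are illustrative assumptions, not measurements from this thread.

```python
# Compare seeding an initial replica over the WAN vs. shipping a copy on an
# external drive. All speeds and transit times are illustrative assumptions.

def wan_seed_days(data_gb: float, link_mbps: float) -> float:
    """Days to transfer data_gb over a link of link_mbps megabits/second."""
    return (data_gb * 8 * 1000) / link_mbps / 86_400

def ship_seed_days(copy_mb_per_s: float, data_gb: float, transit_days: float) -> float:
    """Days to copy data to a drive, ship it, and copy it into the storage pool."""
    copy_days = 2 * (data_gb * 1000 / copy_mb_per_s) / 86_400  # copy out + copy in
    return copy_days + transit_days

data_gb = 2300  # e.g. Regional Site 8
print(f"WAN on 8 Mb/s link:                ~{wan_seed_days(data_gb, 8):.1f} days")
print(f"Ship (100 MB/s copies, 2d transit): ~{ship_seed_days(100, data_gb, 2):.1f} days")
```

    For the largest sites, shipping a drive cuts seeding from weeks to days; the WAN then only has to carry the consistency check and the daily change.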

    That being said, once everything is in place, what kind of data churn are you seeing in your differential backups? The biggest concern once you have the initial full replicas built is how much data change is occurring, your retention scope for restore points, and what type of data it is.

    My 100-200 GB remote file shares hardly change more than a few GB per day, and that gets handled just fine on a 3/3 MB connection. At the same time, I have another server that hits about 1 TB of change per day and uses about 40 MB of a throttled WAN link for a good chunk of the day. So even if you are protecting a 2.3 TB file server on an 8 MB link, how much is really changing daily? Because that is all you will have to handle and address with your bandwidth usage.
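    A quick way to frame that question is to work out how much daily change a given link can actually absorb. The backup-window length below is an assumption for illustration; plug in your own numbers.

```python
# How much daily change a link can carry, given a backup window in hours.
# Assumes link speed in Mb/s and decimal units (1 GB = 8000 Mb); the 12-hour
# window is an illustrative assumption.

def daily_capacity_gb(link_mbps: float, window_hours: float) -> float:
    """GB of changed data a link of link_mbps can transfer in window_hours."""
    return link_mbps / 8 / 1000 * window_hours * 3600

# e.g. the 8 Mb/s site with a 12-hour overnight backup window:
print(f"~{daily_capacity_gb(8, 12):.0f} GB of change per day")
```

    If a site's daily churn stays under that figure, the link can keep up regardless of how large the protected volume is in total.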

    So look at your daily change rate, and at why you are looking for a new solution. DPM does do on-wire compression on its own, but you may need to look into a WAN optimization appliance as suggested by Buchatech.

    As to how other enterprises would do it: there is no good way to answer that. There are too many company policies and legal concerns (not requirements) to get into that discussion.

    Friday, May 10, 2013 2:33 PM