Thoughts before protecting a 2-node file cluster

  • Question

  • We're about to set up a new 2-node cluster, replacing our existing one.

    It's a physical cluster running 2008 R2. The current cluster is protected by Another Product.

    All user data is obviously going to be migrated to the new cluster.

    We'll need a daily backup of all the files. I'm not sure how many files each drive has, but we're talking millions.

    My high-level questions before deploying DPM here are:

    Is DPM 2012 with System Center SP1 installed capable of fully protecting a Windows Server 2012 HA failover cluster? Or should we stay on 2008 R2 to be on the safe side?

    What are the current limitations in DPM 2012 regarding the number of files on a drive/mount point?

    What is the recommended implementation when protecting file clusters?

    Will we likely run into situations where DPM triggers consistency checks that won't have time to complete before the next scheduled job is triggered? I've read other posts here saying that frequent access to many files, e.g. from an I/O-intensive application or antivirus, tends to be an issue.

    If so, what options does one have to trim this, other than splitting the data across more disks (which doesn't feel like an option, since the design is set by other factors)? I'm talking about settings in DPM, e.g. to reduce consistency checks or speed them up.

    Disks are between 200 GB and 2,100 GB.


    Ivarson


    • Edited by Ivarson Wednesday, January 9, 2013 10:13 AM s/2008/2012/g
    Wednesday, January 9, 2013 9:35 AM

All replies

  • I'm protecting a file server with several TB of data split across about 40 iSCSI disks. Currently it runs on a two-node 2008 R2 cluster, but I'm going to move it to a Windows 2012 cluster in the next couple of weeks.

    The biggest issue with DPM and protecting file servers is the consistency check (CC) jobs: one disk of about 3 TB can take 3-4 days to complete a CC job. This is a big pain, because DPM will not back up a disk while a CC job is running on it. If you upgrade from DPM 2012 to SP1, for instance, all data sources are marked inconsistent and you have to run CC jobs... If one of the bigger ones fails for some reason, you have to rerun it, and then you are basically without recovery points for a week. I have complained several times that the long-running CC jobs need to be addressed ASAP, but as far as I know nothing has changed so far.

    With Windows 2012 I'm looking into moving to a Scale-Out File Server, because I think the probability of a disk having a dirty shutdown is lower than on a traditional cluster. But I have yet to find confirmation of that...
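
    As a rough back-of-envelope (my own arithmetic, nothing official), the effective throughput implied by those CC times shows that raw copy speed is not the limiting factor; a quick Python sketch:

        # Back-of-envelope only (not DPM internals): effective throughput implied
        # by a consistency check that needs 3-4 days to walk a 3 TB volume.
        TB = 1000**4  # decimal terabyte in bytes

        volume_bytes = 3 * TB
        for days in (3, 4):
            seconds = days * 24 * 3600
            mb_per_s = volume_bytes / seconds / 1e6
            print(f"{days} days -> ~{mb_per_s:.0f} MB/s effective throughput")

        # Prints roughly 9-12 MB/s, far below what the disks or the network can
        # deliver, which suggests per-file metadata work, not data transfer,
        # dominates the CC duration.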

    Anyway, if the CC job issue were solved, DPM would be an almost perfect backup tool for file servers. It does a very good job with SQL and Exchange, SharePoint is mostly good as well, and Hyper-V is only OK (at least on 2008 R2).

    Thursday, January 10, 2013 10:42 AM
  • DPM 2012 SP1 fully supports Windows Server 2012; I would even recommend that you upgrade the DPM server itself to Windows Server 2012.
    There are no limitations regarding the number of files, since DPM tracks changes at the block level. I have protection groups with millions of files in them; it takes a while to load all the files during a recovery, but it works just fine.

    The recommended limits for a DPM server are as follows: 120 TB per DPM server, with an 80 TB replica size and a maximum recovery point size of 40 TB.

    If you need to protect a larger amount of data, you need another DPM server.
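
    As a trivial illustration of those limits, here is a quick Python sketch (the 200 TB figure is only a hypothetical example, not anyone's actual data size):

        # Sizing sketch against the limits quoted above; the input figure is a
        # hypothetical example.
        import math

        MAX_REPLICA_TB_PER_SERVER = 80  # recommended replica ceiling per DPM server

        def dpm_servers_needed(total_source_tb):
            """Minimum number of DPM servers if each holds at most 80 TB of replicas."""
            return max(1, math.ceil(total_source_tb / MAX_REPLICA_TB_PER_SERVER))

        print(dpm_servers_needed(200))  # -> 3 servers for a hypothetical 200 TB estate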

    Regarding protecting file clusters: move ahead to SMB 3.0 and CSV file clusters; they give your file clusters much better flexibility.

    For the CC checks with large amounts of data, it is recommended to create a separate network dedicated to backup. Moreover, dedicate 10 Gbps NICs to this network; 1 TB takes around 15 minutes to copy over a 10 Gbps connection (assuming disk speeds are not the bottleneck).
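
    For reference, the arithmetic behind that estimate, as a quick Python check (assuming the link is otherwise idle and the disks keep up):

        # Sanity check of the "1 TB in about 15 minutes over 10 Gbps" figure,
        # at line rate and at an assumed 80% effective utilisation.
        TB_BYTES = 1000**4
        LINK_BPS = 10e9

        for efficiency in (1.0, 0.8):
            seconds = TB_BYTES * 8 / (LINK_BPS * efficiency)
            print(f"{efficiency:.0%} utilisation -> {seconds / 60:.1f} minutes per TB")

        # -> about 13 minutes at line rate and about 17 minutes at 80%,
        #    consistent with the "around 15 minutes" estimate above.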

    Keep us informed if you have any issues, and good luck.

    • Proposed as answer by Willy Moselhy Thursday, January 10, 2013 6:34 PM
    Thursday, January 10, 2013 11:06 AM
  • If the data source is consistent, then the number of files is more or less irrelevant. But if you need to run a consistency check, then it is not, because the CC job walks the filesystem folder by folder, file by file, and that is what makes CC jobs so painfully slow. This is on EqualLogic PS 6500 storage currently; we're moving to Compellent next month.
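
    Just to illustrate the scaling (this is not DPM's actual algorithm, only a sketch of why a per-file walk hurts once the file count reaches the millions):

        # Illustration only: a file-by-file walk scales with file count, not with
        # data size. The directory path and the per-file cost are assumptions.
        import os

        def estimate_walk_hours(root, seconds_per_file=0.005):
            """Estimate traversal time given a fixed assumed cost per file."""
            file_count = sum(len(files) for _, _, files in os.walk(root))
            return file_count * seconds_per_file / 3600

        # Even without touching file contents, 10 million files at an assumed
        # 5 ms each is roughly 14 hours of pure metadata work:
        print(10_000_000 * 0.005 / 3600)  # -> ~13.9 hours
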
    Thursday, January 10, 2013 11:15 AM
  • This is pretty much what I'm afraid of: that CCs are going to run day in and day out, leaving no window for either backup or restore. What steps have you taken to settle down these CCs?

    Obviously excluding quarantine folders is a good start, but with many thousands of users and applications, a scenario with rapid changes to lots of files will likely occur.


    Ivarson

    Monday, January 21, 2013 10:24 AM
  • Thanks for sharing.

    Might I ask what kind of data you're protecting? Is it very "alive", like data from video-editing software, mapping software and so forth?

    I'm thinking of booking a proactive hour with Microsoft Support to prepare this part, since I'd like to avoid CCs in production at any cost :-p


    Ivarson

    Monday, January 21, 2013 10:28 AM
  • I'm also a bit worried about storage. We have a new IBM DS3500 with a configured dynamic storage pool, and we're going to hand out chunks of 2-4 TB of storage to each DPM server on a need-to-have basis. Is there anything to consider regarding:

    1. The necessary block size

    2. Disk sizes as they appear in the DPM server's own storage pool, e.g. can I protect a 4 TB cluster volume as long as I have several 2 TB disks, letting DPM span the protected volume across them?


    Ivarson

    Tuesday, April 23, 2013 9:31 AM