Consistency checks failing on one cluster, succeeding on another

  • Question

  • Hi @ all,

[all servers involved are Windows Server 2016, patched up to the July CU]

I am trying to back up Hyper-V VMs running on three separate clusters. All VM configs and disks reside on the same SOFS living on an S2D cluster. The clusters are as follows:

    Production: 10 nodes on HP Synergy blades, production VLAN and SMB VLAN sharing the same converged networking infrastructure.

DMZ: 2 nodes on HP rack servers, production VLAN is 4x1Gb copper, SMB VLAN is 10Gb optical

Management: 2 nodes on HP rack servers, production VLAN is 4x1Gb copper, SMB VLAN is 10Gb optical. This is where the DPM VM resides.

    Shares on the SOFS, permissions, delegation and so on are configured in an identical way and differ only in regard to the machine accounts actually granted permissions. All permissions are granted via Active Directory groups.
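One quick way to verify that the shares really are configured identically (apart from the machine accounts granted access) is to diff exported ACL dumps per share. A minimal sketch, assuming the dumps are plain-text listings (e.g. saved `icacls` output); the group and account names below are made up for illustration:

```python
# Hypothetical sketch: diff two exported ACL dumps to confirm the
# shares differ only in which machine accounts are granted access.
def acl_diff(a: str, b: str) -> set[str]:
    """Return ACL entries present in one dump but not the other."""
    sa = {line.strip() for line in a.splitlines() if line.strip()}
    sb = {line.strip() for line in b.splitlines() if line.strip()}
    return sa ^ sb  # symmetric difference: entries unique to either side

# Example dumps (invented names, not from the actual environment):
dump_prod = "DOMAIN\\ProdHosts:(OI)(CI)F\nDOMAIN\\DPMServer$:(OI)(CI)F"
dump_dmz  = "DOMAIN\\DmzHosts:(OI)(CI)F\nDOMAIN\\DPMServer$:(OI)(CI)F"
print(acl_diff(dump_prod, dump_dmz))
# Only the per-cluster host groups should show up; anything else
# would be an unintended configuration difference.
```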

    Virtual machines can be created, managed and live migrated within their respective clusters without any issues.

DPM 2019 (version 10.19.58.0) runs on a VM with a local SQL Server 2016 instance.

    DPM agent is installed on all hosts in all clusters, connectivity shows OK. Replicas of both Windows and Linux VMs can be created without warnings. And here's where it gets interesting:

    • Any VMs on the Management Cluster (where the DPM VM is running) can be backed up without issue regardless of OS or power state.
• On the DMZ cluster, an initial replica can be created and shows OK. On attempting to create a recovery point, the job fails with "Replica is inconsistent". The consistency check then ends in an error state with ID 2033 ("a required privilege is not held by the client" (0x80070522)). The UNC path and host shown in the error message are those where the VM is stored and registered. This behaviour does not change with the power state of the VM.
• On the production cluster, an initial replica stays green for several seconds, then goes red. From there on, the behaviour is identical to that on the DMZ cluster.
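For what it's worth, the HRESULT in that error decodes to a standard Win32 code, which confirms the message text:

```python
# Decode the HRESULT 0x80070522 from the DPM error message.
hr = 0x80070522
facility = (hr >> 16) & 0x1FFF   # 7 = FACILITY_WIN32
code = hr & 0xFFFF               # low 16 bits carry the Win32 error code
print(facility, code)            # 7 1314
# 1314 is ERROR_PRIVILEGE_NOT_HELD:
# "A required privilege is not held by the client."
```

So this is genuinely a privilege/permission problem on the path shown in the error, not a generic VSS failure.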

    All hosts have been rebooted at least once since DPM agent rollout.

    What am I missing?

• added the machine account of the DPM server to Full Access on one of the shares, to no avail
• enabled the File Server VSS writer on the S2D cluster nodes, to no avail (backup of the Management cluster worked before I did that)


    Evgenij Smirnov

    http://evgenij.smirnov.de

    Monday, July 29, 2019 7:01 AM

All replies

• Update: I found a discrepancy in configuration. On the storage nodes, the hosts where backup works were members of the local DPMRATrustedDPMRAs group, while the others weren't.

    I added the remaining hosts and, for good measure, the cluster name objects as well but the error persists. Rebooted both hosts in the smaller cluster, no dice.
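To double-check that step, the membership of the trust group on each storage node can be compared against the full list of Hyper-V hosts. A small sketch, assuming the group members have already been retrieved (e.g. parsed from `net localgroup` output); all host names here are invented:

```python
# Hypothetical sketch: report Hyper-V host machine accounts that are
# missing from the local DPM trust group on a storage node.
# AD computer accounts end with '$'.
expected = {"HOST01$", "HOST02$", "DMZ01$", "DMZ02$"}  # all hosts (invented)
members  = {"HOST01$", "HOST02$"}                      # current group members
missing = sorted(expected - members)
print(missing)  # ['DMZ01$', 'DMZ02$']
```

Running a check like this on every storage node would rule out one node having been missed when the accounts were added.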

    So, dear community, if anybody has further ideas, I am all ears :-)


    Evgenij Smirnov

    http://evgenij.smirnov.de

    Tuesday, July 30, 2019 6:56 AM