Tuesday, September 22, 2009 12:10 PM
I have set up DFS successfully across two servers (Windows Server 2008 and 2003) for two folders.
For one of the folders the replication works great; no problems. On the other folder it is making an extra copy of folders (and the files within), but only if the folder name begins with a space.
For example, "\\server\share\example" is ok but "\\server\share\ example" is not. DFS Replication is making two copies of the " example" folder. I believe this is what is stopping the server from completing its initial replication.
Has anyone come across this before? I don't really want to have to manually check all the folders for spaces.
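Rather than eyeballing every folder, a short script can find the offenders. This is just a sketch: it assumes Python is available on a machine that can reach the share, and the `\\server\share` path is a placeholder for the real replicated folder root.

```python
import os

def find_space_named_dirs(root):
    """Walk `root` and return directories whose names start or end with a space."""
    hits = []
    for dirpath, dirnames, _ in os.walk(root):
        for d in dirnames:
            if d != d.strip():  # leading or trailing whitespace in the name
                hits.append(os.path.join(dirpath, d))
    return hits

if __name__ == "__main__":
    # r"\\server\share" is a placeholder; point it at the replicated folder root.
    for path in find_space_named_dirs(r"\\server\share"):
        print(repr(path))  # repr() makes the leading space visible
```

Printing with `repr()` makes the otherwise-invisible leading space obvious in the output.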
Tuesday, September 22, 2009 7:13 PM
Are you using DFS Replication or FRS?
Tuesday, September 22, 2009 7:24 PM
DFS Replication; I set it up in the DFS Management console.
I'm going to try recopying all the files w/ richcopy/robocopy. I did that with the folder that's working. With the problem folder, I did the normal copy/paste.
Tuesday, September 22, 2009 7:37 PM
May I suggest you open the DFS-R debug log files on both servers, located at %systemroot%\Debug\Dfsr00*.log, with Kiwi Log Viewer (tailing, coloring by filter) or Mandiant Highlighter (no tailing, visual), and search for the name of the non-replicating folder?
Check out the DFS-R debug log help series by Ned Pyle.
Tuesday, September 22, 2009 10:50 PM
In the future, you should allow DFS-R to take care of replication. Copying from one client to the other takes more bandwidth/time.
Remember to use the /copyall argument with robocopy so that you don't lose ACLs.
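For reference, a pre-seeding copy along those lines might look like the command below. This sketch only builds and prints the command line; the source and destination paths are placeholders, and switches beyond /COPYALL (such as /E and the retry limits) are illustrative additions, not from this thread.

```python
def build_robocopy_cmd(src, dst, mirror=False):
    """Build a robocopy command that preserves ACLs via /COPYALL.
    /E copies subdirectories including empty ones; /R:1 /W:1 limits retries."""
    cmd = ["robocopy", src, dst, "/E", "/COPYALL", "/R:1", "/W:1"]
    if mirror:
        cmd.append("/MIR")  # mirror mode also deletes extras at the destination
    return cmd

if __name__ == "__main__":
    # Placeholder paths; run the printed command on the source server itself.
    cmd = build_robocopy_cmd(r"D:\Data", r"\\server2\D$\Data")
    print(" ".join(cmd))
```

Running it through a wrapper like this makes it easy to keep the exact switches in version control so every pre-seeded folder is copied the same way.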
I'm incorrect... RDC doesn't use compression during initial replication.
I'm incorrect in my incorrectness... RDC does use compression during initial replication.
Wednesday, September 23, 2009 4:06 AM
"In the future, you should allow DFS-R to take care of replication. Copying from one client to the other takes more bandwidth/time."
I was under the impression it was better to do a manual copy first before replication. I've started again from the beginning, letting DFS-R handle it.
"Remember to use the /copyall argument with robocopy so that you don't lose ACLs."
Wednesday, September 23, 2009 12:59 PM
Here are some things I've found out over my war with DFS-R (pleasant as it truly is):
1) Read these bullet points; they discuss how changes are reflected.
2) Learn about debugging a bit. Pretty easy stuff actually. I use Kiwi Log Viewer (free, tailing) and Mandiant Highlighter (free, not tailing, but will help you easily locate a filename in a ginormous log file, for instance).
3) The last writer always wins. Sort of...
4) Speaking of backups/restores, I suggest backing up the DFS files and folders by utilizing shared folders (whether they are administrative shares or user-accessible shares) instead of using a DFS add-on in your backup software. When restoring, redirect the restoration to a location other than the DFS-R replicated folders (note that I use Backup Exec, not a Microsoft backup solution).
Last writer usually wins, meaning that if you drop these restored files into the replicated folder, DFS-R will probably think to itself, "these are old files, I will do my job and replace them with the new files on this other member."
Delete/move the files that you are restoring on the other member servers to an alternate location. This way the files you have restored and copied into the DFS folder will replicate (as the technically newer files no longer exist in the same location).
Also, most DFS add-ons in backup software do not let you redirect the files.
5) Prioritize your replicated folders by placing them into groups and utilizing bandwidth "scheduling" (aka allotting bandwidth to different "groups of replicated folders" aka replication groups).
6) If you need to "stop DFS replication," then instead of stopping the service, disconnecting a member, or disabling a member in a replicated folder, simply lower the "schedule" (bandwidth allotment) to 0 Mbps, make your changes (see point #4), and readjust the bandwidth schedule.
The DFS Replication service does more things than DFS replication. DFS-R monitors the file system via the USN journal (which records certain file system operations... I think), and then starts its wild comparison and reconciliation of checksums stored in its database.
Basically, lowering the bandwidth allotment will allow all these other functions of DFS-R to continue, without allowing data/changes to replicate to other members. DFS-R config is (mostly) stored in Active Directory, so use AD Sites & Services to replicate the AD changes to DCs that are on site with the DFS member server, and use replmon or the DFS-R GUI (dfsmgmt.msc) to make sure the changes have been reflected on the DC on the site of the DFS member server.
7) Understand the four statuses of a replicated folder: "uninitialized," "waiting for initial replication," "normal," and one more; I can't locate the documentation with the explanation. "Waiting for initial replication" doesn't really mean "waiting for initial replication." If there is "pre-staged" content (pre-existing files/folders that are the same on all members), DFS-R will still use last-writer-wins magic to compare it against the designated primary member (see the next bullet point). "...initial replication" is sort of a misnomer for the status.
8) Your designated primary server is the "master server"... for instance, when running a DFS-R replication group diagnostic report (through the dfsmgmt.msc GUI), the report uses (and you should use) the "primary member" as the basis for "backlogged receiving transactions" and "backlogged sending transactions" (these phrases mean what they say).
9) I just found this here FAQ.
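Regarding the backlog figures in point 8 above: you can also query them from the command line with `dfsrdiag backlog`. The sketch below only assembles the command; the replication group, replicated folder, and member names are placeholders, not values from this thread.

```python
def build_backlog_cmd(rgname, rfname, sending, receiving):
    """Assemble a `dfsrdiag backlog` command reporting files queued
    from the sending member to the receiving member for one replicated folder."""
    return [
        "dfsrdiag", "backlog",
        f"/rgname:{rgname}",   # replication group name
        f"/rfname:{rfname}",   # replicated folder name
        f"/smem:{sending}",    # sending member
        f"/rmem:{receiving}",  # receiving member
    ]

if __name__ == "__main__":
    # Placeholder names; swap the members to see the backlog in each direction.
    print(" ".join(build_backlog_cmd("FileShares", "Example", "SERVER1", "SERVER2")))
```

Running it twice with the members swapped shows the backlog in both directions.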
And post here if you have questions. David Shen from Microsoft is a DFS master and has helped me with various errors. And the power of Ned Pyle can even be summoned.
Last writer wins,
- Edited by mbrownnyc Thursday, September 24, 2009 1:42 PM I re-wrote a few of the points for clarity.
Wednesday, September 23, 2009 3:37 PM
I have never done a manual copy first. The great thing about DFS Rep that FRS doesn't do is the way it replicates. The first replication is complete, and then it only replicates the changes. This takes a huge load off the bandwidth, especially when you can designate bandwidth and schedule times for off-peak load periods. I agree with mbrownnyc: let DFS Rep manage the whole kit and caboodle.
Wednesday, September 23, 2009 4:09 PM
"I have never done a manual copy first. The great thing about DFS Rep that FRS doesn't do is the way it replicates. The first replication is complete, and then it only replicates the changes. This takes a huge load off the bandwidth, especially when you can designate bandwidth and schedule times for off-peak load periods. I agree with mbrownnyc, let DFS Rep manage the whole kit and caboodle."
DFS-R uses RDC (remote differential compression); see Wikipedia and MSDN. I'm actually curious if DFS-R/RDC handles "de-duplication" to reduce traffic even more, but I think it just uses RDC to transfer changes, not reducing bandwidth usage even further by utilizing an endpoint-managed, file-portion checksum DB or whatever would allow for de-duplication.