Monday, May 07, 2012 10:08 PM
Hey chaps, I've got a SAN and two physical servers: one will physically run SBS 2011, and the other will run as a Hyper-V host, running the PA-O virtualized along with another copy of Server 2008. What's the best way to make sure that if one physical machine goes down, the file shares are still accessible from the SAN? Do I need to cluster, or is this something DFS is for?
This is a single-site solution I'm looking at!
Thanks in advance!!!
- Edited by MattyM00 Monday, May 07, 2012 10:11 PM clarification
Tuesday, May 08, 2012 1:56 AM
I read the subject and was about to go down the clustering route, but I see you're using SBS and SBS doesn't do clustering. That leaves DFS, which it does do.
That said, if you're not using DFS already then you're going to be up for a namespace changeover at some point in order to make this happen.
In and of itself, this isn't a big deal, particularly if you use Group Policy or a logon script to handle the drive mappings. But it's worth mentioning because it requires a bit of forethought and planning, particularly around share consolidation, rather than just knocking up a cluster in half an hour one night out of hours.
Tuesday, May 08, 2012 9:37 AM
Well, this is going to be a whole new network set-up. Our business has shrunk somewhat over the past few years; we even suffered so much as to be taken over by another company which was smaller than us to begin with! So I'm downsizing from a full Server 2003 domain to a more affordable SBS network. We're not taking the old domain name with us; it's a complete start from scratch, so I'm thinking long term and trying to get things right from the off, as opposed to carrying forward old business names, links and some of the naffer practices I've inherited.
Planning for DFS then. I'm a bit of a newb, so some lab work is required upfront too. What kind of scenario am I looking at regarding resources? If I'm using replication technologies to support failover, do I need to make sure each server has its own space on the SAN, covering its own storage plus enough for the other server to replicate its shares to? Then I'd end up with duplicated data which each server can fall back on to cover the other. Or does perhaps one server manage the general files while another keeps copies ready to go?
As I say, bit of a newb. I've tried to find some real-world scenarios online, but not much luck in that regard, just a lot of theory and such. Anything to steer me in the right direction, or advice generally, much appreciated :)
Tuesday, May 08, 2012 10:32 AM
Jumping straight into the questions.
Yes, you will end up needing matching amounts of space when it comes to implementing DFS-R, as it really doesn't have anything in common with clustering. That said, the secondary storage doesn't have to be on the SAN. NAS storage is another potentially good option, depending on how much file traffic is generated in your environment.
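To make the "matching amounts of space" point concrete, here's a rough back-of-the-envelope sketch. The 20% growth headroom is just an example figure, and the staging estimate follows the common DFS-R guidance of sizing the staging quota to at least the sum of the largest files in the replicated folder (the usual rule of thumb is the 32 largest); treat the numbers as illustrative, not as sizing advice for your environment.

```python
# Rough replica sizing sketch for DFS-R (illustrative only).
# Assumptions: 20% growth headroom, and a staging area sized to the
# 32 largest files per common DFS-R staging-quota guidance.

def replica_storage_estimate(file_sizes_gb, growth_factor=1.2):
    """Return (data, staging, total) in GB needed on the replica server."""
    data = sum(file_sizes_gb) * growth_factor                 # replicated data + headroom
    staging = sum(sorted(file_sizes_gb, reverse=True)[:32])   # 32 largest files
    return data, staging, data + staging

data, staging, total = replica_storage_estimate([10, 5, 2, 1])
print(round(total, 1))  # → 39.6
```

So for 18 GB of share data you'd budget roughly 40 GB on the replica side once staging is accounted for, whether that lands on the SAN, a NAS, or cheap local disks.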
You could use cheap local SAS (or even SATA, depending on how much of a pinch you're in financially) drives to host the secondary copy and configure that second replica to "always come last", meaning it wouldn't actually be used unless your primary, SAN-connected host was offline or otherwise unreachable. You can read here about folder target prioritisation.
Of course, in distributing the load, you might find that the SAS/SATA solution actually performs admirably, in which case you could stick with using "random order" on the replica referral itself (read here). You'd be in the best position to have a feel for this, to be honest. There are more tuning-related links here.
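The two referral settings above amount to different orderings of the same target list. This toy sketch (not the actual DFS algorithm, and the server names are made up) shows the idea: targets flagged "last among all targets" sink to the end of the referral list, while the rest come back in random order, so a client only falls through to the SATA box if the SAN-backed target is unreachable.

```python
import random

# Illustration of DFS referral ordering (not the real implementation):
# targets flagged "last among all" are pinned to the end; everything
# else is returned in random order, as with the "random order" setting.
# Server names below are hypothetical.

def order_referrals(targets, rng=random):
    """targets: list of (server_name, always_last) tuples."""
    normal = [name for name, always_last in targets if not always_last]
    pinned_last = [name for name, always_last in targets if always_last]
    rng.shuffle(normal)            # "random order" among equal-priority targets
    return normal + pinned_last    # "always last" targets trail the list

refs = order_referrals([("SAN-SRV", False), ("SATA-SRV", True)])
print(refs[-1])  # → SATA-SRV
```

A client works through the referral list top to bottom, which is why pinning the cheap replica last keeps it idle until it's actually needed.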
From an actual replication perspective, the good news with DFS-R is that only the changes to a file are replicated. Gone are the days of full file replication every time a change is made, so it really is quite efficient.
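The change-only behaviour comes from DFS-R's use of Remote Differential Compression. Here's a toy sketch of the principle: split a file into blocks, hash each block, and only "transfer" the blocks whose hashes differ. Real RDC uses variable-length chunking and a more involved protocol, so this is a simplification of the idea, not the actual algorithm.

```python
import hashlib

# Toy sketch of change-based replication in the spirit of RDC:
# fixed-size blocks are hashed, and only changed blocks would be sent.
# (Real RDC uses variable-length chunks; this is a simplification.)

BLOCK = 4  # tiny block size for demonstration

def block_hashes(data):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old, new):
    """Indices of blocks in `new` that differ from (or extend past) `old`."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

print(changed_blocks(b"aaaabbbbcccc", b"aaaaXbbbcccc"))  # → [1]
```

A one-byte edit in the middle block means only that block goes over the wire, which is why DFS-R copes so much better with large, frequently-edited files than the old full-file replication did.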
The main thing you want to plan up front is your transition from a server-based UNC to a "global" domain-based DFS namespace, as that's the change that will have potential ramifications for your users. If they use UNCs, for example, in spreadsheets, Access databases and the like, they'll need a heads-up. The same goes for any applications you're looking after.
This is a big topic in its own right, so maybe the best thing to do is give you a few articles to read while you're formulating ideas.
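For the UNC changeover itself, the mechanical part is usually just a prefix swap on any stored paths you can get at (shortcuts, mapped-drive scripts, paths scraped out of spreadsheets). A hypothetical helper might look like this; the server name and namespace below are made up for illustration, so substitute your own.

```python
# Hypothetical namespace-changeover helper: rewrite old server-based
# UNCs to the new domain-based DFS namespace. The names FILESRV01 and
# \\corp.local\Files are invented examples, not from this thread.

OLD_ROOT = r"\\FILESRV01"          # old server-based root (assumed name)
NEW_ROOT = r"\\corp.local\Files"   # new domain-based namespace (assumed)

def rewrite_unc(path):
    """Swap the old server prefix for the DFS namespace; leave other paths alone."""
    if path.lower().startswith(OLD_ROOT.lower()):
        return NEW_ROOT + path[len(OLD_ROOT):]
    return path

print(rewrite_unc(r"\\FILESRV01\Accounts\2012\budget.xlsx"))
```

The harder part is the one the paragraph above flags: finding every place a hard-coded UNC is hiding before cutover day, which is a communication and auditing exercise rather than a scripting one.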
Below is some reading material on DFS-R as it pertains to Server 2008 R2. I'm not sure what may be different/limited in SBS 2011, but it should be similar:
- What's new in DFS in 2008 R2
- DFS management (this is the one you want to read as a planning exercise)
Tuesday, May 08, 2012 12:56 PM
Cool, I haven't had time to read through the links properly yet, but a cursory glance suggests it's the kind of thing I'm after!
I think having the data duplicated twice on the same SAN may be a bit of a waste of our best-performing storage, but when we upgrade to SBS we'll have some old 32-bit servers with, between them, a fair few hard disks; one particular server has about 8 drive bays. These servers have Windows 2003 licensed for them. Would it be possible to use one of these servers as the DFS replication/backup target quite easily then?
Also, as it would be a case of getting the new network up and running before destroying the old and recycling the old hardware for this purpose, can I start my network with DFS using just the two new servers and then introduce the replication destination later?
Thanks for the reading for the commute home :D
- Edited by MattyM00 Tuesday, May 08, 2012 12:57 PM spelling
Tuesday, May 08, 2012 2:47 PM
Yes, you can actually start a DFS implementation with just one server. Obviously there will be no redundant folder targets or replication groups, but as you're looking to do this in a staggered fashion, there are no technical issues with that.
Technically, you could use Server 2003 so long as when you create the new DFS namespace you select "2003 mode". That said, I'd discourage this because you do lose some neat functionality. But necessity outweighs nice-to-haves, so ultimately it's your call.
Tuesday, May 08, 2012 2:50 PM
ooh ooh ooh - please sir - what do I miss out on, DFS-ing from Server 2008 R2 to Server 2003 R2 in particular???
Also, can I replicate just some shares, the main day-to-day ones in constant use, as opposed to all the files on a particular server?
Thanks for your assistance Lain :D
- Edited by MattyM00 Tuesday, May 08, 2012 3:07 PM
Tuesday, May 08, 2012 3:13 PM
An awful lot. Rather than repeat the list verbatim, have a read of this, which is the list of improvements introduced in Server 2008, to which you can then add the above list from 2008 R2.
In short, though, there are significant performance and scalability improvements - and I do mean significant (the change-based replication is just one facet of this) - as well as fixes to old-school problem scenarios (such as when a non-functioning or offline server comes back online after a long-standing outage).
From a feature perspective, Access Based Enumeration is one nifty feature for small to medium environments. The splitting of folder targets from namespaces offers a higher degree of configuration as to where a client will be redirected to in different scenarios. This is the more significant of the two from an operational perspective.
- Marked As Answer by Shaon Shan, Microsoft Contingent Staff, Moderator Wednesday, May 09, 2012 5:34 AM
Tuesday, May 08, 2012 3:19 PM
Cheers Lain, some good bedtime reading there to keep me going. Take care and thank you !!!!