Two-Node File Server Failover Cluster

  • Question

  • I have one file server running Windows Server 2008 R2. I want to implement a failover cluster for zero downtime, so that if server 1 goes offline, users are redirected to server 2. Is this possible with two servers, without SAN or NAS storage? Or is there any other alternative that fulfills my requirement? Thank you. 
    Tuesday, December 8, 2015 5:22 AM

Answers

  • A file server on a failover cluster requires shared storage. For file services you can also use Distributed File System (DFS) and replicate folders between two file servers with local storage. It does not provide the same availability protection as a failover cluster, but it does not require a SAN or NAS. An alternative is to use third-party clustering solutions with built-in replication support. 

    Gleb.

    Hey! The third-party tool might be StarWind Virtual SAN Free. Basically, it installs on dedicated general-purpose servers and converts their internal disks into shared storage, which becomes the Cluster Shared Volumes (CSVs) for the Microsoft cluster. 

    https://www.starwindsoftware.com/starwind-virtual-san-free
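    As a rough sketch only (all server names, addresses, and disk names here are hypothetical): once a replicated virtual disk is visible to both nodes as shared storage, the two-node cluster and the clustered file server role can be built with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2:

    ```powershell
    # Run once, from either node, after both nodes can see the shared disk.
    Import-Module FailoverClusters

    # Validate the hardware/configuration, then create the two-node cluster.
    Test-Cluster -Node FS1, FS2
    New-Cluster -Name FSCLUSTER -Node FS1, FS2 -StaticAddress 192.168.1.50

    # Bring the shared disk under cluster control, then add the clustered
    # file server role with its own client access point (name + IP).
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterFileServerRole -Name FILES -Storage "Cluster Disk 1" `
        -StaticAddress 192.168.1.51
    ```

    Clients would then map drives against \\FILES rather than either physical node, and the cluster moves that name and IP to the surviving node on failover.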


    • Edited by AnatolySV Friday, December 11, 2015 4:47 PM
    • Marked as answer by Shil Patel Monday, December 14, 2015 8:51 AM
    Thursday, December 10, 2015 7:32 PM
  • A file server on a failover cluster requires shared storage. For file services you can also use Distributed File System (DFS) and replicate folders between two file servers with local storage. It does not provide the same availability protection as a failover cluster, but it does not require a SAN or NAS. An alternative is to use third-party clustering solutions with built-in replication support. 

    Gleb.

    • Proposed as answer by Adrian Clenshaw Tuesday, December 8, 2015 10:44 AM
    • Marked as answer by Shil Patel Monday, December 14, 2015 8:51 AM
    Tuesday, December 8, 2015 7:40 AM
  • As Gleb says, DFS can provide a different level of HA, but it comes with some issues that you would not find in a failover cluster.  DFS has a problem with simultaneous access.  It can allow the same file to be accessed in both (all) nodes that are part of the DFS set.  If two people are updating the same file, the changes from the person who closes the file last are the changes that are saved.  The changes made by the other person are lost.  There are ways to 'recommend' accessing a primary location (meaning one particular node), but it is not guaranteed.  Additionally, the file is not updated until the person updating it closes the file.  So if a second person accesses the file while another person is updating it, the second person does not see the changes made to the file.  This can be particularly problematic if the updating person does not close the file for extended periods of time.

    DFS can be useful, but you need to know and understand its limitations and plan accordingly.  Failover clustering provides the best solution, but it does require some form of shared storage.
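    For reference, here is a minimal sketch of the DFS namespace plus replication setup described above, using the DFSN/DFSR PowerShell cmdlets (these shipped with Windows Server 2012; on 2008 R2 the same steps are performed with the dfsutil and dfsradmin command-line tools). Domain, server, and path names are hypothetical:

    ```powershell
    # Domain-based namespace \\contoso.local\Public with a target on each server.
    New-DfsnRoot -Path \\contoso.local\Public -TargetPath \\FS1\Public -Type DomainV2
    New-DfsnRootTarget -Path \\contoso.local\Public -TargetPath \\FS2\Public

    # Replicate the folder contents between the two servers.
    New-DfsReplicationGroup -GroupName "PublicRG"
    New-DfsReplicatedFolder -GroupName "PublicRG" -FolderName "Public"
    Add-DfsrMember -GroupName "PublicRG" -ComputerName FS1, FS2
    Add-DfsrConnection -GroupName "PublicRG" `
        -SourceComputerName FS1 -DestinationComputerName FS2

    # Point each member at its local copy; FS1 seeds the initial data.
    Set-DfsrMembership -GroupName "PublicRG" -FolderName "Public" `
        -ComputerName FS1 -ContentPath D:\Public -PrimaryMember $true
    Set-DfsrMembership -GroupName "PublicRG" -FolderName "Public" `
        -ComputerName FS2 -ContentPath D:\Public
    ```

    Note that DFSR replicates a file only after it is closed, which is exactly the simultaneous-access limitation described above.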


    . : | : . : | : . tim

    • Marked as answer by Shil Patel Monday, December 14, 2015 8:51 AM
    Tuesday, December 8, 2015 2:26 PM
  • As Gleb says, DFS can provide a different level of HA, but it comes with some issues that you would not find in a failover cluster.  DFS has a problem with simultaneous access.  It can allow the same file to be accessed in both (all) nodes that are part of the DFS set.  If two people are updating the same file, the changes from the person who closes the file last are the changes that are saved.  The changes made by the other person are lost.  There are ways to 'recommend' accessing a primary location (meaning one particular node), but it is not guaranteed.  Additionally, the file is not updated until the person updating it closes the file.  So if a second person accesses the file while another person is updating it, the second person does not see the changes made to the file.  This can be particularly problematic if the updating person does not close the file for extended periods of time.

    DFS can be useful, but you need to know and understand its limitations and plan accordingly.  Failover clustering provides the best solution, but it does require some form of shared storage.

    +100500

    Tiny remark: the lack of concurrent access and the inability to work with open files can be mitigated by installing third-party software such as PeerLock, for example. It does work, but a) it's not free (big surprise; Microsoft had its own update for DFS in the works to solve these issues, and it's a pity they never came out with it...) and b) it still has issues with performance and supported scenarios (Hyper-V cannot work even with PeerLock installed, for example). I'd suggest either using a virtualized file server with Hyper-V Replica for DR, or building a full-blown HA file server for HA (surprise!). 


    Cheers,

    Anton Kolomyeytsev [MVP]

    StarWind Software Chief Architect


    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

    • Marked as answer by Shil Patel Monday, December 14, 2015 8:57 AM
    Thursday, December 10, 2015 2:50 PM

All replies

  • Thanks, Gleb. I want to implement a failover cluster only.

    Saturday, December 12, 2015 6:57 AM
  • Thanks 

    Saturday, December 12, 2015 6:59 AM
  • Thanks, Tim, for the brief information about DFS.

    Saturday, December 12, 2015 6:59 AM
  • Hi Gleb,

              If I use DFS, how do users get redirected to the secondary server if the primary server goes offline? I have mapped network drives on the users' computers. 

    Monday, December 14, 2015 8:47 AM
  • Redirecting to another share is part of the function of DFS - hence the name Distributed File System.  The end user uses a generic name to access the share and DFS finds the 'nearest' file share to satisfy the request.
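    Concretely, that means mapping the users' drives to the namespace path rather than to a physical server, so DFS can hand out whichever target is available (domain and share names here are hypothetical):

    ```powershell
    # Map to the domain-based DFS namespace, not to a specific server...
    net use Z: \\contoso.local\Public

    # ...instead of pinning the drive to one machine:
    # net use Z: \\FS1\Public
    ```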

    . : | : . : | : . tim

    Monday, December 14, 2015 5:15 PM
  • Hi Tim,

             Both servers are in the same building, connected to the same switch. If the primary server goes down and users are redirected to the second server, how much downtime is likely on the user side if they are working on Word or Excel documents? There are 70 users in total, and the shared data is about 700 GB. 

    Thank you.

    Tuesday, December 15, 2015 5:17 AM
  • Connected users will be disconnected and will have to reconnect; there is no automatic reconnect. A user working on a Word/Excel document would lose all updates made to the document. Word and Excel have recovery mechanisms built in, but DFS does not replicate a file until it is closed. So all changes made on the server that goes down are lost until that server comes back up. At that point, the user could use Word's/Excel's built-in recovery mechanism to recover files on that server.

    . : | : . : | : . tim

    Tuesday, December 15, 2015 2:32 PM
  • Hi Tim,

         Then how does DFS help me achieve a highly available file server? Suppose the primary server is down or frozen and I want to switch users to the secondary server. Which steps are required for that?

    Thank You 

    Thursday, December 17, 2015 5:34 AM
  • That's why we state that you have to know the limitations of DFS to determine whether it is a solution that fits your requirements. DFS has satisfied the business needs of many organizations for many years for some sorts of file access. Other organizations require the capabilities provided by failover clustering for their file-sharing needs. There are trade-offs.

    If the 'primary' DFS server is down and an end user tries to make a connection, that end user is automatically connected to another DFS server. By default, the user will automatically be connected to the 'closest' server, so if you have two servers in a configuration, either one can serve the file requests. If the server 'freezes', it depends on what you mean by 'freeze': if it is no longer accepting new requests, new connections will automatically go to another server.

    If a user is connected and the server goes down, the user will have to reconnect. The user does not have to know the name of either server, as the service is presented through a virtual name in the UNC path.


    . : | : . : | : . tim

    Friday, December 18, 2015 5:08 PM