Getting started with Windows Volume Replication?

Question
-
Hi.
I'm well aware that the Technical Preview of "Windows Server 10" has just been released but I was wondering if there are any resources for how to get started with Windows Volume Replication (one of the more exciting features IMHO).
I just need a few pointers to get going in the right direction and then I should be able to get this up and running.
Do I need a cluster - even for asynchronous replication? Is there a GUI for this or do I have to rely on PowerShell? What PowerShell cmdlets are there for setting this up? Can I replicate between servers in different domains/workgroups? And so on...
If any of you can help me (and others) to get started I'd very much appreciate it.
Thanks.
- Edited by Martin Edelius (Atea) Friday, October 3, 2014 1:37 PM Spelling error
Friday, October 3, 2014 12:50 PM
Answers
-
There are two main scenarios for Storage Replica in the Windows Server Technical Preview:
- Using Storage Replica to create Server to Server replication using Windows PowerShell
- Using Storage Replica to create a Hyper-V Stretch Cluster using Failover Cluster Manager
Please find below the details for each scenario.
- Ned Pyle, MSFT (PM for Storage Replica)
---------------------------------------------------------------------------
Using Storage Replica to create Server to Server replication using Windows PowerShell
1. Prerequisites for the Server to Server scenario
1a. Windows Server Active Directory domain (does not need to run Windows Server Technical Preview).
1b. Two servers (one for each site) with Windows Server Technical Preview installed.
1c. Two disks on each server using local storage (DAS), Fibre Channel SAN, or iSCSI SAN.
1d. At least one 10GbE network connection on each server.
1e. A network between the two sets of servers with at least 8Gbps throughput and average of ≤5ms round trip latency when sending non-fragmented 1472-byte ICMP packets for at least 5 minutes.
Note 1: Sample command to measure latency
ping node-c.contoso.com -4 -f -l 1472 -n 300
Ping statistics for 10.10.0.4:
Packets: Sent = 300, Received = 300, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 9ms, Average = 2ms
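If you prefer PowerShell, here is a rough equivalent of the ping sample (note: this Test-Connection sketch cannot set the don't-fragment flag, so the classic ping command above remains the authoritative test):
# Send 300 pings with a 1472-byte buffer and summarize the round-trip times
Test-Connection node-c.contoso.com -Count 300 -BufferSize 1472 |
    Measure-Object -Property ResponseTime -Average -Maximum -Minimum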
2. Requirements for the data and log disks
2a. Four disks are required: a source data disk, a source log disk, a destination data disk and a destination log disk.
2b. The data disks must be formatted as GPT, not MBR.
2c. The data disks must be of identical size.
2d. The log disks should be of identical size.
2e. The log disk should use SSD storage using mirrored Storage Spaces, RAID 1, RAID 10 or similar resiliency.
2f. The data disks can be on HDD, SSD or tiered, using mirror Storage Spaces, parity Storage Spaces, RAID 1, RAID 10, RAID 5, RAID 50 or equivalent configurations.
2g. The data disks should be no larger than 10TB. We recommend testing with less than 1TB to reduce initial replication time.
2h. The log volumes must be at least 10% of the size of the data volumes or at least 2GB, whichever is larger.
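As a worked example of rule 2h: for a 1TB data volume, 10% is 100GB, which is larger than the 2GB floor, so the log volume must be at least 100GB. A minimal PowerShell sketch of that calculation (the D: drive letter is only an example):
# Minimum log size = the larger of 10% of the data volume and 2GB
$dataSize = (Get-Volume -DriveLetter D).Size
$minLogSize = [Math]::Max($dataSize * 0.1, 2GB)
"Minimum log size: {0:N1} GB" -f ($minLogSize / 1GB)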
3. Pre-Installation steps (to be performed on both servers)
3a. Install the following features and reboot: File Server and Windows Volume Replication
3b. Enable the inbound firewall rule: File and Printer Sharing
3c. Provision the storage as described in item 2.
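A scripted sketch of steps 3a and 3b (the feature names here, especially WVR for Windows Volume Replication, are assumptions for the Technical Preview; verify them with Get-WindowsFeature on your build):
# Assumed feature names -- verify with: Get-WindowsFeature
Install-WindowsFeature -Name FS-FileServer, WVR -IncludeManagementTools -Restart
# Allow the inbound File and Printer Sharing rules
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"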
4. Configuration Steps
4a. On the source node, use the New-SRPartnership cmdlet to create replication between the two servers. For instance:
New-SRPartnership -SourceComputerName sr-srv05 -SourceRGName rg01 -SourceVolumeName d: -SourceLogVolumeName e: -DestinationComputerName sr-srv06 -DestinationRGName rg02 -DestinationVolumeName d: -DestinationLogVolumeName e: -LogSizeInBytes 8gb
Note 2: You can verify that the replication is complete by checking events in the WVR Admin event log.
On the source server, check for events 5002, 2200, and 5015.
On the destination server, check for events 5015, 5001, and 5009.
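A sketch for pulling those events with PowerShell (the exact log name in the Technical Preview is an assumption; confirm it with Get-WinEvent -ListLog *WVR*):
# Assumed log name -- confirm with: Get-WinEvent -ListLog *WVR*
Get-WinEvent -FilterHashtable @{ LogName = "Microsoft-Windows-WVR/Admin"; Id = 5002, 2200, 5015 }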
4b. You can reverse source and destination by using the Set-SRPartnership cmdlet on the current destination server in order to make it the new source server. For instance:
Set-SRPartnership -NewSourceComputerName sr-srv06 -SourceRGName rg02 -DestinationComputerName sr-srv05 -DestinationRGName rg01
Note 3: Remoting of SR cmdlets in the Windows Server Technical Preview only works for New-SRPartnership. You must run all other SR cmdlets on the server you are configuring.
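You can also inspect the replication configuration directly. A minimal sketch (the property names are taken from the Get-SRGroup output shown later in this thread):
# Run locally on each node (see Note 3 about remoting)
Get-SRGroup | Select-Object Name, IsPrimary, IsInPartnership, ReplicationMode
Get-SRPartnership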
---------------------------------------------------------------------------
Using Storage Replica to create a Hyper-V Stretch Cluster using Failover Cluster Manager
1. Prerequisites for the Hyper-V Stretch Cluster scenario
1a. Windows Server Active Directory domain (does not need to run Windows Server Technical Preview).
1b. Four servers (two for each site) with Windows Server Technical Preview installed. Each server should be capable of running Hyper-V, have at least 4 cores, and have at least 8GB of RAM. You will need more memory for more virtual machines.
1c. Two sets of asymmetric shared storage (2 nodes see one set, 2 nodes see the other set), using Shared SAS JBODs, Fibre Channel SAN, or iSCSI SAN.
1d. At least one 10GbE network connection on each server.
1e. A network between the two sets of servers with at least 8Gbps throughput and average of ≤5ms round trip latency when sending non-fragmented 1472-byte ICMP packets for at least 5 minutes.
Note 1: Sample command to measure latency:
ping node-c.contoso.com -4 -f -l 1472 -n 300
Ping statistics for 10.10.0.4:
Packets: Sent = 300, Received = 300, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 9ms, Average = 2ms
2. Requirements for the data and log disks
2a. Four disks are required: a source data disk, a source log disk, a destination data disk and a destination log disk.
2b. The data disks must be formatted as GPT, not MBR.
2c. The data disks must be of identical size.
2d. The log disks should be of identical size.
2e. The log disk should use SSD storage using mirrored Storage Spaces, RAID 1, RAID 10 or similar resiliency.
2f. The data disks can be on HDD, SSD or tiered, using mirror Storage Spaces, parity Storage Spaces, RAID 1, RAID 10, RAID 5, RAID 50 or equivalent configurations.
2g. The data disks should be no larger than 10TB. We recommend testing with less than 1TB to reduce initial replication time.
2h. The log volumes must be at least 10% of the size of the data volumes or at least 2GB, whichever is larger.
2i. Add a Volume Label to each volume that identifies its site and purpose, such as “Data 1 Redmond”, to make it easy to identify the disks when they become CSVs.
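A one-line sketch for step 2i (the drive letter and label are examples):
# Label the volume so it stays recognizable once it becomes a CSV
Set-Volume -DriveLetter D -NewFileSystemLabel "Data 1 Redmond"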
3. Pre-Installation steps (to be performed on all nodes)
3a. Install the following features and reboot: Failover Clustering, Multipath IO, Hyper-V, and Windows Volume Replication
3b. Enable the inbound firewall rule: File and Printer Sharing
3c. Provision the storage as described in item 2 for each of the two asymmetric storage sets.
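A scripted sketch of steps 3a and 3b for the cluster nodes (the WVR feature name is again an assumption; verify with Get-WindowsFeature):
# Assumed WVR feature name -- verify with: Get-WindowsFeature
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO, Hyper-V, WVR -IncludeManagementTools -Restart
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"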
4. Installation Steps (to be performed on only one of the nodes)
4a. Using Failover Cluster Manager (FCM), configure a cluster of the four nodes.
4b. Configure the cluster quorum to use a file share witness or Azure cloud witness (do not use a disk witness). A PowerShell sketch of steps 4a and 4b follows Note 2 below.
4c. In the Disks pane, make the source data disk a CSV or a member of a role (it cannot be in Available Storage).
4d. Ensure all storage is owned by the node where you are running FCM.
4e. Right-click the source disk and click Replication, Enable. Follow the wizard to select the source log disk, destination data disk, and destination log disk. Choose an unseeded disk.
4f. At the end of the wizard, replication is configured and replication starts.
4g. You can change the source of replication by moving the storage using FCM to a node in the other site.
Note 2: You can verify that the replication is complete by checking events in the WVR Admin event log.
On the source server, check for events 5002, 2200, and 5015.
On the destination server, check for events 5015, 5001, and 5009.
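Here is the PowerShell sketch of steps 4a and 4b mentioned above (cluster, node, and share names are placeholders):
# Create the four-node cluster without claiming any disks
New-Cluster -Name SR-CLUS01 -Node sr-srv01, sr-srv02, sr-srv03, sr-srv04 -NoStorage
# Use a file share witness instead of a disk witness
Set-ClusterQuorum -FileShareWitness \\witness-srv\sr-witness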
Note 3: Removal of replication via FCM does not work in the Technical Preview. Use the following Windows PowerShell commands instead.
On the node that is the current source of replication run:
Get-SRPartnership | Remove-SRPartnership
Get-SRGroup | % { Remove-SRGroup -Name $_.name }
On the node that is the current destination of replication run:
Get-SRGroup | % { Remove-SRGroup -Name $_.name }
Ned Pyle [MSFT] | Sr. Program Manager | Windows Server
- Proposed as answer by NedPyle [MSFT]Microsoft employee Saturday, October 4, 2014 12:24 AM
- Marked as answer by Martin Edelius (Atea) Saturday, October 4, 2014 5:44 AM
Saturday, October 4, 2014 12:21 AM -
Just wanted to point out to anyone looking to this thread for guidance that Ned Pyle has published an official guide for setting up Storage Replica (the official name for this feature).
Find it here: http://blogs.technet.com/b/filecab/archive/2014/10/07/storage-replica-guide-released-for-windows-server-technical-preview.aspx
- Edited by Martin Edelius (Atea) Friday, October 10, 2014 9:07 AM
- Marked as answer by Martin Edelius (Atea) Friday, October 10, 2014 9:07 AM
Friday, October 10, 2014 9:07 AM
All replies
-
Nothing to see at the moment.
I'm trying to get an overview in David's post: http://social.technet.microsoft.com/Forums/en-US/187a436e-6c3e-4c15-956e-fa0bdcadb6d6/problem-with-windows-volume-replication-in-failover-cluster-scenario?forum=WinServerPreview
So come over and take a look.
Friday, October 3, 2014 6:45 PM -
Thanks...ran into that removal issue in FCM you mentioned.
I was able to create a 2 node File Server cluster (before this excellent guidance) but now when I go back to recreate it I am getting the following error at the end of the creation wizard. Any ideas?
David A. Bermingham, MVP, Senior Technical Evangelist, SIOS Technology Corp
Saturday, October 4, 2014 12:33 AM -
It appears that there are no possible owners according to the cluster, which suggests that the cluster or FCM never updated after the cleanup. Do you see the same behavior after restarting FCM or rebooting the node?
Ned Pyle [MSFT] | Sr. Program Manager for Storage Replica, DFS Replication, Scale-out File Server, SMB 3, other stuff
Saturday, October 4, 2014 1:19 AM -
Thanks again for the pointers. I just published my blog on how to configure Storage Replica for a 2-node file server cluster, using nothing but the GUI.
It's not too bad once you work out all the kinks.
I'm trying to figure out how this would work in the Azure Cloud. Without Physical Disk Resources this is really a non-starter for Azure Cloud deployments. Am I correct?
David A. Bermingham, MVP, Senior Technical Evangelist, SIOS Technology Corp
- Edited by David BerminghamMVP Saturday, October 4, 2014 2:28 AM
Saturday, October 4, 2014 1:51 AM -
I just deleted all the cluster storage, disconnected all the iSCSI targets, deleted the iSCSI virtual disks and provisioned new storage. The second time around I did not have the same problem.
David A. Bermingham, MVP, Senior Technical Evangelist, SIOS Technology Corp
Saturday, October 4, 2014 2:26 AM -
Excellent Ned! Thank you very much.
I'll see if I can get this up and running now.
Saturday, October 4, 2014 5:45 AM -
Good work. Thanks a lot Ned.
It's all up and running now.
Right before I go on vacation. ;(
The network switches are glowing with all the replication tasks now. ;)
Saturday, October 4, 2014 3:48 PM -
Nice work on the pointers, Ned, thanks! I got both scenarios working. With the cluster, the replication status only stays at Unknown, and I had one blue screen when reversing the replication in the 1-on-1 scenario.
Now, to my understanding it's "active-passive": I can use the source disk but can't use the destination disk until I break the replication. Is that correct, or am I missing something?
Saturday, October 4, 2014 7:43 PM -
There are two main scenarios for Storage Replica in the Windows Server Technical Preview:
- Using Storage Replica to create Server to Server replication using Windows PowerShell
- Using Storage Replica to create a Hyper-V Stretch Cluster using Failover Cluster Manager[ ... ]
Thank you for wrapping these things together! A few questions so far, if you don't mind :)
Q1. What types of DAS are really supported? Can we use a pair of physical servers with SATA spindles or PCIe flash to build a 2-node Hyper-V cluster with no physical shared storage (no SAS JBODs) using the first listed scenario? What about a simple 2-node Scale-Out File Server cluster (same config, SATA all around and nothing physically shared)?
Q2. Scalability? Can there be more than 2 servers in the first scenario? More than 4 with the second?
Thanks :)
P.S. Am I correct that what we have here is a kind of "DFS on steroids" rather than a further evolution of Clustered Storage Spaces?
Sunday, October 5, 2014 1:14 AM -
After a weekend of testing, breaking and building stuff, a few remarks (I know it's a preview, but I love this feature):
1) Is it going to be possible to "hide" the log disks after configuring them, so there is no mix-up or accidentally putting files on them thinking they will replicate?
2) Is it going to be possible to see the destination disk in the 1-on-1 scenario as "read-only" on the destination server?
3) Any plans for a three-leg cluster option?
4) Any plans for creating 2 or more replicas from the same replication group?
5) It's a long stretch, but any insights into possible future "merge replication"?
Thanks!
Sunday, October 5, 2014 4:20 PM -
Hi,
The feature looks great but needs a lot of improvement.
When building this in the GUI, it is impossible to select the destination volume.
Second: when creating this with PowerShell, it worked, but my source volume is gone; it looks broken and beyond any repair.
But it can be worse: I deleted the two WVR roles in the FCM GUI. This worked, but now the disks are gone and I can't get them back into the cluster.
Get-SRGroup still shows the deleted replication. No problem, I just need to delete this in PowerShell with Remove-SRGroup, this time with -Force.
It looks like a chicken-and-egg story. Somewhere it is stuck, and no reboot or storage removal will help.
I could send you more details like events and configuration, just let me know; I will keep this cluster config.
So, any good tips to undo this? It seems the GUI is not talking to the system, but it deleted just the info that PowerShell needs to fix the configuration.
Logs below:
Remove-SRPartnership : A general error occurred that is not covered by a more specific error code.
At line:1 char:1
+ Remove-SRPartnership -Force -DestinationComputerName
PS C:\Users\Administrator.MVP> Get-SRGroup | % { Remove-SRGroup -Name $_.name }
Remove-SRGroup : Unable to delete replication group "rg01", detail reason: "Replication group "rg01" still has partners, please remove partners first."
Get-SRGroup
ComputerName : Windows10
Description :
Id : cfcfba58-5388-4ebe-9113-be5c37eed31c
IsAutoFailover : True
IsCascade : False
IsCluster : True
IsInPartnership : True
IsPrimary :
IsSuspended :
IsWriteConsistency : False
LogSizeInByte : 8589934592
LogVolume : C:\ClusterStorage\Volume2
Name : rg01
NumOfReplicas : 1
Partitions : {674a4697-4890-4642-b76d-fb61c798d7f8}
PSComputerName :
ReplicationMode :

ComputerName : Windows10
Description :
Id : 264b27f4-9f0c-40d6-8bca-82a9fd3817e0
IsAutoFailover : True
IsCascade : False
IsCluster : True
IsInPartnership : True
IsPrimary :
IsSuspended :
IsWriteConsistency : False
LogSizeInByte : 8589934592
LogVolume : \\?\Volume{089d016c-6de7-4fad-9400-0871efe4126b}\
Name : rg02
NumOfReplicas : 1
Partitions : {39161b09-e7fc-446f-a27e-276640bc30f2}
PSComputerName :
ReplicationMode :
Greetings, Robert Smit Follow me @clustermvp http://robertsmit.wordpress.com/ “Please click "Vote As Helpful" if it is helpful for you and Proposed As Answer” Please remember to click “Mark as Answer” on the post that helps you
Monday, October 6, 2014 5:20 PM -
Marco,
Thanks for the suggestions. We have forwarded these to the Product Group. Unfortunately, we cannot comment on any future plans or features.
John Marlin
Microsoft Server Beta Team
- Proposed as answer by John Marlin [MSFT]Microsoft employee Monday, October 6, 2014 8:56 PM
- Unproposed as answer by Martin Edelius (Atea) Tuesday, October 7, 2014 6:41 AM
Monday, October 6, 2014 7:31 PM -
Hi John,
Thanks for the reply and no problem that you cannot comment.
Great work on this new feature, and I will be following the progress :-)
Regards,
Monday, October 6, 2014 8:27 PM -
8 Gbps is required even when doing just Storage Replica?
Wednesday, October 8, 2014 1:03 PM
-
If interested, I put together a guide using my labbuildr automated lab-building environment to create a 2-node Scale-Out File Server consuming LUNs from two Windows iSCSI targets, all vNext based:
HyperV_Guys Guide to Storage replication in vNext
I also did my "survival guide" :-)
Storage Replica survival Guide
However, I am currently having issues detecting the correct states in the Cluster UI vs. SR PowerShell. Without checking the event log for 5009, one might be lost...
Also, upon installation, the CSV shows as redirected...
The volume transfer to the remote side seems to be a full copy and not thin-aware: my thin iSCSI LUN on the remote side filled up its complete space.
When removing a CSV that is not part of the replica but has a lower number (e.g. ClusterStorage\Volume1), upon the next reboot the replicated data volume (and only the data volume) gets moved into the "free" position.
My data CSV was ClusterStorage\Volume3 and became ClusterStorage\Volume1.
- Edited by azurestack_guy Friday, October 10, 2014 3:12 PM
Wednesday, October 8, 2014 6:11 PM -
Robert,
The GUI config works, but it is not always "automatic". In order to select the target disk, it needs to be online. I think I have seen Available Storage automatically move to the secondary server as part of this process, but not always. Instead, I just make sure Available Storage is online on the secondary server before I have to choose the target. I'm pretty sure that process is meant to be automated, but it seems to not work all the time.
David A. Bermingham, MVP, Senior Technical Evangelist, SIOS Technology Corp
Wednesday, October 8, 2014 8:56 PM -
He should outline and mark in bold that it is absolutely important to check for event 5009 :-)
Friday, October 10, 2014 3:13 PM -
I'm testing this with "Server to Server" and everything went okay with the install and initial setup. I also did a couple of tests changing the direction of source and destination, and that also went okay.
But then I wanted to test a real DR scenario where the source goes down suddenly. From that point I was stuck and not able to recover the destination volume. The last messages I see in the event log are:
.. Connection lost to computer hosting the primary replication group
.. WVR secondary entered stand-by state
I could not revive the disk to make it available, and I also cannot remove the SRPartnership or SRGroup.
Are there specific steps to resolve this?
Thanks!
Wednesday, November 19, 2014 9:28 AM