Windows 2012 iSCSI Target configuration

    Question

  • Hi,

    I'm looking at setting up a Windows 2012 R2 iSCSI storage server (target) that will act as storage for a Windows 2012 R2 Hyper-V cluster.

    In each of the Hyper-V hosts, I have 2 dedicated 1Gbps NICs.
    In the Storage Server, I have 6 dedicated 1Gbps NICs.

    The question is: How do I set up MPIO in this case?

    I've looked at setting up 6 connections per host (3 connections from 1 NIC on the host to 3 separate NICs on the storage, and 3 connections from the other NIC on the host to the other 3 NICs on the storage) and using round robin.

    Would this be the optimal way of doing it?

    I also looked at teaming the NICs on the storage server: 2 teams of 3 NICs each, then connecting the Hyper-V host with two connections (one from each NIC to an individual team) using MPIO round robin.
    This reduces the configuration maintenance and I haven't seen any impact on performance, but of course I'm worried about the teaming at the storage server level.

    Any suggestions?

    Thank you,
    Stephane

    Thursday, December 19, 2013 12:28 AM

Answers

  • 1) I was trying to steer you away from iSCSI toward SMB 3.x, but you've told me you know what you're doing. Good :)

    2) What I actually said was not to use LACP with iSCSI, not to avoid MPIO. The link I gave states that iSCSI on top of Windows Server 2012 (R2) NIC teaming *is* a supported config, so maybe there's some confusion... but the fact that you *can* do something does not mean you *should*. And here's why: by default, the iSCSI initiator creates only one iSCSI session with one TCP connection. So if you layer your IP SAN link over a teamed connection, that single iSCSI/TCP session/connection gets bound to one physical NIC, rendering the other ones useless for performance (fault tolerance is OK, but throughput will NOT increase). In an all-Microsoft config (which is what you have) you can work around this by enabling "Multiple Connections per Session" (MC/S), so you get many TCP connections per iSCSI session. That would do the trick from a performance point of view, but... MC/S is a nasty thing, read more here:

    http://scst.sourceforge.net/mc_s.html

    ...so iSCSI + properly configured MPIO is the way to go. Hope this answers your "MPIO vs. LACP" question fully.
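
    Since MPIO is the way to go, here is a minimal PowerShell sketch (assuming the in-box Windows Server 2012 R2 cmdlets; untested, for illustration only) of enabling it for iSCSI on each Hyper-V host:

        # Install the MPIO feature (a reboot may be required).
        Install-WindowsFeature -Name Multipath-IO

        # Tell the Microsoft DSM to automatically claim iSCSI-attached disks.
        Enable-MSDSMAutomaticClaim -BusType iSCSI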

    3) Back to your MPIO config. OK, here we go:

    a) Use two subnets, say 192.168.0.xx (SN1) and 192.168.1.xx (SN2). On each client, one NIC gets an SN1 address and the other an SN2 address. Say C1: 192.168.0.1 and 192.168.1.1, and C2: 192.168.0.2 and 192.168.1.2.

    b) The server would have 3 NICs in SN1 and 3 NICs in SN2. Say:

    192.168.0.100
    192.168.0.101
    192.168.0.102

    and

    192.168.1.100
    192.168.1.101
    192.168.1.102

    c) If you use StarWind, then go to the config and enable iSCSIDiscoverListAllInterfaces; for other targets, make sure they listen for incoming traffic on ALL the added NICs.

    d) On every client, inside the MS iSCSI initiator, add all 6 server IPs on the "Discovery" tab. See:

    http://jaminquimby.com/joomla253/images/stories/Servers/Windows2008/ms-iscsi-initiator-003.jpg

    e) Go to the "Targets" tab; you should see all the targets you've created on your target server (targets, not LUNs).

    f) Click the selected target, click "Connect", and tick these checkboxes, see:

    http://media.community.dell.com/en/dtc/zdl-ueg_poslkapeyy2sqw241974.bmp

    click "Advanced" and in a "Local Adapter" field select "MS iSCSI Initiator", select one of the local IPs as your

    local address, as a destination IP - ones of the 3 server IPs (corresponding subnet of course). See:

    http://paulgrevink.files.wordpress.com/2012/02/iscsi-5.jpg

    click "OK" to connect your target. Then click the same target once again, select "Connect", once again put checkboxes, see:

    http://media.community.dell.com/en/dtc/zdl-ueg_poslkapeyy2sqw241974.bmp

    but this time select the second server IP address from the same subnet.

    g) Repeat for the third IP / adapter.

    h) Repeat, starting from f), for the second subnet. (A scripted version of steps d) to h) is sketched just below.)
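
    For reference, here is a rough PowerShell equivalent of steps d) to h), using the in-box iSCSI initiator cmdlets from Windows Server 2012 R2. The addresses follow the example above; the target IQN is a placeholder you would replace with your own. Treat this as a sketch, not a tested script:

        # Server portal IPs from step b) and this host's local IPs from step a).
        $portalsSN1 = '192.168.0.100','192.168.0.101','192.168.0.102'
        $portalsSN2 = '192.168.1.100','192.168.1.101','192.168.1.102'
        $localSN1   = '192.168.0.1'
        $localSN2   = '192.168.1.1'

        # Step d): register all six server portals for discovery.
        foreach ($ip in $portalsSN1 + $portalsSN2) {
            New-IscsiTargetPortal -TargetPortalAddress $ip
        }

        # Step e): list the discovered targets; the IQN below is hypothetical.
        Get-IscsiTarget
        $iqn = 'iqn.1991-05.com.microsoft:storage-lab-target'

        # Steps f) to h): one session per local NIC / server NIC pair,
        # i.e. three paths per subnet, six MPIO paths in total.
        foreach ($ip in $portalsSN1) {
            Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress $ip `
                -InitiatorPortalAddress $localSN1 -IsMultipathEnabled $true -IsPersistent $true
        }
        foreach ($ip in $portalsSN2) {
            Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress $ip `
                -InitiatorPortalAddress $localSN2 -IsMultipathEnabled $true -IsPersistent $true
        }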

    After that you'll have fully configured MPIO paths. Make sure you select Round Robin as the load-balancing policy, as it's the best performing one. See:

    http://technet.microsoft.com/en-us/library/dd851699.aspx

    http://blogs.msdn.com/b/san/archive/2008/07/27/multipathing-support-in-windows-server-2008.aspx
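
    The same setting can be made from PowerShell; a sketch, assuming the MPIO module that ships with 2012 R2:

        # Set Round Robin as the default load-balancing policy for
        # disks claimed by the Microsoft DSM, then verify it.
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
        Get-MSDSMGlobalDefaultLoadBalancePolicy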

    Good luck :)


    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts, starts with just a pair of boxes and scales-out to infinity.

    • Marked as answer by Stephane-OTG Sunday, December 22, 2013 10:41 PM
    Saturday, December 21, 2013 10:05 PM

All replies

  • 1) With your design, storage is a single point of failure. Not good. If you care about uptime, you need to consider something fault tolerant. See:

    http://technet.microsoft.com/en-us/library/cc938489.aspx

    2) The Microsoft iSCSI target has no server-side cache, so it is very slow (and because it works with VHDX in pass-through mode, it's also a very bad choice for all-flash configs; it will just burn flash cells). If you're fine with your storage being a SPOF (see 1), consider using an SMB share instead of the Microsoft iSCSI target. See:

    http://technet.microsoft.com/en-us/library/dn305893.aspx

    "System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed."

    3) You'd better create an all-symmetric config with 3 redundant physical paths from the Hyper-V hosts to storage. The recommended SMB share (see 2) can do SMB Multichannel just fine. See:

    http://blogs.technet.com/b/josebda/archive/2012/05/13/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0.aspx
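
    If you do go the SMB route, here is a quick sanity check that Multichannel is actually spreading across the NICs (a sketch, assuming the in-box SMB cmdlets on 2012 R2):

        # On the client: active multichannel connections and the NICs they use.
        Get-SmbMultichannelConnection

        # On either end: the interfaces SMB considers for multichannel.
        Get-SmbClientNetworkInterface
        Get-SmbServerNetworkInterface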

    4) Don't use LACP with IP SAN protocols. It's not supported (unless running in a loopback). See:

    http://blogs.technet.com/b/askpfeplat/archive/2013/03/18/is-nic-teaming-in-windows-server-2012-supported-for-iscsi-or-not-supported-for-iscsi-that-is-the-question.aspx

    With SMB Multichannel, you can use NIC teaming.

    Hope this helped!


    StarWind iSCSI SAN & NAS


    • Edited by VR38DETTMVP Friday, December 20, 2013 11:08 PM
    Thursday, December 19, 2013 10:38 AM
  • Thank you for your answer.

    Although these are all very good points, they don't address my question.

    The storage will be used for labs only, so the single point of failure is not a problem.
    We are already using SMB in production (with Multichannel), but in this case I need to use iSCSI for several reasons (VMM management, other storage in iSCSI, ...).

    From a caching point of view, the iSCSI problem will be partly resolved by using tiered storage spaces.
    A few SSDs will be the first point of entry for the data, so performance is not an issue (and it will be used for lab VMs anyway); a sketch of what I mean is below.
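
    For context, a minimal sketch of the tiered storage spaces setup I mean (assuming the 2012 R2 Storage cmdlets; the pool name, tier names, and sizes are hypothetical):

        # Pool all available physical disks into a new storage pool.
        $disks  = Get-PhysicalDisk -CanPool $true
        $subsys = Get-StorageSubSystem -FriendlyName '*Spaces*'
        New-StoragePool -FriendlyName 'LabPool' `
            -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks $disks

        # One SSD tier and one HDD tier inside the pool.
        $ssd = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'SSDTier' -MediaType SSD
        $hdd = New-StorageTier -StoragePoolFriendlyName 'LabPool' -FriendlyName 'HDDTier' -MediaType HDD

        # Tiered virtual disk: hot data is served from the SSD tier first.
        New-VirtualDisk -StoragePoolFriendlyName 'LabPool' -FriendlyName 'LabVMs' `
            -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -ResiliencySettingName Simple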

    My question then remains: if I have 6 NICs on the storage server (and I want to use them all so I don't create a bottleneck), how should I connect the 2 NICs from each host?

    Cheers,
    Stephane
     

    Thursday, December 19, 2013 10:35 PM
  • That's awesome!

    Thank you, that's exactly what I was after :-)

    Thanks again,

    Stephane

    Sunday, December 22, 2013 10:41 PM