LUN limit with iSCSI (2012 R2)

  • Question

  • Hi. I asked a similar question before, and I'm really not looking for a lecture on best practice (although I'm happy to discuss my architecture if people are curious), but I have a use case where I want to address lots of iSCSI LUNs on a 2012 R2 machine. I seem to have hit a limit of 255 mounted iSCSI volumes. Some of the documentation seems to suggest this is a limit per 'target', but I'm a bit unsure what 'target' really means in this context. Is it possible to mount more LUNs somehow? Does 'target' simply mean IP address and port? Can you have 255+ LUNs on one network card/HBA if you use multiple IP addresses for the (same) target?

    Thanks very much! 

    Wednesday, January 1, 2014 3:50 PM

Answers

  • There are multiple software and hardware limitations. The hardware one is that you cannot have more than 255 (2^8) LUs on a single SCSI device - that's by spec, as there's only one byte to address LUs. To work around this you can have multiple devices, and here comes the software limit: quite a lot of implementations create one target (device) and add LUs to it, so they cannot handle more than 2^8 LUs (last time I checked, the MS target was a good example of such an implementation, but this may have changed...). So if you have issues on the server (target) side, just swap your iSCSI target vendor for one you know works. If you have issues on the client (initiator) side, then it's not software but rather some configuration issue, as I think we had more than 1,000 LUs working with pre-R2 versions.

    Back to architecture. First comes the "server" (most properly written iSCSI targets can have multiple instances running, bound to different IP/port pairs on the same machine; again, not sure about the MS one). So yes, you can have many servers on a single host (software limitation). Then come the actual targets. You can have multiple targets per server (software limitation). Each target can have multiple LUs, up to 2^16 (that's a HARDWARE limitation).
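
    To make the addressing math concrete, here is a minimal Python sketch (illustrative only, not driver code) of how SAM - the SCSI Architecture Model - packs a LUN number into the 8-byte LUN field. The single-byte 'peripheral device' method is exactly the 2^8 ceiling described above; flat space addressing and, above all, simply adding more targets raise it:

        # Sketch of SAM LUN addressing. The address method lives in the top
        # two bits of the first byte of the 8-byte LUN field.

        def encode_lun(lun):
            """Encode a LUN number into the 8-byte on-the-wire LUN field."""
            if lun < 0x100:
                # Peripheral device addressing (method 00): one byte only,
                # hence the 2^8 = 256 LUs-per-device ceiling.
                return bytes([0x00, lun]) + bytes(6)
            if lun < 0x4000:
                # Flat space addressing (method 01): 14 bits for the LUN.
                return bytes([0x40 | (lun >> 8), lun & 0xFF]) + bytes(6)
            raise ValueError("out of range for single-level addressing")

        print(encode_lun(42).hex())    # 002a... - fits in one byte
        print(encode_lun(300).hex())   # 412c... - needs flat space addressing

        # The workaround described above: scale out with more targets,
        # not more LUs per target.
        targets, luns_per_target = 4, 256
        print(targets * luns_per_target, "LUs total across", targets, "targets")

    (So an initiator or target that only speaks the single-byte method tops out at 2^8 LUs per device no matter how many you export, which is the behaviour being discussed here.)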

    Hope this helped :)

     

    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

    • Marked as answer by Hob_Gadling Thursday, January 2, 2014 11:41 AM
    Wednesday, January 1, 2014 6:10 PM
  • That's the way it works: Microsoft embeds some functionality into the core OS, basically killing competitor products that were usually first to market and superior to the MS follow-ups (it happened to Stacker from Stac Electronics, replaced with Double[Drive]Space; to Navigator from Netscape, phased out by Internet Explorer; to the StarWind and IBM iSCSI initiators, replaced with the Microsoft one - BTW, both the StarWind and IBM initiators were also free - and to millions of other titles), and then at some point people start complaining about missing features, performance, stability etc. in the MS products. But there's no choice any more and virtually no way back :)

    Even if we published our initiator as open source, it would still 1) fail Microsoft cluster validation, as we built ours as a monolithic SCSI port driver - a design that is both undocumented and unsupported by Microsoft for independent software developers (as usual, MS keeps the good things for its own team) - so it cannot be WHQL'ed or logo'd for Windows; it would have to be rewritten as a StorPort miniport, and that's a) slow, b) time-consuming and c) would force us to dump 80% of the code; and 2) it requires quite a lot of Windows internals knowledge to support (that's BTW why the open source AoE initiator driver is *SO* broken and basically abandoned). There are very few people around who can do that these days.

    You're right. Open-iSCSI as an initiator, and LIO (and the SCST it displaced), especially if paired with DRBD, are ages ahead of MS network block storage technologies. The Microsoft storage team decided to surrender and push the SMB 3.0 they already have instead of improving the iSCSI stack and matching it with a proper clustered file system, which they don't have (ReFS is a joke compared to VMFS).

    The modern trend with VMware is vVols, keeping every VM on its own LU (many benefits: no need to have an intermediate FS, no need to handle reservation conflicts, and quite a lot of intermediate protocols removed from the storage stack, so it's lighter and faster), so I absolutely understand what you're doing and why...

    Good luck!


    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

    • Marked as answer by Mandy Ye Friday, January 3, 2014 9:00 AM
    Thursday, January 2, 2014 1:34 PM

All replies

  • Thanks for this. 

    The StarWind initiator doesn't seem to work properly on 2012 R2 - i.e. I can connect to additional LUNs but they don't appear in Disk Management. Also, there doesn't seem to be a command line interface, but it also looks like you don't support it any more (which is fair enough). It really does seem to be the only other initiator option out there, too. Naughty Microsoft.

    The Microsoft initiator definitely seems to hit a limit of 255 - can anyone confirm whether this is indeed the case, and whether I can circumvent it somehow? Can I install multiple versions?

    I am getting embarrassed by the Linux team at the moment, as open-iscsi seems to have no problem at all addressing a million billion LUNs.
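
    For anyone who wants to reproduce the comparison on the Linux side, here's a quick sketch (Python, assuming a Linux host with sysfs; the entries under /sys/class/scsi_device are named host:channel:target:lun) that counts the logical units the kernel has enumerated, per target:

        # Count enumerated SCSI logical units per target via sysfs.
        import os
        from collections import Counter

        def luns_by_target(sysfs="/sys/class/scsi_device"):
            counts = Counter()
            for name in os.listdir(sysfs):
                host, channel, target, lun = name.split(":")
                counts[(host, channel, target)] += 1
            return counts

        for (host, ch, tgt), n in sorted(luns_by_target().items()):
            print("host %s channel %s target %s: %d LUs" % (host, ch, tgt, n))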

    Thursday, January 2, 2014 11:45 AM
  • Thanks. It's really nice to find someone who understands where I'm coming from. Annoyingly, I'm pretty sure my architecture would work brilliantly in 2012 R2 if I could squeeze a few more LUNs out of Windows. I'm only looking to do about 350-ish per node, and 2012 R2 seems to have vastly improved with regard to enumerating larger numbers of logical units, so this is the only thing holding me back.

    The docs for Microsoft iSCSI do mention things like 

    "Initiator Instance Name is the name of the initiator via which the SendTargets operation is performed. If not specified then the initiator used is selected by the iSCSI initiator service." 

    If initiator instance = SCSI bus, then fair play, problem solved. Simply adding another iSCSI initiator driver in Windows doesn't seem to work, though; it seems to be a single bus (unless I made a mistake), although it does display two initiators in the drop-down box.
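
    One way to check what each initiator instance is actually doing is to query the initiator's WMI classes in the root\wmi namespace. Here's a rough sketch, assuming the third-party Python 'wmi' package (pip install wmi) on the 2012 R2 box and that MSiSCSIInitiator_SessionClass carries the session-to-initiator mapping:

        # Rough sketch: count iSCSI sessions per initiator instance on Windows.
        # Assumes the third-party "wmi" package (pip install wmi).
        import wmi
        from collections import Counter

        c = wmi.WMI(namespace="root\\wmi")
        sessions_per_initiator = Counter()
        for session in c.MSiSCSIInitiator_SessionClass():
            sessions_per_initiator[session.InitiatorName] += 1

        for name, n in sessions_per_initiator.items():
            print("%s: %d sessions" % (name, n))

    If both drop-down initiators show sessions here but the device count still stops at 255, that would suggest the limit is per bus rather than per instance.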

    SMB 3.0 is great, but I'd hardly call iSCSI antiquated just yet.

    Thursday, January 2, 2014 2:27 PM