Moving clients to SBS 2011 and using Hyper-V and SSDs, doing away with SCSI, SAS, and RAID configurations

    General Discussion

  • I manage about 15 servers across 10 separate clients; all are Dell PowerEdge models with either 15,000rpm SCSI or SAS drives, and all customers are on SBS 2003 or 2008. It will be time to replace some server hardware this year, and I'm thinking about going with Hyper-V and solid state drives from here on: no more SAS or RAID. Dropping RAID is a big deal, since all of my servers currently use hardware RAID with Dell PERC controllers, which I'm used to, but since SSDs have no moving parts I'm not sure how much of an issue that will be. In about 12 years I can only remember 4 or 5 drives failing across multiple customers and many servers.

    I have been running my SBS server on Hyper-V (Windows Server 2008 R2 as the host) and SSDs for over a year. I started with SBS 2008 as my guest OS, migrated to SBS 2011 in December 2010, and haven't had any issues to speak of. My SBS virtual machine boots and is ready to go in under 2 minutes, which is pretty awesome.

    Anyway, I’m looking for some feedback on going all SSD with Hyper-V and any comments or caveats from others that have experience in this area.

    Any new servers will be Dell PowerEdge models, probably the T410 since it can handle up to 6 cabled SATA drives. The Dell PowerEdge RAID controllers no longer play nice with non-Dell hard drives or SSDs, since Dell tries to lock you into their own firmware on their drives. From what I've read, TRIM isn't supported through the Dell RAID controllers either, so I won't be doing any hot-swap drives or RAID; I'll just use the SATA ports on the motherboard.

    I will be using Intel SSDs. I know there are some good things coming out of OCZ, but I'm really happy with the Intel drives and haven't had any issues to speak of with their X25-M products, even through multiple firmware upgrades. So this isn't a discussion about which SSD is the king of speed; I'm going for fast and reliable here. Other drives are faster, but a lot of the time new firmware seems to brick them, or you have to jump through hoops to upgrade. The Intel process is very simple and has always worked for me.

    I plan on using the new Intel line of model 320 SSDs and here’s what I’m thinking:

    The server will be a Dell PowerEdge T410 with 32GB of RAM on APC battery backup, serving about 10 users; desktops are all Windows 7 Pro w/ SP1.

    The host OS will be Windows Server 2008 R2 w/ SP1 running the Hyper-V role, installed on a 120GB Intel 320 SSD; all guest VHDs will be fixed size.


    The guest OS will be SBS 2011 with 16GB of RAM allotted to it, installed on a 300GB Intel 320 SSD. The C: drive VHD will sit on this SSD and I will leave Exchange on C:. I will install a second 300GB Intel 320 SSD and put the D: drive VHD file on it, then move the user data shares, My Documents redirection, and roaming profiles to that drive. I will also move the SharePoint Foundation data there; they use companyweb a bit.
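    For what it's worth, fixed-size VHDs like these can be created ahead of time with diskpart's "create vdisk" command on Server 2008 R2. The sketch below just scripts that from Python; the paths, drive letters, and sizes are placeholders for this layout, not anything SBS- or Dell-specific, and creating a fixed VHD this big takes a while since the file is fully allocated up front.

        import os
        import subprocess
        import tempfile

        # Hypothetical layout: the C: VHD on the first 300GB SSD (mounted as S:)
        # and the D: VHD on the second SSD (mounted as T:). Sizes are in MB,
        # which is the unit diskpart expects for "maximum".
        VHDS = [
            (r"S:\Hyper-V\SBS2011_C.vhd", 250 * 1024),  # system / Exchange volume
            (r"T:\Hyper-V\SBS2011_D.vhd", 250 * 1024),  # shares, profiles, SharePoint data
        ]

        def create_fixed_vhd(path, size_mb):
            """Write a one-off diskpart script that creates a fixed-size VHD, then run it."""
            script = 'create vdisk file="{0}" maximum={1} type=fixed\n'.format(path, size_mb)
            with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
                f.write(script)
                script_path = f.name
            try:
                # diskpart /s runs the commands in the script file; needs an elevated prompt.
                subprocess.check_call(["diskpart", "/s", script_path])
            finally:
                os.remove(script_path)

        if __name__ == "__main__":
            for path, size_mb in VHDS:
                create_fixed_vhd(path, size_mb)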

    I may add a third SSD and put a few Windows 7 Pro VHD files on there, hence 32GB of RAM in the server…trying to plan ahead.

    I will then install a Samsung F4 SATA drive, 2TB in size. It's only 5,400rpm, but it has three platters instead of four, so I've seen it put some 7,200rpm SATA drives to shame and it runs nice and cool. On this drive I will create a shadow copy volume VHD for the SBS 2011 file system shadow copies, using the default times of 7am and noon. On this Samsung I'll also have a VHD for the WSUS database and downloads (I don't need crazy speed there) and I'll set up a 1TB VHD for SBS backups using the native SBS 2011 backup program. Actually, I will probably install two of these Samsung drives and put a 1TB backup VHD on each of them, just for redundancy of the SBS backups. Maybe back up M, W, F to one and the other days to the other, just to have the backups on two separate spindles.
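    Assuming the two 1TB backup VHDs end up mounted as separate volumes inside the SBS guest (the drive letters below are made up), the alternate-spindle idea could be handled by a small scheduled script that picks the target by day of week and hands off to wbadmin. This is only a sketch of the rotation logic; normally the SBS 2011 backup wizard would manage the schedule itself.

        import datetime
        import subprocess

        # Hypothetical mount points for the two backup VHDs, one per Samsung spindle.
        PRIMARY_TARGET = "X:"    # Monday / Wednesday / Friday backups
        SECONDARY_TARGET = "Y:"  # backups on the remaining days

        def pick_target(today=None):
            """Mon/Wed/Fri go to the first spindle, everything else to the second."""
            today = today or datetime.date.today()
            return PRIMARY_TARGET if today.weekday() in (0, 2, 4) else SECONDARY_TARGET

        def run_backup(target):
            # wbadmin ships with Windows Server Backup; -allCritical adds the volumes
            # needed for bare-metal recovery and -quiet suppresses the prompt.
            subprocess.check_call([
                "wbadmin", "start", "backup",
                "-backupTarget:{0}".format(target),
                "-include:C:,D:",
                "-allCritical",
                "-quiet",
            ])

        if __name__ == "__main__":
            run_backup(pick_target())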

    I'm also looking into real-time replication software, http://www.steeleye.com/DataKeeper_69.htm , which can do real-time replication of the SBS 2011 guest OS into a separate VHD, probably on one of the Samsung drives. I have to see whether the Samsungs will be able to keep up, though, since they run quite a bit slower than the Intel SSDs.

    It's tough going the non-RAID route, but I feel that with multiple backup drives, possibly this replication software, and no moving parts in the SSD setup, I should have a pretty reliable system…and FAST performance!  Any comments would be appreciated!

    • Type changed by AITD - Jon, April 11, 2011 6:31 PM
    April 11, 2011 6:30 PM

All Replies

  • When you say you've been running your SBS in this configuration for over a year, is that a production SBS?  How many clients? 

    The reason I ask is that I run my home SBS on a very basic server with desktop-quality drives.  However, that's purely a test machine, and I am much more particular about the production SBS here at the office.  My first thought when I read your post was that I would be reluctant to use desktop-quality SSDs on a business-critical server, but I don't have enough SSD experience to know if this is a "bad thing" or not.

    IMO the likelihood of failure is only half the equation when talking about giving up RAID.  Regardless of how reliable an SSD might be, some do fail.  In the configuration you describe, loss of any one of the three drives has your whole domain out of commission while you get a replacement and restore from backup, as opposed to a RAID configuration where a drive failure can be mostly transparent to the users.  Even replicating the guest VHDs onto mechanical drives leaves the host SSD as a single point of failure.

    For perspective, I'm a big fan of 15K SAS drives - I find SSD too expensive for servers at this point, even with the performance gain.  This is more religion than science, and I hope some people who feel the other way will jump in and disagree with me.


    Dave Nickason - SBS MVP
    April 11, 2011 9:53 PM
  • Thank you for the comments, Dave. I do agree that SAS / RAID is good stuff; I'm just trying to think outside the box a little and get opinions from others, just as you have provided.

    My SBS box at home is used to run my business, but I am the only user, so it's not getting hit by multiple people as it would in a production environment. I'd say I beat it up more than most of my clients do, though: lots of file I/O, heavy email use, big file copies, testing multiple VMs at once, etc. That's all running on a PowerEdge T110 with 16GB of RAM (8GB for the SBS 2011 VM), on 80GB Intel X25-M SSDs.

    As for cost, I'm not too worried about that. As a boot drive for Hyper-V I would use a 120GB drive, which costs under $300; a single 146GB 15k SAS drive from Dell is more than that. The 300GB 15k rpm SAS drives from Dell are usually around $400 - $450 and the 300GB SSD is around $500, so not much of a difference. Not to mention that if I were to go the traditional route (SAS / RAID controller), I'd have a few different RAID 1 mirrors and a RAID 5 set, so we're talking lots of drives, making the cost quite a bit more than the SSD route I'm thinking about.
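    Just to put rough numbers on that comparison: the prices are the ballpark figures above, and the drive count for the traditional build is only my reading of "a few RAID 1 mirrors and a RAID 5 set" (two mirrors plus a three-disk RAID 5), so treat it as a back-of-the-envelope tally rather than a quote.

        # Back-of-the-envelope cost tally using the ballpark prices quoted above.
        sas_300gb_15k = 425                 # midpoint of the $400-$450 Dell price
        traditional = 7 * sas_300gb_15k     # 2 x RAID 1 mirrors (4 drives) + 3-disk RAID 5
        ssd_route = 300 + 2 * 500           # 120GB boot SSD + two 300GB Intel 320s

        print("Traditional 15K SAS drives: ~$%d (before the PERC controller)" % traditional)
        print("SSD route:                  ~$%d (plus the SATA backup spindles)" % ssd_route)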

    I agree with your single-point-of-failure comment, and that is a major caveat. I'm not sure how much downtime there would really be, though. There would be some if a drive failed, but with Hyper-V backups or this continuous replication idea I could bring the VM up on the Samsung 2TB drive. It would be slow, but it would work. Or, since I wouldn't have all of the RAID / SAS drive overhead, I could just buy a spare SSD to keep on hand. SSD technology is really taking off, and I'm sure it won't be long before TRIM is supported in SSD RAID arrays or a better method becomes available.

    This idea goes against my usual approach of reliability first, performance second...and that's the very reason I've posted it. I'm curious to see whether people think I'm crazy or whether they've actually implemented something like this. I've seen people running production servers on single SCSI, SAS, or SATA drives, with no redundancy in disks or power supplies. Crazy if you ask me, but since I have had a great experience with SSDs, I'm actually considering it. I'd of course do redundant power supplies either way. Lots of things can fail besides drives: I've had a server motherboard fail, RAM fail, etc., so even with RAID / SAS there are things that could break, although those types of failure are less common.

    April 12, 2011 1:00 AM
  • This is indeed a very interesting setup. I have been using an OCZ Vertex SSD privately for two years with no issues, so I have had some thoughts about using SSDs for enterprise work too, since the speed really is very impressive under heavy I/O. There are still some questions, though, that I ask myself:

    - "Enterprise" SSDs are announced here and there but its hard to really tell the difference or whether they are more reliable. Well this is rather a thought than a question...

    - How could I monitor "wear-out" of the flash modules in an SSD? (See the sketch after this list for one rough approach.)

    - Is SLC NAND worth the high extra cost?
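    On the wear-out question above, one low-tech option (assuming smartmontools is installed and the controller passes SMART data through) is to poll the drive and watch Intel's Media_Wearout_Indicator attribute (ID 233), which starts at 100 and counts down as the NAND wears. A rough sketch; the device name is just an example:

        import subprocess

        DEVICE = "/dev/sda"  # example device name for smartctl; adjust for your setup

        def media_wearout(device):
            """Return the normalized Media_Wearout_Indicator value (100 = new, 1 = worn out)."""
            out = subprocess.check_output(["smartctl", "-A", device]).decode("ascii", "replace")
            for line in out.splitlines():
                fields = line.split()
                # Attribute rows look like:
                # 233 Media_Wearout_Indicator 0x0032 099 099 000 Old_age Always - 0
                if fields and fields[0] == "233":
                    return int(fields[3])
            return None

        if __name__ == "__main__":
            value = media_wearout(DEVICE)
            if value is None:
                print("Attribute 233 not reported; check the drive/controller")
            elif value <= 10:
                print("WARNING: wear indicator at %d - plan a replacement" % value)
            else:
                print("SSD wear indicator: %d" % value)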

    For your setup:

    - Does that board support SATA 3 (6Gb/s) drives? I would make sure it does, both for performance and for the option to upgrade drives as better SSDs come along.

    - Might it be even faster or more reliable to use PCIe SSDs instead of SATA?

    - How much downtime is acceptable for your customers? I would consider that a factor in choosing an SSD setup or not.

    I guess you won't find many people who have already tried using SSDs for enterprise purposes, but someone has to try it. That's why, in your position, I would not switch over to this strategy for all your customers at once. Maybe you can start with some less critical servers and see how it goes.

    April 12, 2011 7:30 AM
  • Hi,

    We have been using Intel X25-M 160GB G2 code (second generation) SSDs in the following settings:

    • Via Intel desktop on-board software RAID: RAID 0, 1, and 10 (screamingly fast).
    • Via nVidia desktop on-board software RAID: A/A, and just as fast.
    • Intel Modular Server: RAID 5 set up for Hyper-V host OSs, quorum, and shared memory files.
    • Intel Server Systems: Intel SRCSASxxx and RS2BL0x0 RAID controllers: RAID 1, 10, and 5.

    We have been using G2 code X25-M SSDs since we could get them in Canada (we were one of the first for both 80GB and 160GB in Canada).

    For clustering purposes we would drop medium-to-low I/O workloads onto either 146GB 15K SAS (2.5"), the new 300GB 15K SAS (2.5"), or, at the 3.5" level, 450GB 15K SAS and up.

    Note that the soon-to-be-released 300GB and 600GB Intel SSDs will be quite expensive. So the reality will be a balance between SSD and 15K SAS, IMNSHO.

    Philip


    Philip Elder SBS MVP Blog: http://blog.mpecsinc.ca
    April 12, 2011 5:54 PM
  • I appreciate all the feedback. I'm pretty excited about flash technology, but I will probably hold off for now and stick with SAS for my customer servers. I agree there needs to be a way to monitor the life span of an SSD. I wasn't too worried about the Dell servers having SATA 3 since I would go PCIe at some point, as you have pointed out. The Intel 320 drives are getting really good IOPS too, more than my Intel 510 SSD, but the 510 series is SATA 3 and can move more data at one time.

    I know server-level SSDs use SLC flash, but even Intel is getting away from that in favor of MLC. Dell will sell SSDs in their servers, but $1,000 for a 50GB drive is just out of the question. I will continue to run SSDs in-house and hopefully try out the OCZ Z-Drive R3 soon, http://www.ocztechnology.com/ocz-z-drive-r3-p84-pci-express-ssd.html . This thing has some really neat tech behind it, and they have TRIM working because of their virtual controller. If they'd just find a way to run TWO of these in some type of mirror! I'm sure it will come at some point.

    For now I can get a T610 server, load it up with 15K SAS drives, and get pretty good performance. I'd spec it with a high-end processor, lots of RAM, and a bunch of PCIe slots for future changes. Hyper-V will be part of my installs from now on for sure. That way, when the PCIe cards become a bit more common, I can just move the VHD files over to the flash and repurpose the SAS drives as a backup repository or find some other use for them. I could keep the Hyper-V host (boot drive) on one of the 15K SAS RAID 1 sets too, since I hear some systems (including some Dell models) don't let you boot off the PCIe flash boards as of now. That would also make the boot volume redundant, so Hyper-V wouldn't crash if a drive failed, as it would in the single-drive model.

    Keep the comments coming though, I'm curious to see what other people are doing in the SSD / Hyper-V arena.

    - Jon

    April 13, 2011 11:56 PM
  • How about a mix of the two technologies? Right now the industry is in a transition phase and the new technologies are just so expensive! LSI has been doing some amazing things with their RAID controllers. They have this technology called Cachecade: http://www.lsi.com/storage_home/products_home/solid_state_storage/cachecade_software/index.html#White Paper

    Basically, by adding SSDs to your array as a second layer of expanded controller cache, your SCSI drives can chug away since the RAID cache is no longer a bottleneck. Additionally, this primarily affects read speed, since commonly accessed data is kept in the cache to improve the customer experience. If the application does not require heavy writing, this is a great solution. I work for a server company and a lot of our clients are seeing huge gains in performance with this technology. As far as anything else I've researched, this is the best way to maximize IOPS without crushing your budget into oblivion.
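    To illustrate why a read-cache layer like that helps mostly with frequently re-read data, here's a toy simulation; it has nothing to do with LSI's actual algorithm, it just runs a plain LRU cache over a skewed access pattern where the hot working set fits inside the SSD cache:

        import collections
        import random

        CACHE_BLOCKS = 1000     # pretend SSD cache capacity, in blocks
        TOTAL_BLOCKS = 100000   # pretend size of the spinning-disk array, in blocks

        def simulate(reads=50000, hot_blocks=500, hot_weight=0.9, seed=1):
            """LRU read cache: a small hot region of the array gets most of the reads."""
            random.seed(seed)
            cache = collections.OrderedDict()
            hits = 0
            for _ in range(reads):
                if random.random() < hot_weight:
                    block = random.randrange(hot_blocks)       # commonly accessed data
                else:
                    block = random.randrange(TOTAL_BLOCKS)     # the long tail
                if block in cache:
                    hits += 1
                    del cache[block]
                    cache[block] = True                        # refresh LRU position
                else:
                    cache[block] = True
                    if len(cache) > CACHE_BLOCKS:
                        cache.popitem(last=False)              # evict least recently used
            return hits / float(reads)

        if __name__ == "__main__":
            print("Read hit rate with a cache-sized hot set: %.0f%%" % (100 * simulate()))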


    • Edited by Snuffelz, May 10, 2011 4:59 PM
    May 10, 2011 4:32 PM
  • Intel's take on it is SSD Cache:

    We have not run into the need to go beyond 15K SAS in RAID 5 with a battery backup on the RAID controller as of yet.

    With the major architecture changes that have been made in Exchange 2010, the I/O demands that were there for SBS 2008 and Exchange 2007 are no longer a factor in our system configuration considerations.

    Don't get me wrong, SBS 2011 has its own I/O requirements, but they are not the same as SBS 2008 by far.

    When configuring for virtualization, the SSD Cache configuration may not be of benefit, depending on how the cache technology behaves with a series of large contiguous files (VHDs). We will be testing this configuration to see if there are any real-world benefits for us in the SBS/SMB world, just not right now. :)

     


    Philip Elder SBS MVP Blog: http://blog.mpecsinc.ca
    May 10, 2011 4:48 PM

  • Intel's take on it is SSD Cache:

    Actually, Intel's products are OEM'd from LSI, using the same technology (the same cards) as IBM's MegaRAID products. IBM has been selling the LSI-based products forever, all the way back to the FAStT series...
    August 9, 2011 3:06 PM