Terrible performance after moving VMs to new hardware

    Question

  • We recently decided to purchase a new server to run our VMs on, since the old one did not have redundant drives. Unfortunately, our new and improved server has resulted in far worse performance now that I've moved the VMs over to it! I am not really sure why, but can only assume it has something to do with the drive configuration.

    Basically, when the VMs are running on the new virtual server host, it takes a long time just to log in to them (with or without a roaming profile). General use once you are logged in seems OK, but it is much slower than before. One of the VMs is a DC, and when it was on the new server it was taking a very long time to provide AD group membership information, causing some applications that request this information to perform very badly. As a result, I had to move that VM back to the old server.

    The new server has the following specs:

    Dell PowerEdge R620

    2x Xeon E5-2630v2 2.6GHz processors

    64GB RAM

    PERC H710 integrated RAID controller, 512MB NV CACHE

    6x 1TB 7.2K RPM 6Gbps SAS 2.5" HDD

    O/S (C drive) configured on RAID 1 using 2x 1TB drives, GPT partition

    Hyper-V storage (D drive) configured on RAID 10 using 4x 1TB drives on 2 spans, GPT partition on simple volume in Windows formatted with 64KB cluster size

    Running Windows Server 2012 R2 Datacenter with Hyper-V

    The old server has the following specs:

    Dell PowerEdge 2950 (2007 model)

    2x Xeon E5335 2.0GHz

    24GB RAM

    Unknown RAID controller

    4x 300GB 15K SAS drives running with NO RAID. Each drive holds 2-3 separate VMs.

    Running Windows Server 2012 Datacenter with Hyper-V

    The new server is currently only running 6 VMs and is already displaying the performance problems. Ideally we have over 10 VMs that need to be hosted. The 6 VMs currently running include our production Exchange 2010 server (only 10 staff, with an 80GB VHD), a Windows Update Server (100GB VHD), a SQL 2000 server with limited use (100GB VHD), an FTP server with minor use, a build server for .NET development with infrequent use, and a basic server used for licensing/connection services for our own applications we develop (low use).

    The only thing I can think of is the difference in the disk/RAID configuration. But the configuration I am using seems quite popular, so I would expect it to easily handle 6 VMs given the specs of the server?

    Any help much appreciated.

    Wednesday, January 29, 2014 12:53 AM

Answers

  • OK, I think I've finally found it!!! It turns out the NICs in these servers come with a setting called "Virtual Machine Queues" which defaults to enabled. I disabled it on the 2 NICs which I have in use and BOOM! Performance is back to where it should be :-)
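
    For anyone else hitting this, the setting can also be checked and toggled from PowerShell on the host. This is only a rough sketch - the adapter names are examples, and the advanced property display name varies by driver:

        # List VMQ-capable adapters and whether VMQ is currently enabled
        Get-NetAdapterVmq | Format-Table Name, Enabled

        # Disable VMQ on the ports in use (names are examples - check Get-NetAdapter first)
        Disable-NetAdapterVmq -Name "NIC1", "NIC2"

        # Some drivers also expose it as an advanced property
        Get-NetAdapterAdvancedProperty | Where-Object { $_.DisplayName -like "*Virtual Machine Queues*" }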

    Looks like this is a fairly common issue. This is the article that led me to the solution:

    http://social.technet.microsoft.com/Forums/en-US/b7e50892-2ff2-4410-a545-c43584f30674/poor-network-when-create-virtual-switch-in-hyperv-2012?forum=winserverhyperv

    Mark - regarding your comments about creating the V-NIC shared with the host OS: I am not sure... I just copied the config from the old server, but I will look at changing this too if it's not optimal. I thought I needed to share at least one of the V-NICs with the OS in order to allow the OS to connect to the network?
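
    In the meantime, this is roughly how I'm checking what is currently shared with the host (sketch only - the sharing setting is the AllowManagementOS flag on the virtual switch):

        # Show which virtual switches are shared with the management OS
        Get-VMSwitch | Format-Table Name, SwitchType, AllowManagementOS

        # Host-facing virtual NICs created by shared switches
        Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName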

    Thanks to everyone for their advice and persistence. Much appreciated.

    • Marked as answer by emssol Wednesday, February 05, 2014 9:46 AM
    Wednesday, February 05, 2014 9:46 AM

All replies

  • Hi,

    Just checking - have you tried upgrading the integration services on all of the VMs?


    Lai (My blog:- http://www.ms4u.info)

    Wednesday, January 29, 2014 6:34 AM
  • Yes, I have uninstalled/reinstalled the integration services on the VMs after importing them from the old server. After I did that and restarted the VMs, I saw messages in the event log such as:

    The storage device in 'VMNAME' is loaded and the protocol version is negotiated to the most recent version (Virtual machine ID 4507C662-438D-4727-88E2-A080BECD8C60).

    It might be worth noting that the VMs were exported from Hyper-V on Windows Server 2012 (old) and have been imported to Windows Server 2012 R2 (new). Not sure if that would cause any problems? I simply used the right-click Export/Import option in Hyper-V Manager on each server.

    Just looking at each VM's settings, there is also a mix of .VHD and .VHDX files for the VMs' virtual hard disks. Not sure if that is an issue.
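
    If it helps, this is roughly how I've been listing what each imported VM ended up with (just a sketch):

        # Quick inventory: integration services level, generation, state and attached disks
        Get-VM | Format-Table Name, Generation, IntegrationServicesVersion, State
        Get-VM | Get-VMHardDiskDrive | Format-Table VMName, Path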

    Wednesday, January 29, 2014 10:34 AM
  • Hi, your hardware setup looks good and should perform nicely with 6 VMs. It could be network related, but make sure you have write cache allocated to your VM drive.

    I presume you have installed all the latest drivers and firmware updates for 2012 R2 on your new server, for the network and storage controller etc.

    I've seen slow VMs when the network isn't performing or is configured wrong. You could test a non-domain-joined server with no network connectivity, or boot an existing one into safe mode without networking, just to see how it performs.

    Ruling out the network for a moment: if you have any AV, make sure that you have excluded these directories:

    • Default virtual machine configuration directory (C:\ProgramData\Microsoft\Windows\Hyper-V)
    • Custom virtual machine configuration directories
    • Default virtual hard disk directory (C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks)
    • Custom virtual hard disk directories
    • Snapshot directories
    • Vmms.exe
    • Vmwp.exe

    Also make sure that you have disabled C-States in your system BIOS and have set any power plans to maximum performance.

    Mixed VHD and VHDX will have no effect on your underlying disk performance.

    You can check the disk performance on your host server using Resource Monitor; the queue length needs to be less than 1 for your RAID 10 disk.
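
    If you prefer the command line, something along these lines will sample it from an elevated PowerShell prompt (sketch only):

        # Average disk queue length per physical disk, sampled every 5 seconds, 12 samples
        Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk Queue Length" -SampleInterval 5 -MaxSamples 12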

    Hope this helps. Let us know how you get on.

    Regards

    Mark



    Wednesday, January 29, 2014 11:02 AM
  • Just reading through the descriptions of your symptoms, the first thought that comes to mind is disk contention / storage bus / backplane / RAID configuration / firmware, etc.

    I have rarely seen networking cause symptoms beyond RDP performance.  So, unless the poor performance is all RDP-connection specific (you do not see it when using a VM console at the Hyper-V server console), it is not the network.  If roaming profiles were a problem, it would affect logon only (as the roaming profile is cached), and the first logon more than repeated ones.

    What are the disk queue lengths within the VM OSes?

    Also, boot and logon are very disk-intensive times - and any slowness during these phases usually points to something with storage.

    It can also be storage driver / firmware combinations.

    It can be VMQ if you are using any 10Gb NICs for storage.


    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Wednesday, January 29, 2014 5:18 PM
    Moderator
  • Not sure but R2 uses Generation 2 VMs and there may be a migration requirement?

    http://vniklas.djungeln.se/2013/08/08/successfully-migrate-gen1-vm-to-gen2-on-windows-hyperv-r2/


    Micky Hunt

    Wednesday, January 29, 2014 5:53 PM
  • Generation 2 is a totally different thing.  It is equal to an evolution in hardware.

    It is not a requirement.  Generation 1 will be around for many years.  And outside of UEFI boot vs BIOS boot speed (strictly the boot-speed difference of the two - it shows on hardware too), there are no inherent performance differences between the two.


    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Wednesday, January 29, 2014 5:59 PM
    Moderator
  • Thanks very much for all the responses. To answer some of the questions and points raised above:

    - I just checked and write caching is ticked under the C and D drive properties under Disk Management in Windows Server 2012.

    - I am also working with Dell support and have sent them some logs from their diagnostic tools which should identify any hardware or driver/firmware issues. I did not apply any Dell updates prior to installing this server, so perhaps this could be the problem.

    - The server has a 4-port Gigabit NIC installed and I am currently using 2 ports/virtual switches to spread the network load across the OS and 6 VMs. I think it's set up OK but will double check.

    - There is no AV installed on the host

    - I have not touched the C-States in the BIOS (didn't even know what they were until now!), so I will check that after hours tonight and disable them. Could be another potential fix.

    - I checked disk activity on the server and I am regularly seeing quite a few instances of response times of 1 ms (or higher) for the VHDs on the D drive of the server. I am also seeing response times of up to 7 ms for some items on the C drive. Not sure if I should be concerned about that either? See images below.

    - I checked the disk queue length on one of the VMs while performing a couple of simple operations, and it managed an average of 9.8! That doesn't sound good?

    Wednesday, January 29, 2014 11:11 PM
  • And here's a snippet showing the response times from our Exchange Server VM basically sitting idle.

    Wednesday, January 29, 2014 11:23 PM
  • Hi, those response times look okay - I would be very concerned if they went over 25ms - but your disk queue lengths sound pretty bad!! It does sound like it could be a disk I/O performance issue. (Still disable C-states though.)
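
    (If you want to log the latency rather than eyeballing Resource Monitor, a counter sample like the one below works - note these counters report seconds, so 0.025 = 25ms. Just a sketch.)

        # Disk latency on the host; values are in seconds (0.025 = 25ms)
        Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Read","\PhysicalDisk(*)\Avg. Disk sec/Write" -SampleInterval 5 -MaxSamples 12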

    It almost sounds like the hardware RAID controller cache isn't enabled. This setting would be in the Dell RAID controller application. We don't use Dell, but on HP you run the HP Array Configuration Utility, where you can set the RAID controller cache settings, make sure the cache is enabled, and check that the disks have cache allocated.

    Sometimes the cache will not work if the on-board battery is not charged or is faulty. I would install the latest Dell RAID controller drivers and RAID application software and go from there.

    But initially it does sound like disk.

    Cheers

    Mark

    Thursday, January 30, 2014 7:43 AM
  • Looking at your specs, you've gone from 15K SAS drives to 7200 RPM SAS drives. Aren't 7200 RPM SAS drives basically SATA drives with a SAS interface? I think Dell calls them nearline SAS drives or something like that.

    If your queue lengths are that high, wouldn't going back to 15K SAS drives, which spin at twice the speed, resolve the issue?

    Olly

    Thursday, January 30, 2014 9:43 AM
  • I've been using HP 7.2K 1TB SAS 6G drives on a standalone Hyper-V 2012 server and it works really well, so the drives themselves shouldn't be an issue.

    regards

    Mark

    Thursday, January 30, 2014 10:25 AM
  • OK, so I've spoken to Dell today and they gave me a couple of things to try, but they couldn't really pinpoint any problems based on the diagnostic logs I sent them yesterday. The server was running a power-saving plan, so it has now been changed in the BIOS to the maximum performance plan, which also disables C-States. I also updated to the latest BIOS/firmware, which included an update to the RAID controller.

    Interesting you mention the RAID controller cache Mark. The Dell tech said the same thing on the phone today and he said according to his logs, the cache is enabled and using the full 512MB available, so that should be fine.

    Unfortunately, my initial tests since the BIOS changes and firmware upgrades are showing no changes. The VMs are still quite slow...

    I started up another VM tonight, which is actually an old Windows XP Pro system our developers sometimes use to run SQL 2000 and VB6. When I logged in with a roaming/locally cached profile (either via the Hyper-V console or RDP), it took a few minutes to log in, whereas on the old server the login time was a matter of seconds.

    I tried a bit of a test to see if I could log some disk queue lengths for you guys, so I copied a ~180MB SQL backup from our file share over to the C drive on this VM and monitored it. It took about 15-20 minutes to copy 180MB across a 1Gb connection! Below is a screenshot of the performance/network monitor during the copy. Not sure if this helps at all?

    Still at a bit of a loss unfortunately :( I will send some more results to Dell too. Again, thanks for all the comments.


    • Edited by emssol Thursday, January 30, 2014 11:14 AM
    Thursday, January 30, 2014 11:09 AM
  • Yep, that's bad! Just to confirm:

    • You have installed the latest PERC drivers.
    • Can you install the PERC configuration software and check the cache settings for yourself?
    • Could you try running one VM from the C drive? Copy the VHD across and see how it performs.

    I had a quick look at the specs of the H710 and it looks like a pretty good card; it almost sounds like the cache isn't working. Also, you can normally vary the amount of cache that is allocated to reads and writes, e.g. 25% writes / 75% reads - you can on HP cards, not sure about the PERC. Make sure you have cache for both reads and writes; the percentages above should be a good starting point.

    Cheers

    Mark

    Thursday, January 30, 2014 12:03 PM
  • Yes, it's pretty hopeless trying to log in and use one of these VMs at the moment, the way they are performing!

    I can confirm I installed the latest PERC drivers last night. I am still waiting for Dell to review my latest logs to confirm everything is now up to date and as they recommend, though. I am having problems running the Dell Server Administrator software in Windows 2012 R2, but hopefully Dell are going to solve that for me too, so then I can verify the PERC settings myself.

    I will ask the Dell tech about the cache settings for read/write and see what it's currently set to.

    As suggested, I tried moving one of the VMs over to the C drive (the RAID 1 mirrored pair also running the OS) and performance is still bad. Possibly worse! Below is a performance log of me copying a 10MB file off the network onto the local C drive, then unzipping the file. I am seeing peak average disk queue lengths of 33, if I am reading it correctly?

    This VM is worse. It's a build server that our developers log in to so they can build our .NET applications. It is still on the D drive (RAID 10), and all I did was open the build software we use and open the project. I am seeing disk queue lengths of ~95 on this, if I am reading it correctly. It's pretty hopeless trying to use this one - it takes about 5-10 minutes to log on/off with a <5MB roaming profile as well.

    Friday, January 31, 2014 2:20 AM
  • A bit more information - I spoke to Dell again and they confirmed I now have the latest firmware/drivers. I am still a bit unclear on the cache mode it's running, other than the fact that it is definitely enabled. I managed to get the Dell admin tools working on the server, which helps, as I can go and check/change a lot of these settings myself now. See below for a screenshot of the PERC H710 Mini with all the specs and cache settings, including the VDs.

    One thing I hadn't thought of, which the Dell tech asked about today, is that the Hyper-V host operating system itself actually performs quite well. I just copied a large file off the network to its C drive and it was blindingly fast - a matter of seconds for nearly 200MB. This seems to point away from a general hardware issue and towards a hardware-to-Hyper-V issue?

    They suspect that because I loaded the OS before I upgraded to the latest BIOS/firmware, the Dell Lifecycle Controller won't have installed the latest Dell hardware drivers for Windows 2012 R2. The only way to fix it is to reload the OS (now that I have the latest version of that firmware installed). It seems a bit odd that that's the only way to sort it out, but I may have to try it, given the results I am seeing so far. It shouldn't take me long to reinstall Windows on the C drive, and then I can just enable the D drive again and import the VMs in place without having to move anything around. Might be worth a try I guess...


    • Edited by emssol Friday, January 31, 2014 5:51 AM
    Friday, January 31, 2014 5:47 AM
  • It would be good to check the cache allocation; it could make a massive difference.

    Also, there is some great information here on Hyper-V 2012 best practices, worth checking out:

    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx

    Let us know how you get on. If I think of anything else I'll post back.

    Cheers

    Mark

    Friday, January 31, 2014 9:59 AM
  • The Hyper-V Storage folks wanted me to pass on this question:

    Try using xcopy to copy the large file from the C drive to one of the drives on the SAN using the unbuffered semantics (/J option) and let us know how it compares.

    This will mirror the behavior of writes to the VHD files.
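
    Something like this from an elevated PowerShell prompt on the host would do it (the file name and paths are only examples):

        # Unbuffered copy (xcopy /J), which approximates how Hyper-V writes to VHD/VHDX files
        Measure-Command { xcopy C:\Temp\bigfile.bin D:\ /J /Y } | Select-Object TotalSeconds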


    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Friday, January 31, 2014 5:12 PM
    Moderator
  • Also suggested was this:

    Copying a large file from C to D with xcopy /j would also be good to know.  I would expect at least 100MB/sec on that setup if things are working correctly.

    Also, be sure they check the battery status.  The PERCs disable caching if the battery level is too low, leading to a substantial drop in performance.  The screenshots he shows are the configured policy - not the active state.
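
    If Dell OpenManage Server Administrator is installed on the host, its CLI should report both of those - this is a sketch from memory, so the exact syntax and controller ID may differ on your system:

        # Battery state and per-VD cache policy via the OMSA CLI (controller ID assumed to be 0)
        omreport storage battery
        omreport storage vdisk controller=0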


    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Friday, January 31, 2014 6:34 PM
    Moderator
  • Thanks for that Brian. I tried using xcopy /j from the C drive of the host to the network and a 180MB file was almost instant. Same for copying from the C to the D drive on the host server. Still quite different to the way the VMs perform, for some reason?

    As for the battery status, it appears to be fine, but I only have limited information from the Dell iDRAC/Server Administrator tools. This is what I am seeing in the iDRAC console.

    I also spent a few hours last night deleting/re-creating the RAID 1 VD which holds the OS and completely reloading Windows using the Dell Lifecycle Controller with the latest firmware updates. Then, once I got the OS and the Hyper-V role installed again, I did an in-place restore of all the VMs which were sitting on the D drive. This is what Dell suspected might be a cause, but unfortunately there is still no difference in the performance of the VMs :-(

    One thing I noted when I reinstalled the OS is that during the process, the Dell Lifecycle controller asks me what boot mode I want to use - UEFI or BIOS. I chose UEFI, which I assume is OK since the hardware supports it?

    Will continue with some more tests next week.

    Friday, January 31, 2014 10:56 PM
  •  Hi,

    Sorry if these questions are repetitive, but they may help. When you say the VM performance is slow, what exactly are the symptoms you have seen - slow opening of applications, slow logon, slow boot?

    To recover from the slowness, what steps do you usually take? What is the guest OS?

    Hopefully the server is patched with the latest updates.

    For startup you can check this: http://support.microsoft.com/kb/2911037/EN-US


    Sanket. J Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

    Saturday, February 01, 2014 2:43 PM
  • Hi Sanket. The symptoms are generally very slow logon/off times via RDP on the VMs. It is also very slow copying anything from either the network to a local drive, or between local drives. Any local applications seem to have poor performance too. One of the VMs is an internal licence/connection server for our own internally developed applications; it is very slow to provide connection information and licence info, which results in the client application progressing slowly through that stage on startup (sometimes even reporting timeout errors).

    Another example: one of the VMs is a SQL 2000 server. If you try to run one of our applications which uses that database, it can take 15-20 seconds to connect to the database and start the application. It normally only takes a few seconds before you are in the application and ready to use it.

    I did another experiment today which might be a big clue. I created a brand new VM (Windows 7 64-bit) on the new host server, just to see if I got the same bad performance I saw on all the other VMs that I'd restored from the old server. Interestingly, the brand new VM with a clean Windows 7 install seemed to perform brilliantly. It copied a large file from the network in an acceptable time for a 1Gb connection, and navigating and using the PC was quite responsive. This was all using the PC while it was off the domain, and via the Hyper-V console connection.

    Then I joined the PC to our domain, configured remote access and rebooted etc and started using it via RDP - back to terrible performance!

    So it appears this is NOT an issue with the hardware, but something odd happening with the VMs when they are running on our domain and on this Hyper-V server. I'm still at a loss as to why this would happen though.

    Monday, February 03, 2014 6:36 AM
  • Hi, this harks back to my original post about the network causing slow VM performance: the guest OS needs to talk to the DC a lot, and if the network is an issue this can cause very poor performance of the guest OS.

    Have you tried creating a brand new, BLANK guest and then just mounting the existing VHD files to this new guest? The guest will lose any of its original network settings in favour of the new network card that it will see, so any static settings will need to be put back.

    It's definitely worth a try. It could be that the exported VM's network isn't quite working correctly.
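
    Roughly like this, with the original VM shut down first - the names and paths below are only placeholders, so substitute your own:

        # Create a new (Generation 1) VM and attach the existing virtual disk to it
        New-VM -Name "TestGuest" -MemoryStartupBytes 4GB -Generation 1 -VHDPath "D:\Hyper-V\Virtual Hard Disks\ExistingGuest.vhdx" -SwitchName "Virtual Switch 2"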

    Cheers

    Mark

    Monday, February 03, 2014 8:31 AM
  • Hi, this harks back to my original post about the network causing slow VM performance: the guest OS needs to talk to the DC a lot, and if the network is an issue this can cause very poor performance of the guest OS.

    Have you tried creating a brand new, BLANK guest and then just mounting the existing VHD files to this new guest? The guest will lose any of its original network settings in favour of the new network card that it will see, so any static settings will need to be put back.

    It's definitely worth a try. It could be that the exported VM's network isn't quite working correctly.

    Cheers

    Mark

    Yes, it seems to be pointing at a network issue as you said earlier, Mark. But it's only occurring with this host's VMs - no other network issues anywhere.

    I just tried creating a blank VM and attaching the existing VHD from one of the guests. It still seems to perform badly. Then I tried a DNS flush and a DHCP release/renew on the VM to see if it improved things, but it's still the same.

    I might try removing/re-adding the guest to the domain tomorrow, or even copying a VHD which wasn't exported from the old server and attaching it to a brand new guest VM. But again, even a completely new VM seems to perform badly when it's joined to the domain... It's so bizarre and frustrating!

    Monday, February 03, 2014 12:00 PM
  • Hi, as you mentioned, the new server has 1TB (7.2K) drives and the old server has 300GB 15K RPM drives. This is basically an IOPS issue: SATA-class (7.2K) drives are capacity drives and SAS (15K) drives are performance drives. From a SATA-class drive you will get a maximum of around 75 to 80 IOPS, and from a SAS drive a maximum of around 150 IOPS, excluding the RAID penalty. This is all about sizing the hardware before creating and implementing a solution. Do you have any IOPS requirement, or did you do any sizing before purchasing the new server? I don't know why you have gone from SAS disks to SATA-class disks for the new server, as the applications you have mentioned require more performance in terms of IOPS. Please try running Windows Performance Monitor with the physical disk counters enabled and check the drive utilization and IOPS requirement for your setup.

    Regards, Sunil Tumma
    Monday, February 03, 2014 12:26 PM
  • We mustn't forget, though, that the disk queue lengths were pretty bad, and this will affect the performance of the VMs. In theory you could turn off all but one of the guests and see how a single VM performs.

    What is the network configuration on the host, just so we have a better picture of the config?

    Have you used NIC teaming for the VM virtual network adapter? Is it shared with the host or is it independent?

    Do you have any storage devices where you can serve out an iSCSI LUN?

    Does your old 2012 server have free storage? I was thinking you could perhaps mount an iSCSI LUN, or an SMB 3.0 share, from your other 2012 server - 100GB could be shared out - just to try an alternate location for the VHD files and rule out local storage issues.

    Cheers

    Mark

    Monday, February 03, 2014 1:08 PM
  • Hi, Mr. Sunil Tumma (above) has the key to unlocking your problem. For any database or application, IOPS have to be considered. The cumulative IOPS produced by a RAID group in a server or storage array are different from the raw IOPS of the drives, and they also vary with drive type. As you have configured RAID 10 with 4 drives, there is a write penalty overhead as well, so the effective IOPS are reduced.

    7.2K SATA-class drives are not performance drives; the total IOPS for your configuration would not be more than about 200. A 7.2K drive has a rotational latency of 4.17 ms whereas a 15K drive has 2 ms, which reduces the performance.

    Run perfmon when you see the performance issue and add the physical and logical disk counters. That will show you the drive utilization. Export the report to CSV and look at the read IOPS, write IOPS and total IOPS; check the total disk transfers per second and work out the read/write ratio. If the write IOPS are more than, or close to, the read IOPS then you will face a performance issue - RAM (cache) only helps with read operations.
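
    The equivalent counters straight from PowerShell, if that is easier than exporting the CSV (sketch only):

        # Read/write IOPS and total transfers per second for each physical disk
        Get-Counter -Counter "\PhysicalDisk(*)\Disk Reads/sec","\PhysicalDisk(*)\Disk Writes/sec","\PhysicalDisk(*)\Disk Transfers/sec" -SampleInterval 5 -MaxSamples 12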

    Rgds,

    Ashish SI

    Monday, February 03, 2014 1:38 PM
  • The OP is using a high-performance RAID controller, which should have write and read cache. This will make a big difference to the performance. Also, typically the IO profile of guest VMs is 30% write / 70% read.

    The IOPS figure for a Dell 1TB 7.2K disk is 143 IOPS.

    So total IOPS is 143 x 4 = 572. Read IOPS is therefore 572, and write IOPS will be 286 because of the RAID 10 write penalty of 2.

    If you notice from the original spec, the old disks are not in a RAID group, so the IOPS for a 15K SAS disk is 246.

    So that is 246 read and write IOPS per disk, but each disk only has to serve a couple of VMs.

    Hence why I suggested trying to run only 1 or 2 of the original VMs to see what the performance is like.

    It could be that the SQL box is imposing most of the IO load, because of T-logs, TempDB and data IO profiles.

    I am running a similar setup with SATA 1TB disks running 6 VMs on HP hardware on 2012 R2 with no performance issues at all - all our remote sites run this config. I cannot see the storage being a major issue here, but the figures for the queue length do indicate storage performance issues.

    It could also be worth testing each VM one at a time and seeing how it affects the queue length, if that's possible in your environment.

    emssol, are you based in the UK?

    Cheers

    Mark

    Monday, February 03, 2014 3:48 PM
  • I had the same issue with a Hyper-V server on 2008 R2.  The issue turned out to be that the virtual networking was basically hosed, and for some reason it was trying to use the wrong physical NIC for communication.

    I cleared my virtual network configuration and recreated it (ensuring I had a dedicated NIC assigned for all VM traffic), which seemed to fix the issue at the time.  But prior to that it was acting the same as what you are seeing: terribly long login times and painfully slow file copies.

    Monday, February 03, 2014 5:03 PM
  • Some interesting points to consider, so thanks to all for the feedback. On paper it does look like it could be a drive issue when comparing the old drives, but that doesn't explain why a brand new VM (running on the same RAID 10 array with 6 VMs already on it) performs very well until I join it to the domain. That seems to rule out any drive I/O issue, I would have thought?

    I am sure I found the performance was quite bad when I first setup the RAID 10 drive and started importing the first couple of VMs onto it (before I had all 6 loaded). So again, I don't think it's that, but I can try again after hours to be sure.

    The network configuration on the host is as follows:

    Broadcom 5720 quad port 1GB network daughter card - currently has 2 ports connected to the LAN

    I have one virtual switch shared with the OS, and the other dedicated to the VMs only. Below is the config. Does that look correct? The virtual switch 1 shared with the host OS uses TCP/IP v4 and v6 and is auto for DHCP/DNS. Same as the old server.

    Not based in the UK Mark. Way over in Australia!


    • Edited by emssol Monday, February 03, 2014 11:57 PM
    Monday, February 03, 2014 11:55 PM
  • A couple more bits of information in case it helps. The brand new VM I created yesterday (Windows 7 Enterprise SP1 x64) still seems to perform quite well when accessed via Hyper-V console, even though it has been joined to the domain. If I access it via RDP, it's a lot worse.  The logon/off times are slower and copying a file from the network can only manage a best of about 3MB/second (starts at 200Kb/sec and works its way up to 3MB). If I use the Hyper-V console to connect to that same VM, the logon/off is almost instant and the copying of the same file from the network shows 10MB/sec.

    I tried removing one of the old VMs I'd imported from the old server from the domain to see if it improved performance. It made no difference and it performed quite badly - even from the Hyper-V console: slow logon/off times and slow copying of network files...

    One thing I have noticed is that the VMs which are performing worst seem to be running the older-generation OSes, e.g. Windows XP and Windows Server 2003. Anything running Windows 7 or 2008 R2 etc. appears to be that bit faster and perhaps more acceptable.

    I have no idea why this would be, but I noticed that in the LAN connection settings on the older VM running Windows Server 2003 Enterprise (SP2) there was a 'Virtual Machine Network Services' item. The driver version number is 2.4.0.24 and dates back to 2003? I am not sure what this service does, but I thought it was a bit strange that it did not appear in my new Windows 7 VM...



    • Edited by emssol Tuesday, February 04, 2014 5:29 AM
    Tuesday, February 04, 2014 5:27 AM
  • Try disabling the individual TCP task offloading options of the NIC.
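
    For example (a sketch only - the adapter name is a placeholder, and which offload properties exist depends on the driver):

        # See which offloads the driver exposes, then disable the checksum/LSO offloads to test
        Get-NetAdapterAdvancedProperty -Name "NIC1" | Format-Table DisplayName, DisplayValue
        Disable-NetAdapterChecksumOffload -Name "NIC1"
        Disable-NetAdapterLso -Name "NIC1"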

    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Tuesday, February 04, 2014 6:05 AM
    Moderator
  • Hi,

    Have you tried copying data (a heavy file or folder) within the same drive itself? If yes, what is the speed you are getting?

    If not, try copying some files or a folder within the same drive and check the data transfer speed. Also try to get the console of the VM at the same time while copying and check the VM's performance as well.

    I still think it is related to disk drive IOPS.

    Regards,

    Sunil Tumma

    Tuesday, February 04, 2014 6:12 AM
  • Hi, try forcing the guest OS to re-detect the HAL; this will force it to update any drivers from the latest integration components.

    http://social.technet.microsoft.com/Forums/windowsserver/en-US/907aec80-c0ec-49c3-929b-359686211af7/intergrations-not-upgrading-hal-on-windows-xp-sp3

    It could be that the HAL isn't correct for the host OS, although you have come from 2012.

    Another question - what was the reason for creating a V-NIC shared with the host OS?

    You shouldn't need to do that if you have a dedicated NIC for the VMs.

    Australia - I shouldn't mention the Ashes LOL!

    Cheers

    Mark

    Tuesday, February 04, 2014 2:06 PM
  • Try disabling the individual TCP task offloading options of the NIC.

    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

    Thanks Brian. I tried this via the registry on the host server, as well as in the registry of one of the VMs, and also switched off the settings below (VMQ and task offloading), but still no effect. I'm not sure if I need to go back and undo the registry settings on the host server in case they cause problems down the track?

    Wednesday, February 05, 2014 6:10 AM
  • Hi,

    Have you tried copying data (a heavy file or folder) within the same drive itself? If yes, what is the speed you are getting?

    If not, try copying some files or a folder within the same drive and check the data transfer speed. Also try to get the console of the VM at the same time while copying and check the VM's performance as well.

    I still think it is related to disk drive IOPS.

    Regards,

    Sunil Tumma

    Yes, I tried copying a 1.5GB file (MEMORY.DMP) from C:\WINDOWS to C:\Temp when logged into the VM via RDP, and the file copied in about 10 seconds. Very fast, so it sounds like the disk I/O is OK?
    Wednesday, February 05, 2014 6:14 AM
  • Hi, try forcing the guest OS to re-detect the HAL; this will force it to update any drivers from the latest integration components.

    http://social.technet.microsoft.com/Forums/windowsserver/en-US/907aec80-c0ec-49c3-929b-359686211af7/intergrations-not-upgrading-hal-on-windows-xp-sp3

    It could be that the HAL isn't correct for the host OS, although you have come from 2012.

    Another question - what was the reason for creating a V-NIC shared with the host OS?

    You shouldn't need to do that if you have a dedicated NIC for the VMs.

    Australia - I shouldn't mention the Ashes LOL!

    Cheers

    Mark

    I got a bit lost with this, sorry Mark. I've never used sysprep before and got a bit confused about what I needed to do to force the OS to re-detect the HAL. It rebooted the machine at one point, went into a Windows installer sort of mode, and then came back before I could do anything. I'm just a manager with a bit of technical background from my old days of doing network admin, unfortunately - one of the problems of being a small business with limited resources!

    Haha, yes the Ashes was a bit of a rough visit for you guys. I think we're in for a bit of a different story in South Africa!

    We are an MS Partner, so I've also contacted Microsoft Partner technical support and have had some initial contact. Perhaps they will be able to see what is happening here. It's really bizarre...

    Wednesday, February 05, 2014 6:20 AM
  • OK, I think I've finally found it!!! It turns out the NICs in these servers come with a setting called "Virtual Machine Queues" which defaults to enabled. I disabled it on the 2 NICs which I have in use and BOOM! Performance is back to where it should be :-)

    Looks like this is a fairly common issue. This is the article that led me to the solution:

    http://social.technet.microsoft.com/Forums/en-US/b7e50892-2ff2-4410-a545-c43584f30674/poor-network-when-create-virtual-switch-in-hyperv-2012?forum=winserverhyperv

    Mark - regarding your comments about creating the V-NIC shared with the host OS: I am not sure... I just copied the config from the old server, but I will look at changing this too if it's not optimal. I thought I needed to share at least one of the V-NICs with the OS in order to allow the OS to connect to the network?

    Thanks to everyone for their advice and persistence. Much appreciated.

    • Marked as answer by emssol Wednesday, February 05, 2014 9:46 AM
    Wednesday, February 05, 2014 9:46 AM