locked
Renaming the content database server

  • Question

  • Platform: MOSS 2007

    So, when my predecessor set up this 2-machine farm (1 machine SharePoint, 1 machine SQL Server), he used some sort of alias for the machine. That is, Central Admin reports the name of the SQL server as PRODSQL. This is not the name of any server here. Whatever kind of name this is, it somehow communicates both the machine name and the instance.

    The farm is running just fine - updating the appropriate machine and instance.

    However, an analysis tool used by Microsoft is reporting a problem accessing the SQL data, and we assume this is because it is trying to use the literal name rather than the symbolic name.

    Is it safe to just put the real machine name and instance in central admin and click ok? This won't trigger a reset or anything, will it?

    Friday, September 21, 2012 3:06 PM

Answers

  • This alias is probably set up as a SQL alias on the Web Front End (WFE) machine.  Look in All Programs -> Microsoft SQL Server -> Configuration Tools -> SQL Server Configuration Manager.  Under SQL Native Client Configuration, Aliases, you will see where the alias you see in Central Admin is mapped to a physical server.  This is a good, and recommended, configuration.  If your SQL Server dies you can restore from backups on another machine and then point your alias to the new machine name.  This way you can service the old machine without having to change its name.

    I wouldn't recommend changing this.  Consider using the alias in the MS analysis tool.  What is the analysis tool?
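    For reference, SQL client aliases of this kind are also stored in the registry, so you can inspect them from a command prompt on the WFE without opening any GUI tool. A quick sketch (the alias name such as PRODSQL shows up as a value name; the hive paths below are the standard SQL client alias locations):

```shell
:: List the SQL client aliases defined on this machine (64-bit hive).
reg query "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"

:: 32-bit applications read their aliases from the Wow6432Node hive instead.
reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo"
```

    Each value maps an alias name to a protocol plus the real server, e.g. a TCP/IP alias looks like DBMSSOCN,realserver\instance.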

    • Marked as answer by lwvirden Monday, September 24, 2012 5:22 PM
    Friday, September 21, 2012 4:50 PM

All replies

  • It is a scoping tool used as part of risk assessment.

    I liked the idea - I am just trying to figure out how to communicate this to microsoft. You have given me a buzzword to use that might make things easier.

    Thank you.

    I went into All Programs > Microsoft SQL Server and there wasn't anything called SQL Server Configuration Manager. There was a link to configure XML in IIS, but that is definitely not the right thing.

    However, I found it in the Client Network Utility under the Alias tab.

    This may help resolve things.

    Thank you so much!

    Friday, September 21, 2012 4:58 PM
  • The path to SQL Server Configuration Manager might be slightly different depending on the version of SQL Server you are using.  Most likely the WFE does not have a full SQL installation on it, but rather some small client components that let the WFE communicate with the actual SQL server (via the alias in this case).  But if it does have a full installation then you will see a lot of SQL tooling, like SQL Server Management Studio.  SQL Server Configuration Manager is in a sub-folder called Configuration Tools (in SQL Server 2008 anyway).

    I'm a bit curious about the scoping tool.  Is the tool for SharePoint upgrade analysis? Is it for SharePoint db size considerations?  Is the tool free?  We are looking to upgrade at some point, so it might be something I need to consider too.

    Thanks,

    Friday, September 21, 2012 7:26 PM
  • It turns out that once I let Microsoft know what the issue was, they pointed me to cliconfg or something like that, and told me that I would need to create both the 64-bit and 32-bit aliases for their tool.
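    For anyone hitting the same thing: on a 64-bit server there really are two separate alias stores, which is presumably why both were needed. A sketch of creating the same alias in each (PRODSQL and realserver\instance are placeholders for this farm's actual alias and server; DBMSSOCN selects the TCP/IP protocol):

```shell
:: 64-bit alias: what the native cliconfg.exe (%windir%\System32\cliconfg.exe)
:: edits on its Alias tab; the same value can be written directly.
reg add "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" ^
    /v PRODSQL /t REG_SZ /d "DBMSSOCN,realserver\instance" /f

:: 32-bit alias: same value under the Wow6432Node hive, which is what the
:: 32-bit %windir%\SysWOW64\cliconfg.exe edits.
reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo" ^
    /v PRODSQL /t REG_SZ /d "DBMSSOCN,realserver\instance" /f
```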

    The tool is not free... well, you don't buy the tool, you buy, in some form, a risk analysis by Microsoft. Premium support comes with at least one per year, I think. The tool's results are used by Microsoft as they analyze your SharePoint site to indicate where implementing best practices might improve your farm.

    I don't know yet how they will use it - what kinds of things are reported back. I am hoping they will be able to make some suggestions regarding a degrading performance issue we have with our farm.

    Friday, September 21, 2012 10:35 PM
  • We had degrading performance for a while.  I later got it fixed by doing better backups.  The databases were set to full recovery but we were not taking log file backups.  I then set the databases to simple recovery, and after the next full backup the log files got smaller and performance was great.  However, I later changed back to full recovery and started doing weekly full backups, daily differentials, and hourly log file backups.  This made it to where I could keep three months of backups instead of one week.
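    A schedule like that can be scripted with sqlcmd and run from scheduled tasks or SQL Agent jobs. A sketch, where the server alias, database name WSS_Content, and backup paths are placeholders (run the first weekly, the second daily, the third hourly):

```shell
:: Weekly full backup; INIT overwrites the previous backup set in the file.
sqlcmd -S PRODSQL -E -Q "BACKUP DATABASE [WSS_Content] TO DISK = 'D:\Backups\WSS_Content_full.bak' WITH INIT"

:: Daily differential (everything changed since the last full backup).
sqlcmd -S PRODSQL -E -Q "BACKUP DATABASE [WSS_Content] TO DISK = 'D:\Backups\WSS_Content_diff.bak' WITH DIFFERENTIAL, INIT"

:: Hourly log backup; under full recovery this is what keeps the log file from growing.
sqlcmd -S PRODSQL -E -Q "BACKUP LOG [WSS_Content] TO DISK = 'D:\Backups\WSS_Content_log.trn'"
```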

    Our content database is over 120 GB and performance is great.  That is just the content database; I got that figure by right-clicking the content db in SQL -> Properties and looking at the size.

    Are your db's bigger or smaller?


    • Edited by Eric Sammann Friday, September 21, 2012 11:06 PM
    Friday, September 21, 2012 11:05 PM
  • We have 2 content databases. One is smaller and one is about 200 GB. We have 4 site collections on it that are over 20 GB - 1 over 100 GB.

    The SQL server admins do regular backups (I am not certain of the details, though now that you have described a bit about yours I want to ask them about it). We also run daily full stsadm -o backup backups of each site collection.
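    For anyone following along, a full site collection backup of that kind looks roughly like this (the URL and filename are placeholders, not our real site):

```shell
:: Full backup of a single site collection; -overwrite replaces any existing file.
stsadm -o backup -url http://portal/sites/bigsite -filename D:\Backups\bigsite.bak -overwrite
```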

    The site collection backups are taking over 7 hrs to complete.

    And because of the huge size of the site collections it will take a very long time to restore stuff to our recovery farm.

    We've proposed getting one of the specialized backup third party tools that would allow a granular recovery with hopes that will speed along recoveries.

    Saturday, September 22, 2012 9:18 AM
  • I work for a company that has 4 locations around the world.  Some of the other locations use a 3rd party granular backup solution and I really wish I could convince them not to.  The third party tool basically uses an stsadm -o export when doing the backup.  The granular backup process is extremely slow, and it is really bad about making the SharePoint sites unusable while it is running.

    I don't have as much experience using stsadm backup.  We used it at one time and stopped because it would often fail.  However, we didn't use it long enough for me to see if it was making SharePoint slow.  I assume it does as well.

    I found a link that seems to indicate that stsadm backup is not as fast.  The 'Applies To' is not showing up correctly for me, but I think it might be written for SP 2003, because SP 2007 has a recycle bin built in (MOSS anyway).
    http://office.microsoft.com/en-us/windows-sharepoint-services-it/backing-up-and-restoring-web-sites-HA001160826.aspx?CTT=5&origin=HA001160827

    The article above actually says: "If you are using SQL Server 2000 or SQL Server 2005 as your database, using the stsadm.exe utility as the primary backup and restore solution for Windows SharePoint Services is not recommended. Backing up sites exclusively with stsadm.exe can cause locking issues that prevent users from accessing their SharePoint sites."

    Also, the stsadm backup page seems to suggest the same, and mentions not to use it for db's over 15GB.
    http://technet.microsoft.com/en-us/library/cc263441(v=office.12).aspx

    Considering those two posts, I assume stsadm backup is similar to stsadm export and that it is a performance hog.  My experience with third party tools is that they use stsadm and their performance is not good.

    Since it sounds like your problem is performance degradation, my suggestion would be to analyze the SQL Server backups and see if they are frequent enough.  Something like weekly or monthly full backups, daily differentials, and log file backups every hour, half-hour, or 15 minutes.  The upside is much better SP performance, and it will get better backup coverage than stsadm backup.  The downside might be the time it takes to restore if you need to restore only a single site.  For restoring you need a test system that you can restore the full backups to.  Once restored, you do an stsadm export on the test system, so it is less of a system impact and you are doing it only for the subsite involved.
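    That restore-then-export workflow, run on the test system, might look roughly like this (server, database, and URL names are all placeholders):

```shell
:: 1. Restore the production content database onto the test SQL instance.
sqlcmd -S TESTSQL -E -Q "RESTORE DATABASE [WSS_Content] FROM DISK = 'D:\Backups\WSS_Content_full.bak' WITH REPLACE"

:: 2. Export just the subsite you need from the test farm...
stsadm -o export -url http://testfarm/sites/teamA/subsite -filename D:\Restore\subsite.cmp

:: 3. ...and import it into the production farm.
stsadm -o import -url http://portal/sites/teamA/subsite -filename D:\Restore\subsite.cmp
```

    (The restored database still has to be attached to a test web application first, e.g. with stsadm -o addcontentdb, before the export will see it.)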

    Remember any lists or list items that get deleted go into the recycle bin, so hopefully you only need this restore process if someone deletes their site.  People can delete their site only if they have full control.  If you want to keep people from deleting their site then you can also try creating your own permission level.  This is easy to do.  I have one called 'Full Control - Almost' for this reason.  At the top level of the site collection go to Site Actions -> Site Settings -> Site Permissions (left nav) -> Settings (list content area) -> Permission Levels.  When creating your own, start with Full Control, then take away:
    Manage Web Site - Grants the ability to perform all administration tasks for the Web site as well as manage content.  The downside is that it takes away the ability to review site usage reports, I think (it's been a while).

    Monday, September 24, 2012 5:12 PM
  • Thanks! It has been our experience that, while the stsadm backup is running, people experience varying slowdowns, from annoying to what can only be thought of as outages.

    Our problem is that we are having tremendous slow downs when the backups are not running.

    I truly believe that it is the content database size as well as the site collection size. We are trying to work out a way to split the largest site collections into their own content databases. That should provide a little relief for the smaller site collection users, and even the users of the too-large site collection will get some relief since they will no longer be contending for the same resources.

    The database admins are looking at a variety of possible directions when they upgrade their server in the future, each of which has the possibility of improving things.

    Unfortunately, for the here and now, the users just continue seeing inconsistent performance. Some see only occasional bad performance. I have one person who tells me that performance of his site is always slow and has been daily for months.

    Until we get the logistics in place to start moving some of the improvements in, things will continue to be an issue.

    Thank you again, so much!

    Monday, September 24, 2012 5:22 PM
  • Glad to help!

    I'll suggest one more thing too.  If performance gets bad even when backups are not running, then the crawl might be involved.  The crawl doesn't usually affect our performance even though full crawls can run for hours.  However, I did have problems in the past if a backup was running, or started running, while a crawl was in progress.  I'm guessing the stsadm commands could affect the crawl even more so.

    Also, at one point in the past we had major performance problems.  It turned out that wikis were the only thing affected, and if we stopped SQL and then restarted it, performance would be good again, but sometimes it took doing this 3 times.  This was really strange, and the fix was really strange.  It took me months to figure out and I just happened across it one day.

    There is a tool called SharePoint Manager. http://spm.codeplex.com/releases/view/22762  One day while poking around in SPM I found that there were some application pools for sites that we no longer had.  I had deleted the sites from SharePoint and from IIS, but there they were in SPM.  I restored db's to a test system and tried deleting the extra app pools using SPM.  Then everything started working great again, including our wikis.

    I have absolutely no idea why SP was remembering those app pools, or why that would make any difference to SP, or why wikis were the only thing affected.  I have never heard this from anyone else either, so the chances of this helping may be slim.  Still, it is worth a look.

    Our SQL server has 12 GB of RAM; our sister site with performance problems has 16.  The db's are very close in size; theirs are maybe 20 GB bigger than ours.  Just FYI.

    -Eric

    Monday, September 24, 2012 5:47 PM
  • Interesting.

    First, my predecessor used to have incremental crawls every hour and a full crawl once a day. We changed the schedule to do incrementals at noon and 6pm (the least heavily used periods) with a full crawl on Saturday morning. From a Task Manager point of view, that certainly reduced CPU usage.

    Many of the users still report big issues.

    I just recently found that our prod farm has several app pools which do not appear, to me, to have a site associated with them. I have SharePoint Manager, but was not certain how to use it.

    Our SQL server is 5+ years old and we share it with a number of other applications. They are planning an upgrade to a newer (not newest) version of sql server and at that time we are supposed to get our own server, with a number of tuning enhancements to the system. I am hopeful that will make a big difference.

    I am so glad to have been able to have this conversation with you. Most of the web forums I have used tend to not answer the questions asked, or answer them tersely. This conversation has given me many useful tips and I really appreciate your help.

    Monday, September 24, 2012 5:54 PM
  • No problem, glad to help.  If you try deleting those extra app pools I would be interested to hear back if it helps.  I thought about blogging on that but haven't gotten around to it.

    To use SPM I think you need to run it as an administrator (Server 2008, 2008R2), and on one of the SharePoint WFE servers.  Right click the extra app pool (SharePoint Config -> Content Service -> ApplicationPools (I think, it's been a while again)), then click 'Delete'.  Finish by clicking save in the tool bar area.

    You are right about forums though.  Sometimes not a lot of help.  These forums come complete with a moderator (Mike W.) that loves to delete posts just because he doesn't like them.  I posted once about a problem in SP3 and he just deleted the whole post :/.

    Monday, September 24, 2012 6:24 PM
  • I am probably going to wait for Microsoft's analysis of our farm before I do much more. I did turn off at least one of the pools because it was associated with one of those "toy" applications that vendors sometimes make available for "free" but that do not help a lot. I had stopped running it because it would say "oh, there's a problem on your server" but not really identify what kind of problem it was or what to do to fix things. I could tell my machine had a problem - it didn't take a lot of skill to tell that when users were calling with complaints that there was a problem.

    The tool I would love to find is one that would look at IIS and SharePoint 12 hive log files and then make recommendations on potential solutions to the issues that are found there.

    Tuesday, September 25, 2012 11:12 AM