1. Disk I/O to read the data and write it to the .abf file. Ideally you should write the backup to a different drive: this splits the I/O load and means that you don't lose your backups in the event of a drive failure. So you need to monitor your I/O subsystem to make sure it can cope with the additional load. SSAS does a pretty good job with its caching, but the I/O load also depends on the size of your database and on certain feature usage (like distinct counts), so you need to measure this in your environment.
2. There will be some CPU load for the compression of the data. Again, you need to monitor this. Typically I don't do backups during business hours, and I don't expect the load to be significant. However, if your system is already CPU constrained it might be noticeable.
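If it helps, a full backup can be scripted as an XMLA Backup command and pointed at a file on a separate drive. The database ID and path below are just placeholders for your environment:

```xml
<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <!-- placeholder: replace with your database ID -->
    <DatabaseID>AdventureWorks</DatabaseID>
  </Object>
  <!-- placeholder path: point this at a drive other than the data drive -->
  <File>E:\SSASBackups\AdventureWorks.abf</File>
  <AllowOverwrite>true</AllowOverwrite>
</Backup>
```

You can run this from a SQL Agent job (SQL Server Analysis Services Command step) to schedule it outside business hours.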
3. The biggest issue could be if you are also processing during the day. Then you could run into locking issues, where the processing commit operation needs an exclusive write lock while the backup is holding a read lock. This could result in either the backup or the processing operation rolling back, depending on your configuration of the CommitTimeout and ForceCommitTimeout settings. The default configuration is to cancel anything holding a read lock after 30 seconds, to allow the processing operation to commit.
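For reference, both settings are server properties (editable in msmdsrv.ini or via the server properties dialog in SSMS); values are in milliseconds, and the 30-second behaviour described above corresponds to the default ForceCommitTimeout. This is a sketch of the relevant fragment, assuming the documented defaults:

```xml
<!-- msmdsrv.ini fragment; shown values are the documented defaults -->
<CommitTimeout>0</CommitTimeout>
<ForceCommitTimeout>30000</ForceCommitTimeout>
```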
http://darren.gosbell.com - please mark correct answers
Marked as answer by db042188, Friday, June 15, 2012 12:27 PM