ServiceManager DB growing super fast

  • Question

  • So I haven't been on top of the data retention, partly by preference and partly from not appreciating how big a SCSM ServiceManager DB can get.

    It blew up to close to 700GB, so I decided it was time to deal with it: I lowered the data retention settings and closed a few thousand Service Requests and Incidents in bulk. I set the data retention to 0 days, and as expected all those Requests can't be found in the console now.

    I backed up the ServiceManager DB and noticed it had grown by almost 100GB. I didn't think much of it, and after the backup I went to shrink the DB file, expecting to get it down to maybe 500-600GB at this initial stage. The plan was to close/delete/groom a few thousand requests at a time and shrink the DB file as I went along.

    Problem is, I could only shrink the DB to around 730GB, over 30GB bigger than it was before I began this exercise, and very shortly the DB kind of "unshrunk" itself, growing back to the size it was before with roughly the same amount of available space to shrink, minus maybe 5GB. I tried a SQL maintenance plan with all the steps except the backup and it didn't seem to help one bit.

    Now somehow the DB was up well over 1000GB and I could only shrink it down to around 980GB. 

    Am I wrong to expect that grooming Requests should free up space and allow me to shrink the DB file?
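    For anyone wanting to sanity-check the same thing: a query along these lines (a generic SQL Server sketch, nothing SCSM-specific; the database name is assumed) shows how much of each file is actually free and therefore reclaimable by a shrink:

        -- Per-file size and free space for the ServiceManager database
        USE ServiceManager;
        SELECT name,
               size / 128 AS size_mb,
               size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
        FROM sys.database_files;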

    Monday, February 22, 2016 10:34 PM

All replies

  • Hi,

    If you are having issues with your workflows running very frequently and filling up the databases, take a look at this blog post:
    http://blogs.technet.com/b/servicemanager/archive/2010/12/07/more-aggressively-grooming-the-jobstatus-and-windowsworkflowtaskjobstatus-tables.aspx


    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.

    Tuesday, February 23, 2016 8:39 AM
    Moderator
  • To figure out which table is consuming the most disk space you can use SQL Server Management Studio (SSMS):

    • Open SSMS
    • Select the "ServiceManager" DB
    • Right-click and select "Reports" - "Standard Reports" - "Disk Usage by Table"
    • Sort the report by the "Data (KB)" column

    My guess would be the table "dbo.JobStatus". But better safe than sorry ;-)

    If it's the JobStatus table, you can go ahead with the blog post Xin Guo mentioned.
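    If you prefer a query over the report, something along these lines gives the same picture (a sketch using the standard SQL Server DMVs, not an SCSM-specific tool):

        -- Reserved space per table, largest first
        SELECT s.name + '.' + t.name AS table_name,
               SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
               SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
        FROM sys.dm_db_partition_stats AS ps
        JOIN sys.tables AS t ON t.object_id = ps.object_id
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        GROUP BY s.name, t.name
        ORDER BY reserved_mb DESC;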

    Hope this helps.


    Andreas Baumgarten | H&D International Group

    Tuesday, February 23, 2016 9:12 AM
    Moderator
  • I believe I had a similar issue at one point: the log file itself was hoarding all the free space on the drive. I had to use Microsoft SQL Server Management Studio > Connect with Database Engine to my DataWarehouse SQL Server > right-click each database and go to Properties...

    1. Under the Options section, set the Recovery model to Simple and hit OK.
    2. Re-open Properties and, under the Files section, change the SM_LOG Initial Size to 0, then optionally set the Max Size to a desired limit (like 10240 MB) to prevent this in the future.

    Note: It won't let you set the Recovery model to Simple if your Data Warehouse databases are using AlwaysOn.
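    The same change in T-SQL would look roughly like this (a sketch; the database name and the logical log file name SM_LOG are taken from the steps above, so check sys.database_files for the actual logical name first):

        -- Switch to simple recovery, then shrink the log file
        ALTER DATABASE ServiceManager SET RECOVERY SIMPLE;
        USE ServiceManager;
        DBCC SHRINKFILE (SM_LOG, 10240);   -- target size in MB (roughly 10 GB)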

    Tuesday, February 23, 2016 7:49 PM
  • Thanks Xin and Andreas!

    So dbo.JobStatus is relatively big compared to most other tables, and I updated the default settings from 7 and 60 to 3 and 15 as the article suggests.

    But the problem isn't dbo.JobStatus, which is actually only ~1GB. The problem is dbo.WindowsWorkflowTaskJobStatus, which is 890GB.

    Will those PartitionAndGroomingSettings apply to this table as well?

    C Sharp Conner, thanks, but my DBs and logs don't live on the same disks. Either way, my log file is not even 10GB; my .MDF file is 900GB.
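    For reference, the change described in the blog post boils down to an update of dbo.PartitionAndGroomingSettings, roughly like this (a sketch; the column names and the exact retention value per table should be double-checked against the blog post before running anything):

        -- Lower the retention (DaysToKeep) for the two job status tables
        UPDATE dbo.PartitionAndGroomingSettings
        SET DaysToKeep = 3                                    -- new retention in days
        WHERE ObjectName = 'JobStatus';

        UPDATE dbo.PartitionAndGroomingSettings
        SET DaysToKeep = 15                                   -- new retention in days
        WHERE ObjectName = 'WindowsWorkflowTaskJobStatus';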

    Tuesday, February 23, 2016 11:01 PM
  • The blog post Xin Guo mentioned before states that both tables (dbo.JobStatus and dbo.WindowsWorkflowTaskJobStatus) will be groomed.

    Quote: "After making the change you’ll need to wait overnight for the next scheduled grooming job to run before you see a change in the size of the tables."

    Hope this helps.


    Andreas Baumgarten | H&D International Group

    Wednesday, February 24, 2016 7:01 AM
    Moderator
  • Thanks, sorry I missed that in the blog.

    However, it doesn't seem to have worked. I can still see rows from 7 days ago in the dbo.WindowsWorkflowTaskJobStatus table, and the available space to shrink is only about 120GB out of the 890GB. Not what I expected from cutting the history by more than half, from 7 to 3 days.

    Is there any way to manually force the grooming?
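    One way to see what the grooming job is actually configured to do (and which stored procedure it runs) is to read the settings table directly. This is a sketch, and the GroomingSproc column name is an assumption based on the SCOM/SCSM schema, so verify it exists before relying on it:

        -- Current retention and grooming procedure for the two job status tables
        SELECT ObjectName, DaysToKeep, GroomingSproc
        FROM dbo.PartitionAndGroomingSettings
        WHERE ObjectName IN ('JobStatus', 'WindowsWorkflowTaskJobStatus');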

    Edit: adding an image (screenshot not included).

    • Edited by Trana010 Wednesday, February 24, 2016 11:45 PM
    Wednesday, February 24, 2016 11:42 PM
  • Are the two tables still that large, or are they smaller now after 2 days?

    Andreas Baumgarten | H&D International Group

    Friday, February 26, 2016 10:07 PM
    Moderator
  • Hi Andreas, thanks for checking.

    The size on disk is down to 680GB, with over 300GB of available space to shrink. I'm still able to select rows from dbo.WindowsWorkflowTaskJobStatus that are older than 7 days, so I'm not sure what's going on there, but the main point is the DB is much smaller. So all good. Thanks again for your assistance!
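    For anyone following along: once grooming has freed space inside the file, the data file itself can be shrunk with something along these lines (a sketch; SM_DATA is a placeholder for the actual logical data file name, which sys.database_files will show):

        -- Shrink the ServiceManager data file, leaving some headroom
        USE ServiceManager;
        DBCC SHRINKFILE (SM_DATA, 409600);   -- target size in MB (roughly 400 GB)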

    Monday, February 29, 2016 2:18 AM