Script to Loop through each DB

  • Question

  • Hello,

    I'm not very good with scripting loops.

    I want to loop through each database and check the LogFile Size and Free Space on Log File.

    If it meets these conditions I want to truncate.

    IF @logsize >= 20000 -- AND @logspace >= 80
        SET @SQL = 'USE ' + [Database_Name] + '; DBCC SHRINKFILE ([LogFileName], 20000, TRUNCATEONLY) WITH NO_INFOMSGS'
        EXEC (@SQL)
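
    A hedged sketch of one way to complete a fragment like this: a cursor over sys.databases joined to sys.master_files (sizes there are in 8 KB pages, so size/128.0 gives MB). The 20000 MB threshold is taken from the fragment; everything else is an assumption, not a tested production script:

    ```sql
    -- Sketch only: iterate user databases and shrink any log file at or over ~20 GB.
    -- Sizes in sys.master_files are stored in 8 KB pages; size/128.0 converts to MB.
    DECLARE @db sysname, @logname sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT d.name, mf.name
        FROM sys.databases d
        JOIN sys.master_files mf ON mf.database_id = d.database_id
        WHERE mf.type_desc = 'LOG'
          AND d.database_id > 4            -- skip system databases
          AND mf.size / 128.0 >= 20000;    -- log file is 20 GB or larger

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db, @logname;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'USE ' + QUOTENAME(@db) + N'; ' +
                   N'DBCC SHRINKFILE (' + QUOTENAME(@logname) + N', 20000) WITH NO_INFOMSGS;';
        EXEC (@sql);
        FETCH NEXT FROM dbs INTO @db, @logname;
    END
    CLOSE dbs;
    DEALLOCATE dbs;
    ```

    QUOTENAME on both the database and logical file name guards against unusual names; test on a non-production instance first.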

    Please help

    Friday, July 10, 2020 3:06 PM

Answers

  • Hi Veena,

    If you run the log backup frequently, the log file will not fill up unless you have a long-running open transaction. An open/active transaction will also prevent any shrink operation.

    For any database in the Full or Bulk-Logged recovery model, it is advisable to run a log backup every hour, or more often depending on your database usage. This will solve your problem.
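
    For reference, a minimal log backup statement of the kind this advice describes; the database name and backup path here are placeholders, not values from the thread:

    ```sql
    -- Hypothetical database name and path; adjust to your environment.
    -- Only meaningful for databases in the Full or Bulk-Logged recovery model.
    BACKUP LOG [SalesDB]
    TO DISK = N'D:\Backups\SalesDB_log.trn'
    WITH COMPRESSION, CHECKSUM;
    ```

    Scheduled frequently (e.g. via a SQL Server Agent job), this keeps the log truncated so it does not need to grow in the first place.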

    Tuesday, July 14, 2020 6:59 PM

All replies

  • Hi Veena,

    Can you look at this blog post for the general idea of how to loop through databases:

    https://blogs.lessthandot.com/index.php/datamgmt/datadesign/how-to-get-information-about-all-databas/

    You can apply ideas from that blog article for your problem easily.


    For every expert, there is an equal and opposite expert. - Becker's Law


    My blog


    My TechNet articles

    Friday, July 10, 2020 3:22 PM
    Moderator
  • The issue I face is :

    We have about 50 databases on the SQL Instance, all in Simple Recovery model.

    However, from time to time there is a lot of activity in the databases, and the log files suddenly balloon and fill up the log drive.

    I thought I could run a script, maybe twice a day, to check the log file size for all databases and, if they meet a certain criteria, release space.

    Can anyone help with this please?

    IF @logsize >= 20000 -- AND @logspace >= 80
        SET @SQL = 'USE ' + [Database_Name] + '; DBCC SHRINKFILE ([LogFileName], 20000, TRUNCATEONLY) WITH NO_INFOMSGS'
        EXEC (@SQL)

    Friday, July 10, 2020 5:43 PM
  • Try this as a starting point for your investigation:

    drop table if exists #FileSizes;
    create  table #FileSizes (DBName sysname, [File Name] varchar(max), [Physical Name] varchar(max),
    Size decimal(12,2))
    declare @SQL nvarchar(max)
    set @SQL = ''
    select @SQL = (select 'USE ' + QUOTENAME(name) + '
    insert into #FileSizes
    select ' + QUOTENAME(name,'''') + ', Name, Physical_Name, size/1024.0 from sys.database_files ' 
    from sys.databases d
    where d.database_id > 4
    order by d.[name] for xml path(''), type).value('.', 'nvarchar(max)')
    
    execute (@SQL)
    declare @logSize decimal(12,2) = 2000;
    select * from #FileSizes 
    where size > @logSize
    order by DBName, [File Name]
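
    Building on that diagnostic, a hedged sketch that turns the collected rows into shrink commands. The name filter and both size values are assumptions (the #FileSizes table does not record file type, so this guesses log files by name); it prints the commands rather than executing them:

    ```sql
    -- Sketch: generate (not execute) shrink commands for each oversized log
    -- found by the #FileSizes query above. Review the output before running it.
    DECLARE @cmds nvarchar(max) = N'';
    SELECT @cmds = @cmds
        + N'USE ' + QUOTENAME(DBName) + N'; '
        + N'DBCC SHRINKFILE (' + QUOTENAME([File Name]) + N', 2000) WITH NO_INFOMSGS;'
        + CHAR(13) + CHAR(10)
    FROM #FileSizes
    WHERE [File Name] LIKE '%log%'   -- assumption: log files are named with "log"
      AND Size > 2000;               -- same 2000 MB threshold as the query above

    PRINT @cmds;   -- inspect first; EXEC (@cmds) to actually run
    ```

    A more robust version would add a file-type column (sys.database_files.type_desc) to #FileSizes instead of matching on the name.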



    Friday, July 10, 2020 6:41 PM
    Moderator
  • The idea of doing something like this is dubious, and if this is a production system, just don't do it. What shrinks will grow again, and growing a log takes resources.

    Then again, if this is a server with various dev/test databases, this can be a legitimate thing to do, as someone may run a large operation or leave a transaction open. In that environment, you can be more brutal.

    Take a look at this section on my article on dynamic SQL for how to do this:
    http://www.sommarskog.se/dynamic_sql.html#alldatabases
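
    For the percent-used side of the original check (the @logspace variable in the question), one option is DBCC SQLPERF(LOGSPACE), which reports log size and used percentage for every database in a single call. A sketch capturing its output, with the thresholds taken from the question:

    ```sql
    -- Capture log size (MB) and percent used for all databases at once.
    -- Column names match the DBCC SQLPERF(LOGSPACE) result set.
    CREATE TABLE #LogSpace (
        [Database Name]      sysname,
        [Log Size (MB)]      float,
        [Log Space Used (%)] float,
        [Status]             int
    );
    INSERT INTO #LogSpace
    EXEC ('DBCC SQLPERF(LOGSPACE)');

    SELECT * FROM #LogSpace
    WHERE [Log Size (MB)] >= 20000
      AND [Log Space Used (%)] >= 80;   -- thresholds from the original question
    DROP TABLE #LogSpace;
    ```

    The rows returned are the candidates a looping script would act on.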


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Friday, July 10, 2020 7:48 PM
  • Under the simple recovery model, log truncation occurs automatically after a checkpoint, so the log should not normally run out of space unless truncation is delayed for some reason.

    If you are not sure how to write the loop code, you can schedule a job to achieve that.
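
    Along those lines, a hedged sketch of scheduling the check as a SQL Server Agent job that runs twice a day; the job name, schedule, and the stored procedure invoked in the step are placeholders, with the command body being whatever check/shrink script you settle on:

    ```sql
    USE msdb;
    -- Hypothetical job name and procedure; substitute your own script.
    EXEC dbo.sp_add_job
        @job_name = N'Check log file sizes';
    EXEC dbo.sp_add_jobstep
        @job_name      = N'Check log file sizes',
        @step_name     = N'Run check script',
        @subsystem     = N'TSQL',
        @database_name = N'master',
        @command       = N'EXEC dbo.usp_CheckLogSizes;';  -- hypothetical proc
    EXEC dbo.sp_add_jobschedule
        @job_name = N'Check log file sizes',
        @name     = N'Twice daily',
        @freq_type = 4,                -- daily
        @freq_interval = 1,
        @freq_subday_type = 8,         -- repeat unit: hours
        @freq_subday_interval = 12;    -- every 12 hours
    EXEC dbo.sp_add_jobserver
        @job_name = N'Check log file sizes';
    ```

    The same schedule can also be created interactively through SQL Server Agent in Management Studio.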

    Tuesday, July 14, 2020 3:16 AM
  • I went to the LessThanDot website to see whether I had covered another problem, and ran into what I think was my very first blog post:

    https://blogs.lessthandot.com/index.php/datamgmt/datadesign/truncate-log-in-all-your-user-databases/

    Unfortunately, the site is in maintenance mode, and all the original comments, which were important, are no longer visible.

    Anyway, perhaps that blog may be of some help.



    Tuesday, July 14, 2020 5:50 PM
    Moderator
  • Hi Atul,

    Yes, I thought the same: I should change the recovery model to Bulk-Logged and do frequent log backups.

    Thank you,

    Veena

    Tuesday, July 14, 2020 8:44 PM