best practices for handling this database log file growth issue?

  • Question

  • I have a couple of SQL databases that collect website log files, replicate them to another location, and then flush out the older records every so often, so there are a lot of transactions.  The log files are set to unrestricted size and grow to several times larger than the database files themselves.  Every so often we need to shrink the log files, and when we do they release about 99% of their space.

    The databases are configured for the simple recovery model and they are not backed up (not deemed critical data).  I am not very experienced with SQL and am looking for suggestions/best practices on how to handle the situation.  Should we schedule a nightly job to shrink the log files?  Would scheduling a backup job (though not for the purpose of recovery) commit the logged changes to the MDF files and essentially clear the logs out?  Any other suggestions?
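    A minimal first diagnostic sketch for this kind of growth: DBCC SQLPERF(LOGSPACE) shows how full each log really is, and log_reuse_wait_desc in sys.databases shows what is currently preventing truncation (the database name below is a placeholder; with replication involved, REPLICATION is one value worth watching for).

        -- How large is each transaction log and how much of it is actually in use?
        DBCC SQLPERF(LOGSPACE);

        -- What is currently preventing log truncation for this database?
        -- In SIMPLE recovery, values such as ACTIVE_TRANSACTION or REPLICATION
        -- point at the reason the log keeps growing.
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases
        WHERE name = N'WebLogDb';   -- placeholder database name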

    Tuesday, June 4, 2013 12:10 PM

Answers

  • Hello Wallst,

    add storage to your system. If the log file expands to this size, it means that your workload needs this size because of large transactions. If you shrink the file, you will decrease the performance of your application, because with the next run it will just grow again!


    Uwe Ricken

    MCM SQL Server 2008
    MCSE - SQL Server 2012
    MCSA - SQL Server 2012

    db Berater GmbH
    http://www-db-berater.de
    SQL Server Blog (german only)

    • Proposed as answer by Fanny Liu Thursday, June 6, 2013 7:13 AM
    • Marked as answer by Fanny Liu Monday, June 10, 2013 7:38 AM
    Tuesday, June 4, 2013 1:25 PM
  • Hi

    Uwe has already told you what not to do, and suggested increasing storage.

    But I think there is also a way to slow down the growth of the log file by working on your code. If you process your transactions in small batches, SQL Server can clear the log file faster than you fill it, as sketched below.

    It's like running a smooth party with a lot of guests: you have to invite them in small groups instead of all at once.
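    A minimal sketch of that batching idea for the nightly purge, assuming the old rows are removed with a plain DELETE (the table, column and retention period below are placeholders):

        -- Purge old rows in small batches so each transaction stays short
        -- and, under SIMPLE recovery, the log space can be reused between batches.
        DECLARE @batch int = 5000;   -- rows per transaction, tune as needed
        DECLARE @rows  int = 1;

        WHILE @rows > 0
        BEGIN
            DELETE TOP (@batch)
            FROM dbo.WebLog                                   -- placeholder table
            WHERE LogDate < DATEADD(DAY, -30, GETDATE());     -- placeholder retention

            SET @rows = @@ROWCOUNT;   -- stop once nothing is left to delete
        END;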


    Thanks Saurabh Sinha http://saurabhsinhainblogs.blogspot.in/

    Please click the Mark as answer button and vote as helpful if this reply solves your problem


    • Edited by Saurabh Sinha DBA Tuesday, June 4, 2013 6:44 PM
    • Proposed as answer by Fanny Liu Thursday, June 6, 2013 7:13 AM
    • Marked as answer by Fanny Liu Monday, June 10, 2013 7:38 AM
    Tuesday, June 4, 2013 6:43 PM

All replies

    Because you are using simple recovery, there are obviously no log backups. Changing to full recovery and setting up log backups will probably not help: even with log backups, only committed transactions are cleared from the log when the log backup is taken; uncommitted transactions are held in the log.

    The transaction log is growing because there are no frequent commits and there are long-running transactions. To remedy this, I would look at running your job in smaller batches to control the log growth, and include commits in your transactions.

    The recommended way to handle a transaction log is to pre-size it to a decent size; this avoids repeated autogrowth, which causes VLF fragmentation.
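    A sketch of what pre-sizing could look like (the database name, logical log file name, size and growth increment below are all assumptions; check sys.database_files for the real logical name and pick values that match your workload):

        -- Find the logical name and current size of the data and log files
        SELECT name, type_desc, size * 8 / 1024 AS size_mb
        FROM sys.database_files;

        -- Pre-size the log once to a realistic working size and use a fixed,
        -- moderate growth increment instead of many small autogrowths.
        ALTER DATABASE [WebLogDb]                      -- placeholder database name
        MODIFY FILE (NAME = N'WebLogDb_log',           -- placeholder logical name
                     SIZE = 8GB,
                     FILEGROWTH = 512MB);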

    What is the bulk of the work being done on the database? Inserts, Updates, Deletes?

    Regards

    Tuesday, June 4, 2013 1:08 PM
  • Best practice for log file handling:

    If your application supports the simple recovery model, then set the database to simple recovery mode and make sure that there are no open transactions in the database.
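    As a quick sketch of both steps (the database name is a placeholder):

        -- Switch to the simple recovery model (the poster's databases already use it)
        ALTER DATABASE [WebLogDb] SET RECOVERY SIMPLE;

        -- Check for an open transaction that would keep the log from truncating
        DBCC OPENTRAN (N'WebLogDb');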

    Check my blog for details:

    sqlservr.blog.com/2012/06/26/best-practice-to-shrink-in-logldf-file-in-sql-server/


    Ramesh Babu Vavilla MCTS,MSBI


    Tuesday, June 4, 2013 1:19 PM
  • Hello,

    "We have databases set to SIMPLE mode as well. Our transaction log sizes are kept in check by the daily backups we take. When the daily backups fail several days in a row, the transaction log ends up growing with no free space left in the huge T-log file. This must be happening due to an active transaction being held open. In my case I have found backups to be more effective than regular shrinks.

    But for your case, if you think space for storing backups is an issue, then go ahead with the nightly log shrinks."

    Shrinking is the most adverse solution you can suggest! Shrinking means:

    - growing again with the next workload
    - time-consuming expansion of the log files
    - log files cannot benefit from "instant file initialization"

    So DON'T use shrinking as an option. If the log file has this size, it needs this size. Only adding storage is a practicable solution; every other recommendation is nonsense.

    Concerning your first statement, I do not really understand what benefit you describe. Are you saying that you free log space when you run backups? If your database is running in SIMPLE recovery mode, the log is truncated automatically (at checkpoint) once the transactions using it have committed.

    So if your log file grows to, let's say, 2 GB in SIMPLE recovery mode, it means that you had at least one transaction which consumed most of that log space.

    If your application is running hundreds of thousands of transactions simultaneously, the growth could be a matter of concurrent transactions, because...

    if one transaction occupies the VLF at the end of the log file, the file has to be expanded for the next one (if the other VLFs have not been released because they are in use by transactions, too). But I doubt that this is the root cause of the log file growth! :)
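    If you want to see which open transaction is actually holding that log space, one way (a sketch; run it in the context of the database in question) is to join the transaction DMVs:

        -- Log bytes consumed per open transaction in the current database
        SELECT s.session_id,
               s.login_name,
               t.database_transaction_begin_time,
               t.database_transaction_log_bytes_used
        FROM sys.dm_tran_database_transactions AS t
        JOIN sys.dm_tran_session_transactions  AS st
             ON st.transaction_id = t.transaction_id
        JOIN sys.dm_exec_sessions AS s
             ON s.session_id = st.session_id
        WHERE t.database_id = DB_ID()
        ORDER BY t.database_transaction_log_bytes_used DESC;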


    Uwe Ricken

    MCM SQL Server 2008
    MCSE - SQL Server 2012
    MCSA - SQL Server 2012

    db Berater GmbH
    http://www-db-berater.de
    SQL Server Blog (german only)

    Tuesday, June 4, 2013 1:24 PM
  • Shrinking is the most adverse solution you can suggest! Shrinking means:

    - growing again with the next workload
    - time-consuming expansion of the log files
    - log files cannot benefit from "instant file initialization"

    So DON'T use shrinking as an option. If the log file has this size, it needs this size. Only adding storage is a practicable solution; every other recommendation is nonsense.


    Absolutely have to agree with this....
    Tuesday, June 4, 2013 1:29 PM