How to minimize logging in SQL Server

    Question

  • Hi friends,

      I am looking to minimize the logging in SQL Server 2012. My process is as follows: I have many base tables, and once the data has been processed we move all of it into the respective history tables. The problem is that deleting huge amounts of data, multiple times and from multiple base tables, fills up the transaction log. Can we minimize the logging?

    Thanks,


    Regards Vikas Pathak


    Saturday, September 07, 2013 6:00 AM

Answers

  • >> Deleting huge amounts of data, multiple times and from multiple base tables, fills up the transaction log. Can we minimize the logging?

    Hello Vikas,

    I would suggest deleting the records batch-wise; please don't delete all records in one go.

    You can set your recovery model to SIMPLE and perform the delete operation, but logging will still happen; in this case SQL Server truncates the log automatically (a checkpoint fires once the log is about 70% full), but you lose point-in-time recovery. That means if your database crashes at the moment you are doing an insert, you cannot recover using the transaction logs; only a full database restore will be possible.

    You can keep the recovery model FULL and delete, say, 5,000 records in one go. After that, take a transaction log backup so the log is truncated; if needed, you can also run DBCC SHRINKFILE on the log file to shrink it.

    It depends on your requirements; you can use either FULL or SIMPLE, but with SIMPLE there are the consequences above.

    Also, just to make the point clear: logging happens under the SIMPLE recovery model as well, but the database engine makes sure the log does not grow beyond a point by truncating it.

    You can take a script for batch deletes from the link below:

    http://stackoverflow.com/questions/896810/sql-batched-delete
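
    For illustration, here is a minimal sketch of the batch-delete-plus-log-backup pattern described above, for the FULL recovery model. The database name, table name, filter column, and backup path are all placeholders, so adjust them to your environment:

    -- Delete a small batch, back up the log so the inactive portion can be
    -- truncated, and repeat until no matching rows remain.
    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000) FROM dbo.BaseTable
        WHERE Processed = 1;        -- hypothetical "already archived" filter

        SET @rows = @@ROWCOUNT;     -- 0 when no matching rows remain

        -- The log backup lets SQL Server truncate the inactive log records.
        BACKUP LOG MyDatabase TO DISK = N'D:\Backup\MyDatabase.trn';
    END

    -- Optional: shrink the physical log file afterwards (use sparingly).
    -- DBCC SHRINKFILE (MyDatabase_log, 1024);  -- target size in MB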


    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers


    Saturday, September 07, 2013 7:05 AM

All replies

  • Yes, you can go to Database Properties > Options > and set the Recovery Model to Simple.

    For more info, see this link:

    View or Change the Recovery Model of a Database
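
    The same change can also be made in T-SQL; the database name below is a placeholder:

    -- Switch the database to the SIMPLE recovery model.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

    -- Verify the current recovery model.
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'MyDatabase';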


    The most important motivation for the research work that resulted in the relational model was the objective of providing a sharp and clear boundary between the logical and physical aspects of database management. - E. F. Codd


    My blog

    Saturday, September 07, 2013 6:15 AM
  • You can think of changing the recovery model to SIMPLE or BULK_LOGGED.

    But changing the recovery model to SIMPLE alone won't work if you are deleting a huge chunk of records within one transaction. You may have to delete in batches so that the log can be recycled.

    Moving to either of these recovery models will affect your recoverability after a crash.


    Satheesh

    Saturday, September 07, 2013 7:12 AM
  • >> Deleting huge amounts of data, multiple times and from multiple base tables, fills up the transaction log. Can we minimize the logging?

    DECLARE @x INT;
    SET @x = 1;

    WHILE @x < 44000000  -- set to the table's maximum ID, as appropriate
    BEGIN
        -- Delete one contiguous range of 10,000 IDs per iteration.
        DELETE FROM tbl WHERE ID BETWEEN @x AND @x + 9999;

        SET @x = @x + 10000;
    END

    PS: Make sure the ID column has a clustered index defined on it.
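
    For reference, such an index could be created like this (the table and index names are illustrative, not from the thread):

    -- A unique clustered index on ID turns each ranged DELETE into a cheap seek.
    CREATE UNIQUE CLUSTERED INDEX ix_tbl_ID ON dbo.tbl (ID);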


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Blog: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance

    Saturday, September 07, 2013 11:19 AM
  • Hi,

    TRUNCATE TABLE is a minimally logged operation.

    Try this link and make your decision accordingly: http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx
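
    For example (hypothetical table name; note that TRUNCATE takes no WHERE clause and requires ALTER permission on the table):

    -- Removes all rows by deallocating the data pages; only the page
    -- deallocations are logged, so the impact on the log is minimal.
    TRUNCATE TABLE dbo.BaseTable;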


    sathya - www.allaboutmssql.com ** Mark as answered if my post solved your problem and Vote as helpful if my post was useful **.

    Saturday, September 07, 2013 5:12 PM
  • Thank you very much, it's really helpful.

    Regards Vikas Pathak

    Saturday, September 07, 2013 5:30 PM
  • Hello Satheesh,
    hello SathyanarrayananS,

    "You can think of changing the recovery model to SIMPLE or BULK_LOGGED."

    Setting the database recovery model to BULK_LOGGED will NOT decrease the amount of log space, because a DELETE operation does not support "minimal logging". For a list of all operations that can be minimally logged, see the following article:

    http://msdn.microsoft.com/en-us/library/ms190925.aspx#MinimallyLogged

    "You may have to delete in batches so that the log can be recycled."

    This is a myth as well: the log is not released the moment the transaction is done. A separate CHECKPOINT is needed to write the dirty pages to disk, and only after the CHECKPOINT can the log space be reused.

    But... to run CHECKPOINT manually, the user needs to be a member of the sysadmin server role (or the db_owner or db_backupoperator database roles) :(

    http://technet.microsoft.com/en-us/library/ms188748.aspx

    Deleting in batches will not, by itself, have a positive effect on the log. You can check it out with a simple demo:

    USE master;
    GO
    CREATE DATABASE demo_db
    ON 
    (
    	NAME = demo_dat,
    	FILENAME='S:\MSSQL11.MSSQLSERVER\MSSQL\DATA\demo_data.mdf',
    	FILEGROWTH = 1MB
    )
    LOG ON
    (
    	NAME = demo_log,
    	FILENAME='S:\MSSQL11.MSSQLSERVER\MSSQL\DATA\demo_data.ldf',
    	SIZE = 5MB,
    	MAXSIZE = 50MB,
    	FILEGROWTH = 5MB
    );
    GO
    ALTER DATABASE demo_db SET RECOVERY SIMPLE;
    -- Now we create a simple table with app. 10 MB of size
    USE demo_db;
    GO
    IF OBJECT_ID('dbo.table_a', 'U') IS NOT NULL
    	DROP TABLE dbo.table_a;
    GO
    CREATE TABLE dbo.table_a
    (
    	Id	int	NOT NULL	IDENTITY (1, 1),
    	col1	char(497)	NOT NULL	DEFAULT ('just stupid stuff')
    );
    GO
    SET NOCOUNT ON;
    GO
    INSERT INTO dbo.table_a DEFAULT VALUES
    GO 15000
    CREATE UNIQUE CLUSTERED INDEX ix_table_a_Id ON dbo.table_a (Id);
    GO
    CHECKPOINT;
    GO

    After the database and table_a have been created, we run a delete operation with a limited number of records per "batch"...

    SET ROWCOUNT 1000;  -- limit each DELETE to 1,000 rows
    GO
    DECLARE @rc INT = 1;
    WHILE @rc != 0
    BEGIN
    	DELETE FROM dbo.table_a;
    	SET @rc = @@ROWCOUNT;    -- capture before DBCC LOGINFO resets @@ROWCOUNT
    	DBCC LOGINFO('demo_db'); -- show the VLF status after each batch
    END
    SET ROWCOUNT 0;

    As you can see from the code, after each DELETE operation we check the state of the log file; you will get results like the following (shortened here for better readability)...

    As the DBCC LOGINFO output shows, new VLFs are allocated, but the previously allocated ones are not released. The release occurs only with a separate CHECKPOINT or with an automatic CHECKPOINT initiated by SQL Server itself!
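
    To see the difference, the loop can issue an explicit CHECKPOINT after each batch (subject to the permissions mentioned above). The following variant is a sketch based on the demo, not part of the original test:

    SET ROWCOUNT 1000;
    GO
    DECLARE @rc INT = 1;
    WHILE @rc != 0
    BEGIN
    	DELETE FROM dbo.table_a;
    	SET @rc = @@ROWCOUNT;
    	CHECKPOINT;              -- dirty pages are written; inactive VLFs become reusable
    	DBCC LOGINFO('demo_db'); -- VLFs with Status = 0 can now be reused
    END
    SET ROWCOUNT 0;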


    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

    Sunday, September 08, 2013 7:10 AM
  • Yes it is, but what if you need a filter on the table?

    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Blog: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance

    Sunday, September 08, 2013 7:27 AM
  • Hello SathyanarrayananS,

    "TRUNCATE TABLE is a minimally logged operation."

    Yes, that's true, but TRUNCATE has too many side effects, both for the user AND for the relation itself:

    • The user needs DDL permission (ALTER) on the object
    • TRUNCATE cannot delete a filtered subset of the data (no WHERE clause)
    • TRUNCATE will rebuild the table and allocate new data pages

    I assume the first point alone will be a NO-GO for standard users...

    <sarcasm>although - I assume 99% of all applications run with all users in the db_owner group or with sysadmin privileges :)</sarcasm>


    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

    Sunday, September 08, 2013 7:33 AM
  • To minimize the logging, there are different solutions:

    1. If your tables are partitioned, create a staging table, switch the partition out, and drop the staging table; this is a metadata-only operation, so almost nothing is logged (see the sketch after this list).
    2. Truncate the table; this operation is minimally logged, but filtering is not possible.
    3. Delete by blocks; in SQL 2012 it is possible to delete with TOP, and a filter clause is allowed:
    DECLARE @BlockSize INT;
    SET @BlockSize = 1000;
    -- Repeat until no more rows match the filter.
    DELETE TOP (@BlockSize) FROM [AdventureWorks2012].[dbo].[DatabaseLog]
    WHERE ...
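
    As mentioned in point 1, a partition switch moves the data out as a metadata-only operation. Here is a minimal self-contained sketch; all object names are invented for the demo, not taken from this thread:

    -- Partition function/scheme: partition 1 holds all rows with Id <= 1000.
    CREATE PARTITION FUNCTION pf_demo (int) AS RANGE LEFT FOR VALUES (1000);
    CREATE PARTITION SCHEME ps_demo AS PARTITION pf_demo ALL TO ([PRIMARY]);
    GO
    -- Base table on the partition scheme; the staging table must have an
    -- identical structure and sit on the same filegroup as the partition.
    CREATE TABLE dbo.base_tbl  (Id int NOT NULL, col1 char(100) NULL) ON ps_demo (Id);
    CREATE TABLE dbo.stage_tbl (Id int NOT NULL, col1 char(100) NULL) ON [PRIMARY];
    GO
    INSERT INTO dbo.base_tbl VALUES (1, 'a'), (500, 'b'), (2000, 'c');
    GO
    -- Metadata-only: all rows with Id <= 1000 move to the staging table.
    ALTER TABLE dbo.base_tbl SWITCH PARTITION 1 TO dbo.stage_tbl;
    GO
    -- Dropping the staging table discards the old rows with minimal logging.
    DROP TABLE dbo.stage_tbl;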

    Sorry for my English, I am French.

    • Proposed as answer by Grégory_Nail Sunday, September 08, 2013 7:48 AM
    Sunday, September 08, 2013 7:47 AM