Shrinking a 4 TB DB with 2.8 TB free space

    Question

  • Was cleaning up some old tables. Dropped a single table that was 1.5 TB and truncated maybe 15 tables, which resulted in another 1.3 TB of free space. The data file will not grow back to its 4 TB size, and the network person wants the space freed up. Do I dare shrink the data file? Do I have any other options? 

    The database is in the SIMPLE recovery model.

    • Edited by TheBrenda Wednesday, April 17, 2019 6:01 PM
    Wednesday, April 17, 2019 5:58 PM

Answers

  • 2900 GB may take more than 10 minutes, but it does not have to be nine hours. I would expect that when you are doing things in chunks, you are redoing work.

    If you don't want the shrink to interfere with the nightly ETL, just kill it when the ETL is about to start - shrink is perfectly interruptible as it works with small transactions.

    And as Phil points out - don't shrink too much. You need free space to be able to rebuild the index of your largest table.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    • Marked as answer by TheBrenda Friday, April 19, 2019 3:02 PM
    Thursday, April 18, 2019 9:52 PM
  • >>2900 gbs?

    Does this mean you're going to try to shrink out ALL of the allocated but unused space? 

    If I were you, I wouldn't take out more than 1 or 2 TB. That leaves enough 'headroom' (unused space) in the database to allow database maintenance (index rebuilds, integrity checks, etc.) to occur without causing regrowth events.

    HTH,


    Phil Streiff, MCDBA, MCITP, MCSA

    • Marked as answer by TheBrenda Friday, April 19, 2019 3:02 PM
    Thursday, April 18, 2019 3:32 PM

All replies

  • You can shrink the data file, but since that operation fragments indexes, you will need to reorg/rebuild indexes afterwards, to avoid query performance degradation.

    If possible, it's best to do this during a low activity period, in order to avoid interfering with normal database processing during business hours. Shrinking a database that large may take a while.

    Also, don't try to shrink out ALL of the allocated but unused space to reclaim the full 2.8 TB of free space. You should keep at least 20-30% free space to avoid future auto-growth operations, as well as to allow enough room for index maintenance without regrowing the files.

    >Do I have any other options?
    If your goal is to reclaim free disk space and the only thing taking up space is data files, that's about the only option I can think of.
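
    If it helps, here's a rough sketch (my own, not from this thread) of how to check how much of each data file is actually in use before picking a shrink target, so you can leave that 20-30% headroom:

    -- Check allocated vs. used space per data file in the current database.
    SELECT name,
           size / 128.0                                     AS size_mb,   -- size is in 8 KB pages
           FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sys.database_files
    WHERE type_desc = 'ROWS';                               -- data files only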

    HTH,


    Phil Streiff, MCDBA, MCITP, MCSA

    • Edited by philfactor Wednesday, April 17, 2019 7:27 PM
    Wednesday, April 17, 2019 6:32 PM
  • I saw a script where they were shrinking the file down 1 GB at a time. How is this beneficial?

    DECLARE @from int = 4000000;  -- current size: 4 TB (DBCC SHRINKFILE sizes are in MB)
    DECLARE @leap int = 1000;     -- shrink by 1 GB (1000 MB) per pass
    DECLARE @to int = 1500000;    -- target size: 1.5 TB
    DECLARE @datafile varchar(128) = 'Store01_Data';
    DECLARE @cmd varchar(512);
    
    /*
    https://medium.com/@anna.f/shrink-oversized-data-files-in-microsoft-sql-server-53fb640f893e 
    */
    
    PRINT '--- SATS SHRINK SCRIPT START ---';
    WHILE @from - @leap > @to
        BEGIN
            SET @from = @from - @leap;
            SET @cmd = 'DBCC SHRINKFILE (' + @datafile + ', ' + CAST(@from AS varchar(20)) + ')';
            PRINT @cmd;
            EXEC (@cmd);
            PRINT '==>    SATS SHRINK SCRIPT - ' + CAST(@from - @to AS varchar(20)) + ' MB LEFT';
        END;
    SET @cmd = 'DBCC SHRINKFILE (' + @datafile + ', ' + CAST(@to AS varchar(20)) + ')';
    PRINT @cmd;
    EXEC (@cmd);
    PRINT '--- SATS SHRINK SCRIPT COMPLETE ---';
    GO

    Wednesday, April 17, 2019 7:37 PM
  • According to the article, it breaks the shrink operation up into smaller chunks, thereby reducing the amount of time to complete.

    I haven't tested this, but it makes sense. Shrinking large databases can take a while so this could be just the ticket for saving time.

    HTH,


    Phil Streiff, MCDBA, MCITP, MCSA

    • Edited by philfactor Wednesday, April 17, 2019 7:51 PM
    Wednesday, April 17, 2019 7:51 PM
  • What about the TRUNCATEONLY? Would that help in reducing fragmentation?
    Wednesday, April 17, 2019 9:45 PM
  • I haven't tested this, but it makes sense. Shrinking large databases can take a while so this could be just the ticket for saving time.

    Or the opposite. Shrinking 1 GB at a time does not seem like a bright idea to me. Maybe 100 GB at a time. Then again, since shrink works with small transactions, I would probably go for the big shrink directly.

    Beware that if there are LOB columns involved, shrink can be very slow.

    The alternative would be to copy out all data on disk, build an empty database from scripts and copy back in. Quite time-consuming and the database is offline during the operation.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Wednesday, April 17, 2019 10:03 PM
  • What about the TRUNCATEONLY? Would that help in reducing fragmentation?

    No matter which DBCC SHRINKFILE option you choose, it doesn't help you reduce fragmentation. In my opinion, you'd better not shrink the data file 1 GB at a time over and over until you reach the target size; you should shrink the data file to the target size in one operation instead.

    This is also mentioned in the best-practices section at https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-2017#best-practices: "A shrink operation doesn't preserve the fragmentation state of indexes in the database, and generally increases fragmentation to a degree. This fragmentation is another reason not to repeatedly shrink the database."
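
    For illustration only (this is just a sketch, and the 30% threshold is only a common rule of thumb), after the shrink you could check which indexes are fragmented and then rebuild or reorganize them:

    -- List fragmented indexes in the current database; LIMITED mode keeps the scan cheap.
    SELECT OBJECT_NAME(ips.object_id)       AS table_name,
           i.name                           AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30   -- rule-of-thumb threshold
      AND ips.page_count > 1000                   -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Then, per index (names below are placeholders):
    -- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;     -- heavier fragmentation
    -- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;  -- lighter fragmentation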

    Best Regards,

    Will


    MSDN Community Support

    Thursday, April 18, 2019 5:43 AM
    Moderator
  • >>>Do I have any other options? 

    Yes: not to shrink at all. As Phil pointed out, shrinking will cause heavy fragmentation and you will need to rebuild/reorganize the indexes, but that will grow the file again... As the data changes, your database will grow again anyway, so what is the point? Moreover, your backup files will already be smaller after truncating those objects, since a backup does not include empty data pages.


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/


    Thursday, April 18, 2019 5:55 AM
    Answerer
  • Uri - I do not understand what is meant by "your backup files will already be smaller after truncating those objects, since a backup does not include empty data pages."

    Thursday, April 18, 2019 10:46 AM
  • After a massive deletion of data you want to reclaim the empty pages from the extents, and DBCC SHRINKFILE does that, but at the expense of causing horrible index fragmentation; thus I recommend avoiding shrinking. 

    Now to your question: what I meant is that a backup not only excludes the empty space at the end of the file, it also excludes unallocated extents spread throughout the file, and thus the backup size will already be reduced.


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/


    Thursday, April 18, 2019 10:55 AM
    Answerer
  • Uri - so are you suggesting backing up the database and then restoring it? I am not sure that there is enough space, seeing how the network people are yelling about the 4 TB. 

    Also, from what I am reading, TRUNCATEONLY does not cause fragmentation. Would that be an option? 

    https://www.mssqltips.com/sqlservertip/4368/execute-sql-server-dbcc-shrinkfile-without-causing-index-fragmentation/

    Thursday, April 18, 2019 11:05 AM
  • No, RESTORE creates the database at its original size; I was talking about the backup itself.

    The TRUNCATEONLY option only reclaims space at the end of the file...

    BTW, storage is not as expensive as it was in the past; no problem having 4 TB.
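
    For example, reusing the Store01_Data logical file name from the script above (it only releases unused space from the end of the file, so it may free very little):

    -- No pages are moved, so it does not fragment indexes - but it may not free much either.
    DBCC SHRINKFILE (N'Store01_Data', TRUNCATEONLY);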


    Best Regards, Uri Dimant, SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/


    Thursday, April 18, 2019 11:11 AM
    Answerer
  • I would say that with the scenario Brenda described, she has a legitimate reason to shrink. But of course she needs to plan not only for the shrink but also for the fragmentation. And if there are LOBs involved, it goes up to the next level.

    Yes, disk space is cheap these days, at least if you are talking about your machine at home. In a corporate environment, things are not as logical as they should be.

    And a database is often not just one database, but several: production, QA, test and maybe also a few dev environments.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    Thursday, April 18, 2019 11:28 AM
  • >the network people are yelling about the 4 TB.

    Why are network people yelling about storage? Anyway, tell them you're just the DBA, not the owner of the data. If they want to complain about the size of the database, tell them to talk to the people who own the data (the business). The database is big because there is lots of data in it.

    If the storage has been allocated, that's what it's there for: to be used.

    Out of curiosity, after all this discussion, have you even tried to shrink the database by just a little bit yet? Try shrinking it by just 100 GB, for example, just to see what happens. You may be able to answer a lot of your own questions.
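
    Something along these lines, for example (a rough sketch, reusing the Store01_Data logical file name from the script you posted; adjust to your actual file name):

    -- Find the current size of the data file and shrink it by roughly 100 GB.
    DECLARE @current_mb int, @target_mb int, @cmd varchar(200);

    SELECT @current_mb = size / 128          -- size is stored in 8 KB pages; /128 gives MB
    FROM sys.database_files
    WHERE name = 'Store01_Data';

    SET @target_mb = @current_mb - 102400;   -- take out roughly 100 GB (102,400 MB)

    SET @cmd = 'DBCC SHRINKFILE (Store01_Data, ' + CAST(@target_mb AS varchar(20)) + ')';
    PRINT @cmd;
    EXEC (@cmd);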

    HTH,


    Phil Streiff, MCDBA, MCITP, MCSA

    • Edited by philfactor Thursday, April 18, 2019 11:40 AM
    Thursday, April 18, 2019 11:29 AM
  • There is no LOB data. 

    Three times I shrunk 50 GB and it took about 10 minutes each time. It is a client site so I do not have unlimited access. Part of the reason I like breaking it up into chunks is that, at 10 minutes per 50 GB, it would take 9 hours to clear 2900 GB. My plan is to create a SQL Agent job that shrinks in chunks within a WHILE loop. Within the WHILE loop it would check how long the job has been running and, if that exceeds a set number of minutes, return. I don't want the shrink to interfere with the nightly ETL. This way it would run for a set amount of time and then quit, then pick up again the next day, until it was done. 
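
    Roughly something like this (just a sketch, not tested yet; it reuses the Store01_Data file name and the 1.5 TB target from the script above):

    -- Shrink in 50 GB chunks, but stop once a time budget is used up so the nightly ETL is not affected.
    DECLARE @deadline datetime = DATEADD(MINUTE, 120, GETDATE());  -- run for at most 2 hours
    DECLARE @target_mb int = 1500000;                              -- final goal: ~1.5 TB
    DECLARE @chunk_mb int = 51200;                                 -- 50 GB per iteration
    DECLARE @current_mb int, @next_mb int, @cmd varchar(200);

    WHILE GETDATE() < @deadline
    BEGIN
        SELECT @current_mb = size / 128                            -- size is in 8 KB pages
        FROM sys.database_files
        WHERE name = 'Store01_Data';

        IF @current_mb <= @target_mb BREAK;                        -- already at the target

        SET @next_mb = CASE WHEN @current_mb - @chunk_mb < @target_mb
                            THEN @target_mb
                            ELSE @current_mb - @chunk_mb END;

        SET @cmd = 'DBCC SHRINKFILE (Store01_Data, ' + CAST(@next_mb AS varchar(20)) + ')';
        PRINT @cmd;
        EXEC (@cmd);                                               -- a chunk already running finishes before the next time check
    END;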

    I checked the fragmentation before I shrunk and it looked pretty bad. Guessing that they have not done a reorg/rebuild - ever.

    Thursday, April 18, 2019 2:45 PM
  • >>2900 gbs?

    Does this mean you're going to try to shrink out ALL of the allocated but unused space? 

    If I were you, I wouldn't take out more than 1 or 2 TB. That leaves enough 'headroom' (unused space) in the database to allow database maintenance (index rebuilds, integrity checks, etc.) to occur without causing regrowth events.

    HTH,


    Phil Streiff, MCDBA, MCITP, MCSA

    • Marked as answer by TheBrenda Friday, April 19, 2019 3:02 PM
    Thursday, April 18, 2019 3:32 PM
  • 2900 GB may take more than 10 minutes, but it does not have to be nine hours. I would expect that when you are doing things in chunks, you are redoing work.

    If you don't want the shrink to interfere with the nightly ETL, just kill it when the ETL is about to start - shrink is perfectly interruptible as it works with small transactions.

    And as Phil points out - don't shrink too much. You need free space to be able to rebuild the index of your largest table.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

    • Marked as answer by TheBrenda Friday, April 19, 2019 3:02 PM
    Thursday, April 18, 2019 9:52 PM
  • If it's not urgent, the other thing you can try is to restore the database onto a non-prod box and try shrinking there. That will give you a rough estimate of how long it might take in prod (it can vary due to differences in hardware, concurrent workloads, etc.).

    Shrinking is a resource-intensive operation and you will have to watch out for a few things: backups can get interrupted, blocking (as shrink acquires exclusive locks on pages when moving data), fragmentation, etc. 
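
    For example, something along these lines (paths, the backup file name, and the log file's logical name 'Store01_Log' are just placeholders; note that the restored copy comes back at the full 4 TB size, so the test box needs that much room):

    -- Restore a copy on a non-prod box and time the shrink there.
    RESTORE DATABASE Store01_Test
    FROM DISK = N'\\backupserver\backups\Store01_full.bak'
    WITH MOVE 'Store01_Data' TO N'E:\SQLData\Store01_Test.mdf',
         MOVE 'Store01_Log'  TO N'F:\SQLLog\Store01_Test.ldf',
         STATS = 10;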



    Thursday, April 18, 2019 11:20 PM