Meaning of "Index Fragmentation" returned for HEAP tables?

    Question

  • I use this query to analyze index fragmentation. I applied it to an old DB and it returned several tables meeting the criteria, but with IndexType = HEAP. Since a HEAP has no index, what, in this case, is the meaning of the index fragmentation returned by the query?

    >>>>>>>

    SELECT OBJECT_NAME(ind.OBJECT_ID) AS TableName,
    ind.name AS IndexName, indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
    INNER JOIN sys.indexes ind 
    ON ind.object_id = indexstats.object_id
    AND ind.index_id = indexstats.index_id
    WHERE indexstats.avg_fragmentation_in_percent > 30
    ORDER BY indexstats.avg_fragmentation_in_percent DESC

    >>>>>>>

    TIA,

    edm2

    Saturday, December 21, 2013 9:25 AM

Answers

  • Hallo edm,

    HEAPS appear in sys.indexes because of the structure of the storage engine: all storage information - whether for heaps, clustered indexes, or nonclustered indexes - is recorded in sys.indexes (a heap has index_id = 0).

    Your query is not the right one for measuring "fragmentation" of heaps. You can measure it, BUT you should first ask yourself why you want to do so.

    By definition a heap is a bunch of unordered records, and by design Microsoft SQL Server uses a completely different way to search for free space on a page than it does in a clustered index. I've written a WIKI article about that:

    SQL Server: How does SQL Server allocate space in a heap?

    Dropping a clustered index will not reorganize the heap

    Here is a quick example of the attributes in sys.dm_db_index_physical_stats that are relevant for heaps:

    CREATE TABLE dbo.tbl_heap
    (
    	id		uniqueidentifier	not null DEFAULT (NEWID()),
    	c1		char(4000)			not null DEFAULT ('just filling stuff')
    );
    GO
    
    SET NOCOUNT ON
    GO
    
    INSERT INTO dbo.tbl_heap DEFAULT VALUES
    GO 100
    
    SELECT	fragment_count,
    		page_count,
    		avg_page_space_used_in_percent,
    		forwarded_record_count
    FROM	sys.dm_db_index_physical_stats(db_id(), object_id('dbo.tbl_heap', 'U'), 0, DEFAULT, 'DETAILED');

    The above code creates a heap and inserts 100 records. After the insert, the information about fragmentation is queried. As you can see from the example, only 4 attributes are relevant for rating a table as fragmented:

    fragment_count tells you the number of fragments. BUT: when a table is created and data is inserted for the first time, you get at least 1 fragment because of the internal handling. The first allocated page is a data page, and only AFTERWARDS is the IAM page allocated. When you insert data into a new table, the first 8 pages are placed in mixed extents; as the table grows, Microsoft SQL Server switches to uniform extents (8 pages per extent). Explaining this fully would burst this post, but it is the reason why every table has fragments after it is filled for the first time.

    page_count tells you the number of pages at the given level (a heap has only 1 level because it has no b-tree structure!).

    The most important value is avg_page_space_used_in_percent. It gives you the density of data on a page, and it is the one and only measure to check for a heap! If a record no longer fits on its page (after an UPDATE), the record is moved to another page with enough free space, leaving a forwarding pointer behind; this is recorded in forwarded_record_count. This counter can only rise in heaps - indexes have page splits instead!
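    If you want to see forwarding in action, here is a small self-contained sketch (the table dbo.tbl_forward and its sizes are hypothetical, separate from the demo table): short rows fill the pages, then every other row is enlarged so it no longer fits and gets forwarded.

    ```sql
    -- Hypothetical demo table: an id plus a variable-length column we can grow.
    CREATE TABLE dbo.tbl_forward (id int NOT NULL, c1 varchar(8000) NOT NULL);
    GO

    -- Insert 100 short rows; several fit on each page, so the pages fill up.
    INSERT INTO dbo.tbl_forward (id, c1)
    SELECT x.n, REPLICATE('a', 2000)
    FROM (SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
          FROM sys.objects a CROSS JOIN sys.objects b) AS x(n);
    GO

    -- Grow every other row; rows that no longer fit on their page are moved
    -- to another page, leaving a forwarding pointer behind (no page splits).
    UPDATE dbo.tbl_forward SET c1 = REPLICATE('a', 4000) WHERE id % 2 = 0;
    GO

    -- forwarded_record_count should now be greater than 0.
    SELECT forwarded_record_count, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.tbl_forward', 'U'), 0, DEFAULT, 'DETAILED');
    ```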

    I hope it's not too complicated, but it is essential for the next part of the demo. After the data had been inserted, the first query against the physical structure gave me the following result (yours may vary!):

    The result shows 50 pages and 5 fragments in my heap. Most important: all my pages are filled to over 90% - GREAT. The next query demonstrates why the table is nevertheless "fragmented":

    SELECT	%%lockres%%, * FROM dbo.tbl_heap;


    As you can see from the picture above, there are gaps between the allocated pages. The first data is located on page 464 and - instead of 465 - the next page is 3546. This counts as 1 fragment. Scroll down your own results and count the number of gaps if you replay the demo.

    Now I delete half of the data from the table to reduce the page density (unfortunately I can only post 2 pics in a post, so you have to check the upcoming results yourself):

    ;WITH cte
    AS
    (
    	SELECT	ROW_NUMBER() OVER (ORDER BY id) AS rownum,
    			*
    	FROM	dbo.tbl_heap
    )
    DELETE	cte
    WHERE	rownum % 2 = 0;

    When you delete half of the data and rerun the query for the physical stats, you get approximately 50% for avg_page_space_used_in_percent. This is the real indicator of "fragmentation" in a heap.

    Instead of building a clustered index and dropping it again (really bad advice! - see Paul Randal), just rebuild the table with the following command:

    ALTER TABLE dbo.tbl_heap REBUILD;

    This rebuilds the table and the fragments are gone. But - as Paul mentions in his article - don't think in terms of "defragmenting heaps"; think about a sensible clustered index instead.


    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)



    Saturday, December 21, 2013 11:24 AM

All replies

  • A heap is a table with no clustered index. A table with only non-clustered indexes is also a heap. Technically, you cannot defragment a heap table. One way to achieve the result is to add a clustered index and then remove it. This orders the data pages on the field(s) you choose, and that order remains after the clustered index is dropped. Obviously the heap will continue to add unordered data thereafter, though.
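
    A sketch of the approach described above (note that later replies in this thread call it bad advice and recommend ALTER TABLE ... REBUILD instead; dbo.SomeHeap and SomeKey are hypothetical names):

    ```sql
    -- Build a clustered index to physically order the data pages ...
    CREATE CLUSTERED INDEX cix_tmp ON dbo.SomeHeap (SomeKey);
    GO
    -- ... then drop it again; the table reverts to a heap, but the pages
    -- keep the order they had at the moment the index was dropped.
    DROP INDEX cix_tmp ON dbo.SomeHeap;
    GO
    ```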

    Balmukund Lakhani | Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    --------------------------------------------------------------------------------
    This posting is provided "AS IS" with no warranties, and confers no rights.
    --------------------------------------------------------------------------------
    My Blog | Team Blog | @Twitter
    Author: SQL Server 2012 AlwaysOn - Paperback, Kindle

    Saturday, December 21, 2013 10:28 AM
  • Balmukund has written, a heap cannot be defragmented - this is not true. You can do that BUT you

    Hi Uwe,
    Apologies for that. It completely went out of my mind.. ALTER TABLE .. REBUILD.

    Still learning :)


    Balmukund Lakhani | Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    --------------------------------------------------------------------------------
    This posting is provided "AS IS" with no warranties, and confers no rights.
    --------------------------------------------------------------------------------
    My Blog | Team Blog | @Twitter
    Author: SQL Server 2012 AlwaysOn - Paperback, Kindle

    Saturday, December 21, 2013 11:33 AM
  • Apologies for that. It completely went out of my mind.. ALTER TABLE .. REBUILD.

    Still learning :)

    Dear Balmukund,

    don't worry - that's what I'm doing every day, too! :)

    Wish you a merry christmas time and a happy new year!


    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

    Saturday, December 21, 2013 11:48 AM
  • Dear Balmukund,

    don't worry - that's what I'm doing every day, too! :)

    Wish you a merry christmas time and a happy new year!


    I was looking for a "like" button but realized that it's a forum, not Facebook. Merry Christmas to you too.


    Balmukund Lakhani | Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    --------------------------------------------------------------------------------
    This posting is provided "AS IS" with no warranties, and confers no rights.
    --------------------------------------------------------------------------------
    My Blog | Team Blog | @Twitter
    Author: SQL Server 2012 AlwaysOn - Paperback, Kindle

    Saturday, December 21, 2013 12:05 PM
  • I was looking for a "like" button but realized that its forum not Facebook. Merry Christmas to you too.

    Merry Christmas to both Balmukund and Uwe. Anyway, as always, Uwe, your articles are great, superb. And Balmukund, I have read your articles too; they are pretty awesome. I'm going to get a copy of the book you wrote on AlwaysOn. Just wrote to tell you I am a fan of you both.

    Thanks for all learning you provide


    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

    Saturday, December 21, 2013 3:10 PM
  • It can be, but rebuilding the heap may lead to very bad performance (as it removes forward pointers). The only proper way to "rebuild" a heap is to create a clustered index.

    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Sunday, December 22, 2013 8:02 AM
  • Uri,

    So I take it you don't recommend using

          ALTER TABLE dbo.tbl_heap REBUILD;

    but instead create, then drop, a clustered index on the HEAP. Right?  (Sorry, my knowledge of SQL internals is too weak to understand your comment "as it removes forward pointers".)

    edm2

    Sunday, December 29, 2013 4:17 PM
  • Uwe,

    Thank you for your excellent explanation of sql internals. . I can't count the number of articles I have read, some by real authorities on the subject, that explain the internals but leave me a bit confused or with questions (some of the topics seem hard to grasp or a bit theoretical).  Your use of examples is excellent and has helped me bridge the gap.

    edm2

    Sunday, December 29, 2013 4:20 PM
  • So I take it you don't recommend using

          ALTER TABLE dbo.tbl_heap REBUILD;

    but instead create, then drop, a clustered index on the HEAP. Right? 

    I think the issue is that it's hard to imagine using a heap that needs to be cared for at all. Typically heaps are used only for very small tables (<< 100 rows), where the fragmentation doesn't matter, or for staging tables that are truncated, or dropped and rebuilt, anyway.

    Josh

    Sunday, December 29, 2013 4:28 PM
  • I would also be interested to see a reference for this statement. There are two reasons why you would want to defragment a heap:

    1) Get rid of the forwarding pointers, as they make reads more expensive.
    2) Get rid of holes due to deletions.

    You can achieve this with ALTER TABLE REBUILD or by adding a clustered index and then dropping it. The disadvantage of the latter method is that the non-clustered indexes on the table need to be rebuilt twice.

    As for Josh's statement that heaps typically are small or are staging tables, that may have some truth in it. However, if for whatever reason we have a big transactional table that is a heap, the concern about how to defragment it is valid.

    ...although more than one knowledgeable person would say "add a clustered index" - and stop there. That may be taking things too far, but if you are into heaps, you had better know what you are doing.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Sunday, December 29, 2013 5:19 PM
  • Erland,

    Regarding

    >>> but if you are into heaps, you better know what you are doing.

    I didn't go into the details, but I am charged with finding the source of deadlocks in a SQL 2005 server DB. The database contains about 400 tables, of which 50% are HEAPS and the rest are not. (This DB is the production backend of an important app, so we need to be careful with it.) The DB began life as an internal "DataFlex" (3rd party product) DB which was later converted to a SQL 2005 DB. The client app, written in DataFlex, communicates with the SQL DB not directly, but by using their "SQL Connectivity Kit". If we add indexes using SSMS to any table, the DataFlex app builder won't pick up on their existence. Have I got headaches from this?

    Regarding  >>> However, if we for whatever reason have a big transactional table that is a heap, the concern how to defragment it is valid

    Yes some of those HEAPS have large row counts.  (I'm trying to understand from our developers why so many HEAPS exist in the DB. Haven't gotten an answer back yet.)

    edm2

    Sunday, December 29, 2013 9:37 PM
  • It's impossible to say from your description whether the fact that the tables are heaps, and may be fragmented, is the cause of the deadlocks. Of course, the slower the access method, the wider the window for deadlocks to occur.

    If we add indexes using SSMS to any table, the DataFlex app builder won't pickup on their existence. Have I got headaches from this?

    Well, I am not sure why the DataFlex application should care about the indexes at all. I would expect the application to be concerned with the business logic and not the physical implementation.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Sunday, December 29, 2013 10:40 PM
  • Yes, I do recommend to have a CI on the table.

    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Monday, December 30, 2013 6:18 AM
  • >>>I would also be interested to see a reference for this statement. 

    If you use  ALTER TABLE .. REBUILD on the heap all NCI get rebuilt as well,

    http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-2930-fixing-heap-fragmentation/

    http://sqlblog.com/blogs/kalen_delaney/archive/2009/11/11/fragmentation-and-forwarded-records-in-a-heap.aspx


    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Monday, December 30, 2013 6:28 AM
  • I would also be interested to see a reference for this statement. 

    If you use  ALTER TABLE .. REBUILD on the heap all NCI get rebuilt as well,

    And? If you create a clustered index and drop it, the non-clustered indexes are rebuilt twice, so what's your point? (And what would be wrong in rebuilding the non-clustered indexes anyway?)


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Monday, December 30, 2013 8:08 AM
  • Yes some of those HEAPS have large row counts.  (I'm trying to understand from our developers why so many HEAPS exist in the DB. Haven't gotten an answer back yet.)

    Hi Edm,

    concerning this statement it's quite simple - they don't know about it.
    With deepest respect ...

    you have to separate database developers from GUI developers.
    The one doesn't know what the other is doing.

    In over 80% of my "task force" jobs exactly THAT was the reason.
    Unbelievable how many licenses have been sold of those products!

    Concerning the statement about the double rebuild of the NCIs, read my WIKI here:

    http://social.technet.microsoft.com/wiki/contents/articles/19211.dropping-a-clustered-index-will-not-reorganize-the-heap.aspx

    It describes each separate step and the work behind it.


    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

    Monday, December 30, 2013 9:37 AM
  • I think you missed my point; my point was that rebuilding the heap may lead to bad performance....

    I have never suggested creating and dropping the CI in order to 'fix' the fragmentation, if you read my posts.

     

    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence


    Monday, December 30, 2013 9:42 AM
  • I think you missed my point; my point was that rebuilding the heap may lead to bad performance....

    Which I still don't get. I asked why, and you said the NCI indexes will be rebuilt. How would that lead to bad performance?

    Of course, you can argue that declining to fix the problem by adding a clustered index (and keeping it) will maintain the bad performance you already have, but the question is how ALTER TABLE REBUILD on a heap can cause worse performance than you had before.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Monday, December 30, 2013 1:42 PM
  • I didn't go into the details but I am charged with finding the source of deadlocks in a sql 2005 server DB. The database contains about 400 tables of which 50% are HEAPS and the remaining ones not.

    Yes some of those HEAPS have large row counts. 

    Well, if you have many large heaps being used for transactional processing, I think you have found the source of your deadlocks!

    When you say heaps, I gather they do have nonclustered indexes?

    Rebuilding the heaps is not likely to help anything, btw.

    Adding indexes may help.  A lot!  Dataflex doesn't have to know about them, Dataflex will just keep sending the same SQL only now it will run a lot faster.

    What you should probably do is to treat each case separately.  Take one deadlock.  Look at the tables involved.  Look at the SQL involved.  Find a fix.  Move onto the next.  Etc.

    BTW- what kind of row counts do you have on the biggest tables?
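
    A quick way to answer this question is to list the heaps together with their approximate row counts (a sketch using standard catalog views; row counts in sys.partitions are approximate but cheap to read):

    ```sql
    -- All heaps (index_id = 0) in the current database, largest first.
    SELECT  SCHEMA_NAME(t.schema_id) AS SchemaName,
            t.name                   AS TableName,
            SUM(p.rows)              AS ApproxRows
    FROM    sys.tables t
    JOIN    sys.partitions p ON p.object_id = t.object_id
                            AND p.index_id  = 0   -- heap
    GROUP BY t.schema_id, t.name
    ORDER BY ApproxRows DESC;
    ```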

    Josh

    Monday, December 30, 2013 4:30 PM
  • Well, if you have many large heaps being used for transactional processing, I think you have found the source of your deadlocks!

    There is no reason to assume that just because there are heaps there will be more or fewer deadlocks than if all tables had clustered indexes. If you want a really deadlock-prone table, get one with a clustered index and many non-clustered indexes, where the clustering keys are updated all the time!


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Monday, December 30, 2013 10:35 PM
  • Hi Bulmukund,

    We can also use "alter table <table_name> rebuild" for heap table if SQL Server version is 2008.

    Thanks

    Chiranjib

    Tuesday, December 31, 2013 12:13 AM
  • There is no reason to assume that just because there are heaps there will be more or fewer deadlocks than if all tables had clustered indexes. If you want a really deadlock-prone table, get one with a clustered index and many non-clustered indexes, where the clustering keys are updated all the time!

    Oh yes, and unfortunately even if the CI is *not* updated often but you have many insert-selects or update-froms. I've had horrible problems with deadlocks on heavily indexed but "properly constructed" tables in SQL 2008 and R2.

    And I suppose you're right that just having a heap is not by itself sufficient unless it is also under-indexed and abused with bad code. But given apps originally generated by some foreign tool that didn't know enough to create the CIs in the first place, I'd expect that too; and even then the problem isn't that a heap is bad per se, only that it results in scans and escalation to table locks, or zillions of key lookups. Also, I sometimes suspect the optimizer has a chip on its shoulder, and once it fails to find a CI it doesn't try as hard as it should!

    And of course, OP - if the problem is deadlocks, always check which isolation level you're using.
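
    For reference, checking the isolation level is a one-liner (a sketch; the session-level codes are documented for sys.dm_exec_sessions):

    ```sql
    -- Isolation level of the current session: look for the
    -- 'isolation level' row in the output.
    DBCC USEROPTIONS;

    -- Or for all user sessions: 1 = READ UNCOMMITTED, 2 = READ COMMITTED,
    -- 3 = REPEATABLE READ, 4 = SERIALIZABLE, 5 = SNAPSHOT.
    SELECT session_id, transaction_isolation_level
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;
    ```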

    Josh


    • Edited by JRStern Tuesday, December 31, 2013 7:30 AM
    Tuesday, December 31, 2013 7:28 AM
  • >>>How would that lead to bad performance?


    If the table is large and you have many NCIs, the rebuild will generate a lot of transaction log (and possibly many VLFs), which may also lead to bad performance, no?



    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Tuesday, December 31, 2013 8:20 AM
  • If the table is large and  you have many NCI, so it will generate a lot of transaction log file (many VLF) which are also may lead the bad performance, no?

    Yes, an index rebuild can cause transaction-log growth, and you need to size your transaction log accordingly - so what's new? You don't get a lot of VLFs just because you rebuild indexes; but start small, rely on autogrow, and you are there. And there are many ways to arrive there. There is nothing special about rebuilding heaps.
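
    Pre-sizing the log means one (or a few) large manual growths instead of many small autogrow increments, which keeps the VLF count down. A sketch (database and file names are hypothetical):

    ```sql
    -- Grow the log to its working size in one step before heavy maintenance.
    ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, SIZE = 8192MB);
    GO
    -- Inspect the resulting VLF layout: one output row per VLF.
    DBCC LOGINFO('MyDb');
    ```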


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Tuesday, December 31, 2013 10:07 AM
  • Nothing new to you, but rebuilding a heap with many NCIs may lead to bad performance.

     >>>You don't get a lot of VLF just because you rebuild indexes

    Correct... but still...

      create table t1 
    ( c1 char(900),
      c2 char(900),
      c3 char(900),
      c4 char(900),
      c5 char(900),
      c6 char(900),
      c7 char(900),
      c8 char(900)
      )

      insert into t1
      select
       replicate('a',900),
      replicate('b',900),
      replicate('c',900),
      replicate('d',900),
      replicate('e',900),
      replicate('f',900),
      replicate('d',900),
      replicate('l',900)
       from sys.objects cross join  sys.objects t

      create index idx_c1 on t1(c1)
      create index idx_c2 on t1(c2)
      create index idx_c3 on t1(c3)
      create index idx_c4 on t1(c4)
      create index idx_c5 on t1(c5)
      create index idx_c6 on t1(c6)
      create index idx_c7 on t1(c7)
      create index idx_c8 on t1(c8)


      dbcc loginfo(81) --104

      alter table t1 rebuild

      dbcc loginfo(81) --149


    Happy New Year.


    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence


    Tuesday, December 31, 2013 2:05 PM
  • The conclusion of that is: if you don't pre-grow your log file but let it self-manage, you will get bad performance, whether you rebuild heaps or not.

    One argument, though, could be that heaps require bigger log files, since you must rebuild all indexes at once, whereas with a clustered index you can do them one by one.

    ...but if you cop out with ALTER INDEX ALL ON tbl REBUILD, there is no difference for a CI table. Watch the script below. The first database has a heap, the other two have a clustered index. In the first of those I rebuild all indexes at once, and in the second I rebuild them one by one. The first and the last databases end up with the same final log size and the same number of VLFs, whereas the second ends up with a bigger log file.

    It is worth observing that the heap database has a smaller log after the initial load - not surprising, since it has one index less. The database where the indexes are rebuilt one by one ends up with the smallest final size of the MDF file.


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Tuesday, December 31, 2013 5:03 PM
  • >>>Watch the script below. 

    I don't see the script you talked about :-)


    Best Regards,Uri Dimant SQL Server MVP, http://sqlblog.com/blogs/uri_dimant/

    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting: Large scale of database and data cleansing
    Remote DBA Services: Improves MS SQL Database Performance
    SQL Server Integration Services: Business Intelligence

    Wednesday, January 01, 2014 6:21 AM
  • Oops, here is the script:

    CREATE DATABASE heap
    ALTER DATABASE heap SET RECOVERY FULL
    go
    USE heap
    go
     create table t1 
    ( c1 char(900),
      c2 char(900),
      c3 char(900),
      c4 char(900),
      c5 char(900),
      c6 char(900),
      c7 char(900),
      c8 char(900)
      )
    
      create index idx_c1 on t1(c1)
      create index idx_c2 on t1(c2)
      create index idx_c3 on t1(c3)
      create index idx_c4 on t1(c4)
      create index idx_c5 on t1(c5)
      create index idx_c6 on t1(c6)
      create index idx_c7 on t1(c7)
      create index idx_c8 on t1(c8)
    
      insert into t1
      select
       replicate('a',900),
      replicate('b',900),
      replicate('c',900),
      replicate('d',900),
      replicate('e',900),
      replicate('f',900),
      replicate('d',900),
      replicate('l',900)
       from sys.objects cross join  sys.objects t
    
        EXEC sp_helpdb heap
    
      dbcc loginfo() --104
    
      alter table t1 rebuild
    
        EXEC sp_helpdb heap
    
      dbcc loginfo() --149
    go
    CREATE DATABASE CI
    ALTER DATABASE CI SET RECOVERY FULL
    go
    USE CI
     create table t1 
    ( c1 char(900),
      c2 char(900),
      c3 char(900),
      c4 char(900),
      c5 char(900),
      c6 char(900),
      c7 char(900),
      c8 char(900),
        id int IDENTITY PRIMARY KEY
      )
    
      create index idx_c1 on t1(c1)
      create index idx_c2 on t1(c2)
      create index idx_c3 on t1(c3)
      create index idx_c4 on t1(c4)
      create index idx_c5 on t1(c5)
      create index idx_c6 on t1(c6)
      create index idx_c7 on t1(c7)
      create index idx_c8 on t1(c8)
    
      insert into t1
      select
       replicate('a',900),
      replicate('b',900),
      replicate('c',900),
      replicate('d',900),
      replicate('e',900),
      replicate('f',900),
      replicate('d',900),
      replicate('l',900)
       from sys.objects cross join  sys.objects t
    
        EXEC sp_helpdb CI
    
      dbcc loginfo()
    
      alter index ALL ON t1 rebuild
    
      dbcc loginfo()
    
        EXEC sp_helpdb CI
    go
    
    CREATE DATABASE CIA
    ALTER DATABASE CIA SET RECOVERY FULL
    go
    USE CIA
     create table t1 
    ( c1 char(900),
      c2 char(900),
      c3 char(900),
      c4 char(900),
      c5 char(900),
      c6 char(900),
      c7 char(900),
      c8 char(900),
        id int IDENTITY CONSTRAINT pk PRIMARY KEY
      )
    
      create index idx_c1 on t1(c1)
      create index idx_c2 on t1(c2)
      create index idx_c3 on t1(c3)
      create index idx_c4 on t1(c4)
      create index idx_c5 on t1(c5)
      create index idx_c6 on t1(c6)
      create index idx_c7 on t1(c7)
      create index idx_c8 on t1(c8)
    
      insert into t1
      select
       replicate('a',900),
      replicate('b',900),
      replicate('c',900),
      replicate('d',900),
      replicate('e',900),
      replicate('f',900),
      replicate('d',900),
      replicate('l',900)
       from sys.objects cross join  sys.objects t
    
        EXEC sp_helpdb CIA
    
      dbcc loginfo()
    
      alter index pk    ON t1 rebuild
      alter index idx_c1    ON t1 rebuild
      alter index idx_c2    ON t1 rebuild
      alter index idx_c3    ON t1 rebuild
      alter index idx_c4    ON t1 rebuild
      alter index idx_c5    ON t1 rebuild
      alter index idx_c6    ON t1 rebuild
      alter index idx_c7    ON t1 rebuild
      alter index idx_c8    ON t1 rebuild
    
      dbcc loginfo()
    
        EXEC sp_helpdb CIA
    go
    USE tempdb
    go
    DROP DATABASE CI
    DROP DATABASE heap
    DROP DATABASE CIA

    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Wednesday, January 01, 2014 9:29 AM