In-memory operations in SQL Server 2012 (x64, SP1 CU10)


  • hi all,

I am trying to tune a data-warehouse stored procedure that joins about a dozen tables, producing a fact table of approx 4 GB and 10 million records in approx 15 minutes.

The problem is that tempdb seems to be used very heavily, even on a server with 96 GB of RAM (max server memory set at 64 GB), where I'd expect the available RAM to be used exclusively (or at least much more than the approx 16 GB it currently appears to use).

We have tried temp tables, table variables, CTEs, and heap-only tables -- but all of them appear to use tempdb heavily.  Am I missing something elementary?  (We'd eventually like to move to SQL 2014, but I'd like to see if I can speed things up in SQL 2012 (SP1 CU10). And yes, we could move tempdb to SSD :-) )

    Are there any specific tricks for lifting the processing into RAM only, or allocating more RAM for DW purposes, to minimize physical tempdb usage?

    thx much for any thoughts & recommendations.


    • Edited by Cos2008 Monday, June 09, 2014 7:07 PM
    Monday, June 09, 2014 5:49 PM

All replies

  • Sometimes, with many temporary operations and very long transactions, tempdb usage can't be avoided in high-volume database systems. Check whether you can divide the transaction into multiple parts to reduce tempdb usage, and whether there is any way to use permanent tables instead of table variables or temp tables.

    How many tempdb data files does your system have, and how are they sized? If you have multiple tempdb files, are they all configured with equal sizes?

    A few points to consider ...

    1. As a general rule, if the number of logical processors is less than or equal to 8, use the same number of tempdb data files as logical processors.

    2. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels.

    3. Review your code and make changes to the workload/code where needed.

    4. Identify the transaction/query that is causing tempdb to grow, and look at ways of reducing its impact.

    5. In the extreme case that you can't do that, the client needs to accept that their system requires more disk space to accommodate tempdb growth, and possibly more memory or CPU.

    6. Apart from that, it looks like you already know that the recommended methods for shrinking tempdb are a server restart, or a shrink while there is no other activity in tempdb.

    7. Watch for spill warnings in query plans (hash, sort, or exchange), which indicate that there was not enough query execution memory and an operator had to spill results to tempdb.

    8. Incorrect, excessive usage of temp tables: *always* using a temp table when it may sometimes be better not to; pulling more columns or rows into a temp table than are actually required (e.g. SELECT * into the temp table from a user table, with no WHERE clause); or creating nonclustered indexes on a temp table that are never used. I’ve seen this over and over with client code.

    9. Index rebuilds that use SORT_IN_TEMPDB.

    10. Using one of the flavors of snapshot isolation and allowing long-running queries that cause the version store to grow very large.

    11. RAID levels & faster hard drives may help you :)
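    Points 1 and 2 above can be sketched in T-SQL. The logical name tempdev is the usual default, but the second file's name, path, and the sizes shown are placeholder assumptions, not recommendations for your server:

    ```sql
    -- Inspect current tempdb data files and their sizes (size is in 8 KB pages)
    SELECT name, physical_name, size * 8 / 1024 AS size_mb, growth
    FROM tempdb.sys.database_files
    WHERE type_desc = 'ROWS';

    -- Resize the primary data file and add a second, equally sized file.
    -- File name, path, and sizes below are illustrative placeholders.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
              SIZE = 4096MB, FILEGROWTH = 512MB);
    ```

    Keeping all data files the same size matters because SQL Server's proportional-fill algorithm only spreads allocations evenly across equally sized files.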

    The following restrictions also apply to tempdb:

        The database CHECKSUM option cannot be enabled.
        A database snapshot cannot be created on tempdb.
        DBCC CHECKALLOC and DBCC CHECKCATALOG are not supported.
        Only offline checking for DBCC CHECKTABLE is performed. This means that a TAB-S lock is needed. There are internal consistency checks that occur when tempdb is in use. If these checks fail, the user connection is broken and the tempdb space used by the connection is freed.
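    To act on point 4 above (identifying what is driving tempdb growth), one sketch using the session space-usage DMV, available in SQL Server 2012, is:

    ```sql
    -- Top tempdb consumers by session (user objects + internal objects)
    SELECT s.session_id,
           s.login_name,
           SUM(u.user_objects_alloc_page_count
             + u.internal_objects_alloc_page_count) * 8 / 1024 AS alloc_mb
    FROM sys.dm_db_session_space_usage AS u
    JOIN sys.dm_exec_sessions AS s
      ON s.session_id = u.session_id
    WHERE u.database_id = 2          -- tempdb
    GROUP BY s.session_id, s.login_name
    ORDER BY alloc_mb DESC;
    ```

    Internal-object allocations (sorts, hash spills, spools) showing up here for your ETL session would point at execution-memory spills rather than explicit temp tables.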


    Raju Rasagounder MSSQL DBA

    Tuesday, June 10, 2014 12:48 AM
  • thx Raju, 
    I will take some of these points into consideration -- good reminders. The only problem is that I'm trying to eliminate tempdb usage altogether, NOT optimize it. With so much RAM on the server, it seems to me there have got to be ways to lift much of the structures into RAM so they don't spill to tempdb!!  (Maybe I ought to google 'avoiding tempdb spill'  :-)  )

    Would love to hear other opinions also!  -- thx a bunch though.


    • Edited by Cos2008 Tuesday, June 10, 2014 2:21 AM
    Tuesday, June 10, 2014 2:17 AM
  • yep, it looks like I may have skipped the 'tempdb spill' class in high school.  Ahh, those summer days!!   :-)  Seriously now, that's what seems to be happening... 
    Tuesday, June 10, 2014 3:35 AM
  • Thanks Raju, good pointers to note!!

    SQL Server 2014 is a good one to try.

    Santosh Singh

    Tuesday, June 10, 2014 11:46 AM
  • Hi,

    Cos, I am not sure about the reason behind your excessive tempdb usage, but the suggestions given by Raju do not always hold true. Adding tempdb data files should only be done if you are seeing contention in tempdb -- a clear myth busted by Paul Randal.

    Now, I am not saying that there should not be multiple data files, but unless you see SGAM/GAM contention you should not add data files; doing so can cause issues, as explained in the article above. I know MS made a recommendation of one tempdb data file per core, but that recommendation is for high-OLTP workloads, which might not be your scenario (it could be, which is why I stress the word test -- check whether you are actually getting contention).

    Use the link below to find out whether you have tempdb contention.

    Then proceed accordingly.
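    Since the link itself didn't come through, a common sketch for spotting tempdb allocation contention is to look for live PAGELATCH waits on tempdb allocation pages. In the resource_description, 2:N:1 is a PFS page, 2:N:2 a GAM page, and 2:N:3 an SGAM page (database_id 2 is tempdb):

    ```sql
    -- Sessions currently waiting on latches for tempdb pages
    SELECT session_id,
           wait_type,
           wait_duration_ms,
           resource_description    -- e.g. '2:1:1' = tempdb, file 1, PFS page
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';
    ```

    If this returns nothing under load, adding data files is unlikely to help; the tempdb usage is then more likely spills or version store, not allocation contention.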

    I don't think tempdb has a direct relation to RAM, but you could consider increasing max server memory from 64 to 75 GB if you have a dedicated system for SQL Server with just one instance.

    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.

    My TechNet Wiki Articles

    Tuesday, June 10, 2014 1:03 PM
  • hi Shanky, 

    I appreciate Raju's feedback and your additional notes as well.  All are valuable, though each should be taken with a grain of salt since everything depends on the environment -- but they're excellent pointers to investigate further.  Thx a big bunch for the link to Paul's articles -- very valuable stuff!!  

    I was wondering if there are any SQL instance parameters for data-warehouse-specific workloads that, in conjunction with better SQL code writing, would allow more RAM (in SQL 2012 SP1 CU10) to be used under these specific circumstances (many joins and full table scans).  

    thx again,


    Tuesday, June 10, 2014 1:56 PM
  • I was wondering if there are any SQL instance parameters for datawarehouse specific workloads, that, in conjunction to better SQL code writing,

    Unfortunately I am not well versed in data warehousing, so I won't comment, but it would be difficult to find such parameters since almost all environments differ in some respect.

    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.

    My TechNet Wiki Articles

    Tuesday, June 10, 2014 2:08 PM
  • Cos,

    In similar situations to yours, my ultimate solution was to install extra memory in the servers and move tempdb to a local RAMDisk.  You can use something like 12RAM for your software, or any other software package really.  That should reduce latency to around 1 ms read/write and free the SAN HBA from the burden of the high queue-depth buildup of sequential IO that goes along with the spills.  So you win on two fronts: the tempdb issue gets resolved, and the other disks see reduced contention.  If you go this route, make sure you lower your SQL max server memory value to reflect the memory now set aside for the RAMDisk allocation, so you don't impact buffer cache performance.  And as always, I recommend you test in a non-production environment first if possible.  Luckily the change is very quick to implement and roll back, assuming you can restart your SQL services when needed without jumping through hoops.
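    Lowering max server memory to make room for the RAMDisk is a one-liner; the numbers below are illustrative only (64 GB original, 16 GB carved out for the RAMDisk):

    ```sql
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Was 65536 MB (64 GB); drop to 49152 MB (48 GB) to leave 16 GB for the RAMDisk
    EXEC sys.sp_configure 'max server memory (MB)', 49152;
    RECONFIGURE;
    ```

    This takes effect without a service restart, so it pairs well with the quick-rollback point above.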

    Good Luck!


    Tuesday, January 13, 2015 8:05 PM