There is insufficient system memory in resource pool 'internal' to run this query / CACHESTORE_BROKERTO

  • Question

  • Hi

    We have a dev environment of SQL 2012 that has 18GB assigned to it.  The server has 48GB in total and hosts 2 other instances.  One particular instance, let's call it E1, has a database restored to it every day from production. The DB is about 1TB, alongside a couple of smaller ones.   The load on this DB from the web application is very light, only about 2 or 3 users testing specific data issues.  Users are intermittently hitting the following error (it's the same error in the event logs; sometimes it is thrown back to the user):

    "There is insufficient system memory in resource pool 'internal' to run this query'"

     

    At this point a restart is usually required and things start working again.  From looking at DBCC MEMORYSTATUS, one thing we have noticed is that the value of Pages Allocated for 'CACHESTORE_BROKERTO (node 0)' seems to grow to, and stay at, a high value.  I'm not exactly sure what this value means.
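
    In case it helps, this is roughly how I watch that clerk between restarts, outside of DBCC MEMORYSTATUS (just a sketch against sys.dm_os_memory_clerks; on SQL 2012 the relevant column is pages_kb):

        -- Memory (KB) held by the Service Broker cache stores, per memory node.
        SELECT [type], name, memory_node_id, pages_kb
        FROM sys.dm_os_memory_clerks
        WHERE [type] LIKE 'CACHESTORE_BROKER%'
        ORDER BY pages_kb DESC;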

    When looking at sys.conversation_endpoints there are only around 200 entries... I've seen some articles that talk about leaks, but I don't see that here given the number of rows.
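
    For reference, this is the kind of breakdown behind that ~200 figure (a simple sketch, grouping by conversation state and far service):

        -- How many conversation endpoints exist, and in what state.
        SELECT state_desc, far_service, COUNT(*) AS endpoint_count
        FROM sys.conversation_endpoints
        GROUP BY state_desc, far_service
        ORDER BY endpoint_count DESC;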

    After a restart of SQL, I clear sys.conversation_endpoints and then check memory status, and the Pages Allocated value for CACHESTORE_BROKERTO continues to grow.  I don't know if this is the source of the problem, but it's the one value that seems to stand out.
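
    To be clear, "clearing" the endpoints here means ending every conversation with cleanup, roughly like this (a dev-only sketch; WITH CLEANUP drops the endpoint without notifying the other side, so don't use it where the messages matter):

        DECLARE @handle uniqueidentifier;
        DECLARE handles CURSOR LOCAL FAST_FORWARD FOR
            SELECT conversation_handle FROM sys.conversation_endpoints;
        OPEN handles;
        FETCH NEXT FROM handles INTO @handle;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            END CONVERSATION @handle WITH CLEANUP;
            FETCH NEXT FROM handles INTO @handle;
        END;
        CLOSE handles;
        DEALLOCATE handles;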

    One could argue that the memory allocation is low, but the usage on this DB is also very low.  If there is memory pressure, wouldn't SQL Server use the page file?

    I'll post output from memorystatus in next post.....

    SQL version

    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
    May 14 2014 18:34:29
    Copyright (c) Microsoft Corporation
    Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

     

     

    Anyone got any thoughts or suggestions on how to investigate this further?

     

    Thanks

     

    Amit


    Friday, July 24, 2015 5:42 PM

All replies

  • It seems like this Microsoft blog describes your exact issue. You may want to take a look:

    http://blogs.msdn.com/b/poojakamath/archive/2014/12/24/there-is-insufficient-system-memory-in-resource-pool-internal-to-run-this-query-error-701-severity-17-state-123-quot-on-ssb-target-server-cachestore-brokerto-is-consuming-memory.aspx

    It discusses broker endpoint memory leaking over 18GB of RAM.

    It also refers to this article which helps resolve the problem:

    http://rusanu.com/2014/03/31/how-to-prevent-conversation-endpoint-leaks/
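
    As I read it, the gist of that second article is to always end conversations on both sides instead of firing and forgetting them. On the target side that usually looks something like the sketch below (the procedure and queue names here are made up for illustration):

        CREATE PROCEDURE dbo.TargetQueueActivation  -- illustrative name
        AS
        BEGIN
            DECLARE @handle uniqueidentifier, @message_type sysname;
            WHILE 1 = 1
            BEGIN
                WAITFOR (
                    RECEIVE TOP (1)
                        @handle = conversation_handle,
                        @message_type = message_type_name
                    FROM dbo.TargetQueue  -- illustrative queue name
                ), TIMEOUT 5000;
                IF @@ROWCOUNT = 0 BREAK;

                -- End our side of the conversation when the dialog is done,
                -- so endpoints do not pile up in sys.conversation_endpoints.
                IF @message_type IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                                     N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
                    END CONVERSATION @handle;
            END;
        END;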


    I hope you found this helpful! If you did, please vote it as helpful on the left. If it answered your question, please mark it as the answer below. :)


    Friday, July 24, 2015 7:47 PM
  • When looking at sys.conversation_endpoints there are only around 200 entries... I've seen some articles that talk about leaks, but I don't see that here given the number of rows.

    Unless there is intense Service Broker traffic in the database, that sounds a tad high. It looks like you are not closing the conversations properly (maybe because the server you are trying to talk to is not reachable from this dev server).

    One could argue that the memory allocation is low, but the usage on this DB is also very low.  If there is memory pressure, wouldn't SQL Server use the page file?

    Not if SQL Server has been granted "Lock pages in memory". And in any case, you have told SQL Server not to use more than 18 GB for the buffer pool.
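
    You can check both of those points from the instance itself (a quick sketch; a non-zero locked_page_allocations_kb means Lock pages in memory is in effect):

        -- Configured memory cap for the instance.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)';

        -- Locked pages in use by the SQL Server process (non-zero => LPIM granted).
        SELECT locked_page_allocations_kb, physical_memory_in_use_kb
        FROM sys.dm_os_process_memory;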


    Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
    Friday, July 24, 2015 9:16 PM
  • Can you track it to specific queries?  Are they queries that scan or return a lot of data?

    Do you see the same problem in production?  Does the production instance have a lot more RAM?

    Josh

    Friday, July 24, 2015 11:20 PM
  • I've not been able to track it to a specific query just yet. I have a feeling, though, that it may be related to something doing a table scan across all partitions, but I am still working on that.
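
    In case it helps narrow things down, I have been looking at the active memory grants roughly like this (a sketch using sys.dm_exec_query_memory_grants; it only shows queries currently holding or waiting for a grant):

        SELECT g.session_id,
               g.requested_memory_kb,
               g.granted_memory_kb,
               t.text AS query_text
        FROM sys.dm_exec_query_memory_grants AS g
        CROSS APPLY sys.dm_exec_sql_text(g.sql_handle) AS t
        ORDER BY g.requested_memory_kb DESC;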

    Production doesn't seem to have that issue; that instance has 48GB of RAM.

    Thanks

    Monday, July 27, 2015 1:21 PM
  • Thanks.  I had read both of those articles prior to submitting my question; that's one of the reasons I knew to check the conversation endpoints table :).  After manually clearing that table, I still see the values for CACHESTORE_BROKERTO climb in MEMORYSTATUS, and this is what is puzzling me.  So it seems I have that problem, but I do not see the same symptoms.

    Monday, July 27, 2015 1:23 PM
  • Well, I have no ideas about the broker cache stuff; that could be causing the memory issue - which *still* should not be occurring, right, unless the broker cache is totally out of control.  So it sounds like you've tripped across something that is at least a minor SQL Server bug, one that surfaces only when memory gets tight.  But it might still be avoided by tuning up the one or two queries that are triggering it, if you can just find them.

    Josh

    Monday, July 27, 2015 4:08 PM
  • Okay, an update on this.  For the dev restores, issuing ALTER DATABASE ... SET NEW_BROKER resolves the problem (sketch below).  So it seems that when we were reusing the original production Service Broker GUID, some background tasks started running which increased the memory usage. I don't know exactly what it is doing, as all the transmission queues are empty and there's only a handful of rows in sys.conversation_endpoints. There must be some other area I am missing where things are queued up.  I know NEW_BROKER pretty much kills all queues/conversations etc., so that helps my problem. I would still like to understand what caused the original issue....
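
    For anyone hitting the same thing, this is roughly what I now run after the restore, plus the checks that showed the queues were empty (the database name is a placeholder):

        -- Give the restored dev copy its own Service Broker GUID,
        -- dropping all existing conversations and queued messages.
        ALTER DATABASE [YourRestoredDevDb]   -- placeholder name
            SET NEW_BROKER WITH ROLLBACK IMMEDIATE;

        -- Sanity checks afterwards.
        SELECT COUNT(*) AS transmission_queue_rows FROM sys.transmission_queue;
        SELECT COUNT(*) AS conversation_endpoints  FROM sys.conversation_endpoints;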


    To address Josh's last comment, I tested this in isolation by restoring the database to a new SQL instance with no traffic/activity on it beyond the internal stuff SQL does.  I agree there may be a bug, although as I mentioned I am not sure if there's some other queue I should clear. Anyway, the problem is worked around for now....
    • Edited by itismeap02 Thursday, September 3, 2015 1:13 PM correction
    • Proposed as answer by Naomi N (Moderator) Thursday, September 3, 2015 5:19 PM
    Thursday, September 3, 2015 1:11 PM