memory exception when deploying tabular - doesn't jibe with what resources are available

  • Question

  • Hi, we run SQL Server 2017 Standard; only SSAS is running (not the database engine) on the server in question. I ran something like what you see in the code block below to size my model. The query returns 7,464,080,554.0 bytes (roughly 7 GB).

    I am able to process my tables from Visual Studio, and I can see the data and calculated measures in the grid view. My server has 32 GB of RAM, and only 5 GB of that was in use when I first attempted deployment.

    My HardMemoryLimit is 0 (value), 0 (current value), 0 (default value). LowMemoryLimit (Will says this is not relevant to tabular; Himanshu seems to disagree at https://blogs.msdn.microsoft.com/himanshu1/2013/04/15/understanding-memory-configuration-in-bi-semantic-model/) is 65%, 65%, 65%. TotalMemoryLimit is 80%, 80%, 80%. VertiPaqMemoryLimit is 60%, 60%, 60%. Whether those percentages are applied against the 32 GB of physical RAM, against the 16 GB maximum for Standard edition, or against one of those values minus the OS's RAM requirements is beyond me, and seemingly nowhere to be found online.
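
    If the usual convention applies here, namely that memory property values of 100 or less are read as a percentage of the machine's total physical memory (that interpretation is my assumption, with the Standard edition cap enforced separately), the limits on this box would work out roughly as in the sketch below.

    -- Rough arithmetic only: assumes the memory-limit percentages apply to the
    -- 32 GB of physical RAM in this server, not to the Standard-edition cap.
    declare @physical_ram_gb decimal(10,2) = 32.0;
    select 'LowMemoryLimit (65%)'       as setting, @physical_ram_gb * 0.65 as approx_gb union all
    select 'VertiPaqMemoryLimit (60%)',             @physical_ram_gb * 0.60 union all
    select 'TotalMemoryLimit (80%)',                @physical_ram_gb * 0.80;
    -- roughly 20.8 GB, 19.2 GB and 25.6 GB respectively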

    I got an error saying...

    Failed to save modifications to the server. Error returned: 'There's not enough memory to complete this operation. Please try again later when there may be more memory available.'.

    ...when I tried to deploy.

    Even if I got zero compression, wouldn't you think my model should have deployed?

    -- estimate the model's size: rows per table times estimated bytes per column
    ;with cte as
    (
        select 'dimA'  t_name, count(*) countstar from star.dimA  union all
        select 'dimB'  t_name, count(*) countstar from star.dimB  union all
        select 'dimC'  t_name, count(*) countstar from star.dimC  union all
        select 'dimD'  t_name, count(*) countstar from star.dimD  union all
        select 'dimE'  t_name, count(*) countstar from star.dimE  union all
        select 'dimF'  t_name, count(*) countstar from star.dimF  union all
        select 'factA' t_name, count(*) countstar from star.factA union all
        select 'factB' t_name, count(*) countstar from star.factB --union all
    )
    select --*, storagesize * countstar extended--
        sum(storagesize * countstar) totalreqmt --7,464,080,554.0
    from
    (
        select table_schema, table_name, column_name, data_type,
               -- estimated bytes per value for each source data type
               case when data_type = 'int'               then 4
                    when data_type = 'bigint'            then 8
                    when data_type = 'tinyint'           then 1
                    when data_type in ('varchar','char') then CHARACTER_MAXIMUM_LENGTH
                    when data_type in ('datetime2')      then 8
                    when data_type = 'timestamp'         then 8
                    when data_type = 'date'              then 3
                    when data_type = 'smallint'          then 2
                    when data_type = 'bit'               then .5 -- I know there are 2 of these
                    when data_type = 'datetime'          then 8
                    when data_type = 'float'             then 4
               end [storagesize],
               countstar
        from information_schema.columns a
        left join cte b
          on a.table_name = b.t_name
        where ... -- a bunch of predicate criteria that filter my schema and tables of interest
    ) x
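
    Once a model (even a skinnied-down one) does deploy, the estimate above can be sanity-checked against what VertiPaq actually stores. The sketch below assumes the DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS DMV, run from an MDX query window in SSMS against the tabular instance; the column names are from memory and worth verifying against what the instance returns.

    -- Run against the deployed tabular database from an MDX/DMX query window.
    -- USED_SIZE is the compressed size (in bytes) of each column segment, so the
    -- biggest entries show which columns dominate the memory footprint.
    select DIMENSION_NAME,  -- table name in a tabular model
           COLUMN_ID,       -- column name
           USED_SIZE        -- compressed bytes for this segment
    from $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
    order by USED_SIZE desc;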

     







    • Edited by db042190 Tuesday, October 22, 2019 4:50 PM clarity
    Monday, October 21, 2019 3:29 PM

All replies

  • Here is what I did just to keep things moving. I deleted one of the more voluminous fact tables that really isn't involved yet; I deleted it from the diagram view in SSDT. Then I visited each table in the same view and deleted any column that seemed dense, which I think they call "high cardinality". That included unnecessary row versions, PKs, dates, IDs, etc. And the darn thing deployed. Now SSAS is only using about 2.5 GB of memory, where it was up to 8.6 GB following the failed deployment. Performance is pretty amazing in the SSMS browser even after slicing on about 8 attributes. (A quick way to spot such columns in the source schema is sketched after this reply.)

    My takeaways are not only that dense columns come with a price (steeper than expected) and economy is a good strategy, but also that my earlier attempts to deploy this model loaded tables that were of no use to anybody and left them in SSAS (and in memory), consuming significant memory (about 8.6 GB) even after erroring out.

    Just to be safe, I think I'm going to look into the memory limit in Enterprise edition, because this seemed a little too close for comfort anyway. So far it looks to me like Standard's limit is 16 GB, perhaps with an adjustment for the OS. I can't find yet what Enterprise edition's limit is.
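
    As promised above, one way to spot the dense columns before deploying is to count distinct values per candidate column in the source tables. The sketch below is illustrative only; star.factA and the column names are placeholders for whatever tables and columns are actually in the model.

    -- Hypothetical example: distinct counts per candidate column in one source table.
    -- Columns with very high distinct counts (row versions, surrogate keys, timestamps)
    -- compress poorly in VertiPaq and are good candidates to drop from the model.
    select count(*)                       as total_rows,
           count(distinct SomeIdColumn)   as distinct_ids,   -- placeholder column
           count(distinct SomeDateColumn) as distinct_dates  -- placeholder column
    from star.factA;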


    • Edited by db042190 Tuesday, October 22, 2019 4:40 PM simplifying the question
    Monday, October 21, 2019 7:49 PM
  • Hi db042190,

    Thanks for your post.

    >> In my task manager I see available memory = 19 GB of the original 32. This doesn't jibe with what appears to be a 65% startup low-memory grab by SSAS. Shouldn't SSAS have grabbed about 20 GB if my low memory limit is 65%?

    The LowMemoryLimit setting is for multidimensional instances; the VertiPaqMemoryLimit setting is for tabular instances. Please do not mix up the two settings.

    Default memory configuration

    <<

    Failed to save modifications to the server. Error returned: 'There's not enough memory to complete this operation. Please try again later when there may be more memory available.'

    ...when I deploy. Admittedly I haven't checked yet that all my FKs are 100% present and valid. Can the community direct me? I even got memory errors when RAM was down at around 5 GB in use. It's possible the 7 GB or so that is now also in use is being used by the skinnied-down model (and workspace) I deployed since; I'm not sure, because as you can see much later in the post each should only take 103-104 MB.

    <<

    Please increase the value of the VertiPaqMemoryLimit setting, or change the VertiPaqPagingPolicy setting to 1, then restart the SSAS instance and see if it works. (Both properties can be changed in SSMS via the instance's Properties dialog with "Show Advanced (All) Properties" checked, or by editing msmdsrv.ini.) Otherwise, you will need to add more physical memory to the server.
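
    To see what is actually holding memory on the instance after a failed deployment, you can also query the SSAS dynamic management views from an MDX query window in SSMS. The sketch below uses the DISCOVER_OBJECT_MEMORY_USAGE DMV; treat the exact column list as an assumption and adjust it to whatever your instance returns.

    -- Run from an MDX/DMX query window connected to the tabular instance.
    -- Shows the nonshrinkable memory held by each object, largest first.
    select OBJECT_PARENT_PATH,
           OBJECT_ID,
           OBJECT_MEMORY_NONSHRINKABLE
    from $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE
    order by OBJECT_MEMORY_NONSHRINKABLE desc;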

    By the way, I don't have time to read all the replies; I have to spend more time dealing with other tasks. Please express your issue more simply, in a way that is easy to understand.

    Sorry for any inconvenience this may cause you.

    Best Regards,

    Will


    MSDN Community Support
    Please remember to click "Mark as Answer" the responses that resolved your issue, and to click "Unmark as Answer" if not. This can be beneficial to other community members reading this thread. If you have any compliments or complaints to MSDN Support, feel free to contact MSDNFSF@microsoft.com.

    Tuesday, October 22, 2019 6:48 AM