Cube is going offline multiple times a day

  • Question

  • Hi,

    I have recently been encountering a problem with an SSAS cube. Several times a day the cube goes offline and cannot be browsed, and after some time it automatically comes back online. I am unable to figure out what is happening.

    FYI: the cube is Processed Full every 15 minutes.

    Regards,

    Ram

    Tuesday, April 14, 2015 9:34 AM

Answers

  • This is normal and expected behavior when processing the cube.

    Please see:

    https://msdn.microsoft.com/en-us/library/ms174860.aspx

    If the processing job succeeds, an exclusive lock is put on the object when changes are being committed, which means the object is temporarily unavailable for query or processing. During the commit phase of the transaction, queries can still be sent to the object, but they will be queued until the commit is completed.
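
    For context, the scheduled job described in the question is presumably issuing an XMLA Process command along these lines (the database and cube IDs below are placeholders, not your actual object names); it is the commit phase at the end of this command that takes the exclusive lock:

        <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Object>
            <!-- placeholders - substitute your own object IDs -->
            <DatabaseID>YourDatabase</DatabaseID>
            <CubeID>YourCube</CubeID>
          </Object>
          <!-- drops and fully rebuilds the cube data; the commit at the end needs an exclusive lock -->
          <Type>ProcessFull</Type>
        </Process>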

    Tuesday, April 14, 2015 12:38 PM
  • Tom's answer is the most likely cause of this behaviour.

    A couple of extra points of explanation might help shed some light on what's going on:

    For a cube which processes in 6 minutes the commit phase is likely to be very short (probably less than a second), but the catch is that the commit phase needs an exclusive database lock. Once the commit lock is requested, any new queries are queued up behind it, but the commit can't start its work until any existing queries complete. So if you have one long-running query that takes 20 seconds, it will appear as if the server has stopped responding for those 20 seconds, because new queries will not be able to start until the long-running query finishes and the commit can do its work.

    SSAS does have a "safety valve" setting called ForceCommitTimeout (which defaults to 30 seconds). If a commit operation has waited longer than this timeout, SSAS will attempt to cancel any running queries to allow the commit to complete.
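
    If you ever need to change it, ForceCommitTimeout is a server-level property expressed in milliseconds; you can set it from SSMS (Analysis Server Properties, with advanced properties shown) or in msmdsrv.ini. The fragment below is only a sketch of what the msmdsrv.ini entry looks like:

        <ConfigurationSettings>
          <!-- ...other settings... -->
          <!-- how long (ms) a pending commit waits before cancelling blocking queries; 30000 = 30 seconds (default) -->
          <ForceCommitTimeout>30000</ForceCommitTimeout>
        </ConfigurationSettings>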



    http://darren.gosbell.com - please mark correct answers

    Tuesday, April 14, 2015 12:56 PM

All replies

  • Hi Ram, 

    Do you have any kind of automated deployment process? From what you say, it seems like the cube is getting unprocessed by something (a deployment can cause that) and then, when the cube is processed full again, it becomes browsable (as it should).

    Regards

    Pau

    Tuesday, April 14, 2015 11:35 AM
  • Hi Ram,

    AFAIK a full process involves a short downtime: the data is deleted and subsequently reprocessed.

    You could try Process Add (but don't forget to do a Process Full nightly, as performance will degrade over time with this option), or do the processing somewhere else and use Detach/Attach commands to swap the newly processed database in for the old one. However, this also involves some downtime, although it will be much shorter.
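
    If you go the Detach/Attach route, the swap itself is done with XMLA commands roughly like the ones below (the database ID and folder path are placeholders; the actual processing would happen on a separate instance or copy of the database):

        <Detach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Object>
            <DatabaseID>YourDatabase</DatabaseID>
          </Object>
        </Detach>

        <Attach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <!-- path to the freshly processed database folder - placeholder -->
          <Folder>D:\OLAPData\YourDatabase.0.db</Folder>
        </Attach>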

    Cheers

    Martin

    Tuesday, April 14, 2015 11:37 AM
    Yes, it is automated, Pau. The job is scheduled every 15 minutes and completes in 6 minutes.

    As you said, "the cube is getting unprocessed by something (a deployment can cause that) and then, when the cube is processed full again, it becomes browsable".

    If it were unprocessed on the first run, then it would be processed again on the next run; but in my case the cube is unbrowsable during the first run and becomes browsable again before the next run starts.

    Regards,

    Ram

    Tuesday, April 14, 2015 12:08 PM
  • Hi Ram, 

    If the automated process involves deployment AND processing, yes, it could be because of what Tom and Darren pointed out. 

    Besides that, with such a short period of time between processings, you may be interested in partitioning your cube so you can process only the "hot" partition with Process Data + Process Indexes, or Process Add (depending on the operations performed on the base data), instead of rebuilding the whole cube with a Process Full operation. This way you will refresh only the data you last added or modified, without impacting your system and queries as much.
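
    As a rough illustration (the measure group and partition IDs are placeholders, and the exact choice of processing types depends on your data), refreshing just the "hot" partition could be done with an XMLA batch like this instead of a full rebuild:

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <!-- reload the data of the hot partition only -->
          <Process>
            <Object>
              <DatabaseID>YourDatabase</DatabaseID>
              <CubeID>YourCube</CubeID>
              <MeasureGroupID>YourMeasureGroup</MeasureGroupID>
              <PartitionID>HotPartition</PartitionID>
            </Object>
            <Type>ProcessData</Type>
          </Process>
          <!-- rebuild indexes/aggregations for that partition -->
          <Process>
            <Object>
              <DatabaseID>YourDatabase</DatabaseID>
              <CubeID>YourCube</CubeID>
              <MeasureGroupID>YourMeasureGroup</MeasureGroupID>
              <PartitionID>HotPartition</PartitionID>
            </Object>
            <Type>ProcessIndexes</Type>
          </Process>
        </Batch>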

    Regards

    Pau

    Tuesday, April 14, 2015 1:06 PM