Messages flow from A to B, and you capture those message movements. Persisting these movements over the day, week, or month helps you identify how many messages you process in a given time frame. With Serverless360, you can set up monitoring to receive notifications when the number of messages required by your business processes is not received within a specific time frame. Serverless360 offers this capability through its data monitoring feature.

Introduction

With the data monitoring feature in Serverless360, you can do more than just verify the number of messages in a given time frame. You can also check specific metrics in Logic Apps and Service Bus, such as the number of failed runs or the number of executions. Data monitoring in Serverless360 supports a business practice in which critical business data is routinely tested against quality control rules to ensure it meets established standards for consistency. In a cloud-native integration solution, this means you can verify that a Logic App run completes, that a function executes within its boundaries, and that queues do not become congested with messages.

Scenario

In this blog, we revisit a situation from the blog post Monitoring a Composite Cloud-Native Solution using Serverless360. It is a scenario where a cloud-native integration solution ingests data from a public API (https://openexchangerates.org), converts the time format of the data, and stores the data in a Cosmos DB instance. Furthermore, we now assume that a support staff member of an organization leveraging Serverless360 monitors this solution.

In the solution, we monitor the ingestion of data through the first Logic App, the Service Bus queue, the function, and the second Logic App. This cloud-native integration solution runs every hour; the first Logic App is triggered by a schedule set to an interval of one hour.
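To make the scenario concrete: the function in this solution converts an Epoch (Unix) timestamp to a DateTime. A minimal sketch of what such a conversion might look like is shown below; the payload shape and the `timestamp` field name are assumptions based on the openexchangerates.org response format, not the actual function code.

```python
from datetime import datetime, timezone

def epoch_to_datetime(payload: dict) -> dict:
    # Assumed payload shape: the API response carries a Unix epoch
    # in a "timestamp" field; convert it to an ISO 8601 UTC string.
    converted = dict(payload)
    converted["timestamp"] = datetime.fromtimestamp(
        payload["timestamp"], tz=timezone.utc
    ).isoformat()
    return converted

sample = {"timestamp": 1609459200, "base": "USD"}
print(epoch_to_datetime(sample)["timestamp"])  # 2021-01-01T00:00:00+00:00
```

In the actual solution this logic runs as an Azure Function, triggered by the message the first Logic App places on the queue.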

Setting up Data Monitoring

You can set up data monitoring in Serverless360 quickly. You select Monitoring in the left pane and then click Configure Data Monitoring. Subsequently, you click +Create and choose one of the services: queues, topics, Logic Apps, or Azure Functions. Let’s choose Azure Functions and point to the function responsible for converting Epoch to DateTime. A new pane appears, where you can provide a friendly monitor name and choose the Function App. The screenshot below shows all kinds of metrics you can choose from for your monitoring. The parameters shown belong to Azure Functions and range from Data In to Garbage Collection.

We set the metric Function Execution Units (Count) to Equal 1, the number of executions we expect. We set the warning threshold to two, for when the function runs more than once, and to zero, for when the function did not run at all. The Logic App for ingesting data runs once an hour and delivers one payload to the function, resulting in one execution. Hence, we expect a successful monitoring run.

For the Logic Apps, we can set the Runs Completed metric to one, and for the queue, the Active Message Count equal to zero. These settings are fairly basic; however, they suffice for our data monitoring needs. We expect one run to occur every hour, one execution of the function, and no active messages on the queue. Each data monitor will run every hour.
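Conceptually, each of these rules compares a metric reading against an expected value, with a warning threshold in between. The sketch below illustrates that evaluation logic; it is not Serverless360's implementation, and the rule names and warning semantics are assumptions drawn from the settings above.

```python
from dataclasses import dataclass

@dataclass
class MetricRule:
    name: str       # friendly monitor name
    expected: int   # reading that counts as healthy
    warning: int    # reading that raises a warning instead of an error

def evaluate(rule: MetricRule, actual: int) -> str:
    """Classify a metric reading as success, warning, or error."""
    if actual == rule.expected:
        return "success"
    if actual == rule.warning:
        return "warning"
    return "error"

# The three rules from the scenario (illustrative values).
rules = [
    MetricRule("Function Execution Units (Count)", expected=1, warning=2),
    MetricRule("Logic App Runs Completed", expected=1, warning=0),
    MetricRule("Queue Active Message Count", expected=0, warning=1),
]

print(evaluate(rules[0], 1))  # success
print(evaluate(rules[0], 2))  # warning
```

Each hourly data monitoring run then amounts to reading the current metric values and classifying them against rules like these.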

The rules we set here are quite granular, as we specify values per hourly run. Usually, you would monitor on a daily basis instead: schedule the monitor to run at 23:30 and expect at least 23 runs per day, with anything less resulting in a failure.
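The daily variant of the rule is even simpler: a single threshold check evaluated once a day. A minimal sketch, with the 23:30 schedule and the minimum of 23 runs taken from the example above:

```python
def daily_check(runs_today: int, minimum: int = 23) -> bool:
    # Evaluated once a day (e.g., at 23:30): fewer than `minimum`
    # completed runs counts as a failed monitoring run.
    return runs_today >= minimum

print(daily_check(24))  # True
print(daily_check(20))  # False
```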

Data Monitoring Dashboard

Once you have set up one or more data monitors, you can examine them in the data monitoring dashboard after they have run at least once. In our scenario, the dashboard graphically displays the status of the Logic Apps, Function, and Queue configured for data monitoring.

As the screenshot above shows, the dashboard has a calendar chart area on the left, where you can select a date, and a calendar control on the right showing the runs for that date. Furthermore, you can select an individual run and examine its result.

Each run you select contains information about, in this case, the Logic App responsible for storing data in Cosmos DB. The screenshot above shows a successful run, as Runs Completed is one, not zero or two.

You can set up and monitor all kinds of metrics with data monitoring and thus gain a good overview of the message flow in your cloud-native integration solution. Moreover, with data monitoring, you can enhance the quality of your integration solution.

Other monitoring features

Serverless360 offers various monitoring options for your cloud-native integration solution. Besides data monitoring, you can set alerts for when messages end up in the dead-letter queue and add watches on your Logic Apps and Functions. You can set alarms when certain thresholds are violated, for instance when more than one message is in the dead-letter queue or when the number of active messages exceeds five.
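The two queue thresholds just mentioned can be pictured as a simple evaluation over the queue's current counts. A hypothetical sketch, again not the product's implementation, with the thresholds taken from the example above:

```python
def check_queue_alerts(dead_letter_count: int, active_count: int) -> list:
    """Return alert messages for violated queue thresholds (illustrative only)."""
    alerts = []
    if dead_letter_count > 1:
        alerts.append(
            f"Dead-letter queue holds {dead_letter_count} messages (threshold: 1)"
        )
    if active_count > 5:
        alerts.append(
            f"Active message count {active_count} exceeds threshold of 5"
        )
    return alerts

print(check_queue_alerts(0, 3))  # []
print(len(check_queue_alerts(2, 6)))  # 2
```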


With watches, you can monitor your Logic Apps and Functions continuously and receive a notification when they fail.


When examining these monitoring features and comparing them with the data monitoring feature, a question might pop up: do they not look much alike? The answer is simple: data monitoring is intended as a quality measurement, whereas alerts and watches are there to make you aware of failures quickly.

Wrap up

This blog shows the data monitoring feature of Serverless360 applied to a given cloud-native integration solution. The feature safeguards the health of the solution by routinely checking specific rules, such as completed runs, active messages, or executions. The rules depend on your business scenario and the integration solution supporting it. The rules for Logic Apps, Functions, Queues, and Topics are set on the performance counters provided for each of these Azure platform services. Data monitoring, combined with the other Serverless360 monitoring features, gives you a monitoring capability without the need to access the Azure Portal directly. In a larger support setup, Serverless360 is excellent for first- and second-tier support staff.