Azure Durable Functions vs Logic Apps: How to choose?


Introduction

Azure currently offers two serverless compute services: Azure Logic Apps and Azure Functions. Until recently, one could argue that Azure Functions were code triggered by events, while Logic Apps were event-triggered workflows. However, that changed with the release of Azure Durable Functions, which has very recently reached General Availability. Durable Functions is an extension of Azure Functions that allows you to build stateful and serverless code-based workflows. With Azure Logic Apps, you can create stateful and serverless workflows through a visual designer.

If you are architecting a solution that requires serverless and stateful workflows on Azure, you might be wondering how to choose between Azure Durable Functions and Logic Apps. This post aims to shed some light on how to select the platform that better suits your needs.

Development

For many people, the development experience can be a key factor when choosing one platform over the other. The development experience of the two platforms differs considerably, as described below:

Paradigm
  • Durable Functions: Imperative code.
  • Logic Apps: Declarative code.

Languages
  • Durable Functions: At the time of writing, only C# is officially supported. However, you can make them work with F#, and JavaScript support is currently in preview.
  • Logic Apps: Workflows are implemented using a visual designer on the Azure Portal or Visual Studio. Behind the visual representation of the workflow, there is the JSON-based Workflow Definition Language.

Offline Development
  • Durable Functions: Can be developed offline with the local runtime and Storage emulator.
  • Logic Apps: You need to be online with access to Azure to be able to develop your workflows.

Durable Functions allow you to use imperative code you might already be familiar with, but you still need to understand the constraints of this extension. Logic Apps might require you to learn a new development environment, but one that is relatively straightforward and quite handy for scenarios where less coding is preferred.

Connectivity

Logic Apps is an integration platform and thus offers much broader connectivity than Azure Durable Functions. Some details to consider are described below.

Connectors or Bindings
  • Durable Functions: The list of supported bindings is here. Some of these bindings can trigger a function, or serve as inputs or outputs. The list of bindings is growing, especially for the Functions runtime version 2. Additionally, as Azure Functions can be triggered by Event Grid events, any Event Grid publisher can potentially become a trigger of Azure Functions.
  • Logic Apps: Logic Apps provide more than 200 connectors, and the list just keeps growing. Among these, there are protocol connectors, Azure Services connectors, Microsoft SaaS connectors, and third-party SaaS connectors. Some of these connectors can trigger Logic App workflows, while others support getting and pushing data as part of the workflow.

Custom Connectors
  • Durable Functions: You can create custom input and output bindings for Azure Functions.
  • Logic Apps: Logic Apps allow you to build custom connectors.

Hybrid Connectivity
  • Durable Functions: Azure Functions hosted on an App Service plan (not the consumption plan) support Hybrid Connections. Hybrid Connections provide a TCP tunnel to securely access on-premises systems and services. Additionally, Azure Functions deployed on an App Service plan can be integrated with a VNet, or deployed on a dedicated App Service Environment, to access resources and services on-premises.
  • Logic Apps: Logic Apps offer the On-Premises Data Gateway, which, through an agent installed on-premises, allows you to connect to a list of supported protocols and applications. It’s worth mentioning that the Product Team is currently working on Isolated Logic Apps, which will be deployed on your own VNet and thus will have access to on-premises resources, unlocking many scenarios.

 

Workflow

Both workflow engines are quite different. Even though the underlying implementation is abstracted from us, it’s important to understand how they work internally when architecting enterprise-grade solutions. How each engine works and how some workflow patterns are supported is described below.

Trigger
  • Durable Functions: A workflow instance can be instantiated by any Azure Function implementing the DurableOrchestrationClient.
  • Logic Apps: Can be initiated by the many different triggers offered by the connectors.

Actions being orchestrated
  • Durable Functions: Can orchestrate Activity Functions (with the ActivityTrigger attribute). However, those Activity Functions could call other services using any of the supported bindings. Additionally, orchestrations can call sub-orchestrations. At the time of writing, an orchestration function can only call activity functions that are defined in the same Function App, which could potentially hinder reusability of services.
  • Logic Apps: Many different workflow actions can be orchestrated. Logic Apps workflows can call actions from the more than 200 connectors, workflow steps, other Azure Functions, other Logic Apps, etc.

Flow Control
  • Durable Functions: The workflow’s flow is controlled using the standard code constructs, e.g. conditions, switch statements, loops, try-catch blocks, etc.
  • Logic Apps: You can control the flow with conditional statements, switch statements, loops and scopes, and by controlling the activity chaining with the runAfter property.

Chaining Pattern
  • Durable Functions: Functions can be executed in a sequence, and the outputs of one can be inputs of subsequent ones.
  • Logic Apps: Actions can easily be chained in a workflow. Additionally, the runAfter property allows executing actions based on the status of a previous action or scope.

Fan-Out / Fan-In Pattern
  • Durable Functions: Functions can be executed in parallel, and the workflow can continue when all or any of the branches finish.
  • Logic Apps: You can fan out and fan in actions in a workflow by simply implementing parallel branches, or ForEach loops running in parallel.

Async HTTP APIs and Get Status Pattern
  • Durable Functions: Client applications or services can invoke a Durable Functions orchestration via HTTP APIs asynchronously and later get the orchestration status to learn when the operation completes (see the sketch after this table). Additionally, you can set a custom status value that can be queried by external clients.
  • Logic Apps: Client applications or services could call the Logic Apps Management API to get the instance run status. However, either the client has to have access to this API, or you would need to implement a wrapper for it. A custom status value is not currently supported out-of-the-box; if required, you would need to persist it in a separate store and expose it with a custom API.

Approval Workflow (Human Interaction) Pattern
  • Durable Functions: The Human Interaction (Approval Workflow) pattern can be implemented as described here.
  • Logic Apps: Approval workflows can be implemented with the out-of-the-box connectors, or as custom ones as described here.

Correlation Pattern
  • Durable Functions: The Correlation Pattern can be implemented not only when there is human interaction, but for broader scenarios, in the same way as described above.
  • Logic Apps: The Correlation Pattern can easily be implemented using the webhook action or with Service Bus sessions.

Programmatic instance management
  • Durable Functions: Client applications or services can monitor and terminate instances of Durable Functions orchestrations via the API.
  • Logic Apps: Client applications or services could call the Logic Apps Management API to monitor and terminate instances of Logic App workflows. However, either the client has to have access to this API, or you would need to implement a wrapper.

Shared State across instances
  • Durable Functions: Durable Functions support what they call “eternal orchestrations”, which is a way to implement flexible loops with a state shared across loops without the need to store the complete iteration run history. However, this implementation has some important limitations, and the product team suggests using it only for monitoring scenarios that require flexible recurrence and lifetime management, and where the loss of messages is acceptable.
  • Logic Apps: Logic Apps does not support eternal orchestrations. However, different strategies can be used to implement endless loops with a state shared across instances, e.g. making use of a trigger state, or storing the state in an external store to pass it from one instance to the next in a singleton workflow.

Concurrency Control
  • Durable Functions: Concurrency throttling is supported.
  • Logic Apps: Concurrency control can be configured at the workflow level or loop level.

Lifespan
  • Durable Functions: One instance can run without defined time limits.
  • Logic Apps: One instance of a Logic App can run for up to 90 days.

Error Handling
  • Durable Functions: Implemented with the constructs of the language used in the orchestration.
  • Logic Apps: Retry policies and catch strategies can be implemented.

Orchestration Engine
  • Durable Functions: Orchestration functions and activity functions may be running on different VMs. However, Durable Functions ensures reliable execution of orchestrations. To support this, check-pointing is implemented at each await statement. Additionally, the orchestration replays every time it resumes from an await call, until it reaches the last check-pointed activity, to rebuild the in-memory state of the instance. For high-throughput scenarios, you can enable extended sessions.
  • Logic Apps: In Logic Apps, the runtime engine breaks down the different tasks based on the workflow definition. These tasks are distributed among different workers. The engine makes sure that each task is executed at least once, and that tasks are not executed until their dependencies have finished with the expected status.

Some additional constraints and considerations
  • Durable Functions: The orchestration function has to be implemented with some constraints in mind, such as: the code must be deterministic and non-blocking, async calls can only be made using the DurableOrchestrationContext, and infinite loops must be avoided.
  • Logic Apps: Controlling the workflow execution flow sometimes requires advanced constructs and operations that can be complex to implement in Logic Apps. The Workflow Definition Language offers some functions that we can leverage, but sometimes we need to make use of Azure Functions to perform advanced operations required as part of the workflow. Additionally, you need to consider some limits of Logic Apps.
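To make the Async HTTP APIs row above more concrete, here is a minimal, hedged sketch (Durable Functions 1.x, with a hypothetical orchestration called “ProcessRequest”) of an HTTP-triggered starter that returns the built-in status endpoints to the caller:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class StartOrchestrationHttp
{
    // HTTP starter: kicks off an orchestration and returns 202 Accepted with the
    // management URLs (status query, raise event, terminate) to the client.
    [FunctionName("StartOrchestrationHttp")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient client)
    {
        string input = await req.Content.ReadAsStringAsync();

        // "ProcessRequest" is an illustrative orchestration name, not from the original post.
        string instanceId = await client.StartNewAsync("ProcessRequest", input);

        // Clients can poll the returned statusQueryGetUri to learn when the
        // orchestration completes and to read any custom status value.
        return client.CreateCheckStatusResponse(req, instanceId);
    }
}
```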

Deployment

The deployment of these two platforms also has its differences, as detailed below. 

CI/CD
  • Durable Functions: Builds and deployments can be automated using VSTS build and release pipelines. Additionally, other build and release management tools can be used.
  • Logic Apps: Logic Apps are deployed using ARM Templates, as described here.

Versioning
  • Durable Functions: A versioning strategy is very important in Durable Functions. If you introduce breaking changes in a new version of your workflow, in-flight instances will break and fail. You can find more information and mitigation strategies here.
  • Logic Apps: Logic Apps keep a version history of all workflows saved or deployed. Running instances will continue running based on the version that was active when they started.

Runtime
  • Durable Functions: Azure Functions can not only run on Azure, but can also be deployed on-premises, on Azure Stack, and run on containers.
  • Logic Apps: Logic Apps can only run on Azure.

 

Management and Monitoring

How you manage and monitor your solutions on each platform is quite different. Some of the features are described below.

Tracing and Logging
  • Durable Functions: The orchestration activity is tracked by default in Application Insights. Furthermore, you can implement logging to App Insights.
  • Logic Apps: The run history and trigger history are logged by default. Additionally, you can enable diagnostic logging to send additional details to Log Analytics. You can also make use of trackedProperties to enrich your logging.

Monitoring
  • Durable Functions: To monitor workflow instances, you need to use the Application Insights query language to build your custom queries and dashboards.
  • Logic Apps: The Logic Apps blade and the Log Analytics workspace solution for Logic Apps provide very rich and friendly visual tools for monitoring. Furthermore, you can build your own monitoring dashboards and queries.

Resubmitting
  • Durable Functions: There is no out-of-the-box functionality to resubmit failed messages.
  • Logic Apps: Failed instances can easily be resubmitted from the Logic Apps blades or the Log Analytics workspace.

Pricing

Another important consideration when choosing the right platform is pricing. Even though both platforms offer a serverless model where you only pay for what you use, there are some differences to consider, as described below.

Serverless
  • Durable Functions: In the consumption plan, you pay per second of resource consumption and per number of executions. More details are described here.
  • Logic Apps: For workflows, you pay per action and trigger execution (skipped, failed or succeeded). There is also a marginal cost for storage. In case you need B2B integration, XML schemas and maps, or Liquid templates, you would need to pay for an Integration Account. More details here.

Instance Based
  • Durable Functions: Durable Functions can also be deployed on App Service Plans or App Service Environments, where you pay per instance.
  • Logic Apps: At the moment, there is no option to run Logic Apps on your own dedicated instances. However, this will change in the future.

Wrapping-Up

This post contrasts in detail the capabilities and features of both serverless workflow platforms available on Azure. Which platform is better suited really depends on the functional and non-functional requirements, and also on your preferences. As a wrap-up, we could say that:

Logic Apps are better suited when

  • Building integration solutions and leveraging the very extensive list of connectors would reduce the time-to-market and ease connectivity,
  • Visual tools to manage and troubleshoot workflows are required,
  • It’s ok to run only on Azure, and
  • A visual designer and less coding are preferred.

And Durable Functions are a better fit if

  • The list of available bindings is sufficient to meet the requirements,
  • The logging and troubleshooting capabilities are sufficient, and you can build your custom monitoring tools,
  • You require them to run not only on Azure, but on Azure Stack or Containers, and
  • You prefer to have all the power and flexibility of a robust programming language.

It’s also worth mentioning that in most cases the operating costs of Logic Apps tend to be higher than those of Durable Functions, but that depends on the particular case. And for enterprise-grade solutions, you should not decide on a platform based on price only; you have to consider all the requirements and the value provided by the platform.

Having said all this, you can always mix and match Logic Apps and Azure Functions in the same solution so you can get the best of both worlds. Hopefully this post has given you enough information to better choose the platform for your next cloud solution.

Happy clouding!

Cross-posted on Mexia’s Blog. Follow me on @pacodelacruz.


Azure Durable Functions Pattern: Approval Workflow with Slack


Introduction

Recently, I published a post about implementing an Approval Workflow on Azure Durable Functions with SendGrid. In essence, this post is not very different from that one. However, I wanted to demonstrate the same pattern on Azure Durable Functions, but now using Slack as the means of approval. My aim is to show how easy it is to implement this pattern by using a RESTful API instead of an Azure Functions binding. What you see here could easily be implemented with your own custom APIs as well :).

Scenario

In my previous post, I showed how Furry Models Australia streamlined an approval process for aspiring cats to join the exclusive model agency by implementing a serverless solution on Azure Durable Functions and SendGrid. Now, after a great success, they’ve launched a new campaign targeting rabbits. However, for this campaign they need some customisation. The (rabbit) managers of this campaign have started to collaborate internally with Slack instead of email. Their aim is to significantly improve their current approval process, based on phone and pigeon post, by having an automated serverless workflow which leverages Slack as their internal messaging platform.


Pre-requisites

To build this solution, we need:

  • Slack
    • Workspace: In case you don’t have one, you would need to create a workspace on Slack, and you will need permissions to manage apps in the workspace.
    • Channel: On that workspace, you need to create a channel where all approval requests will be sent to.
    • App: Once you have admin privileges on your Slack workspace, you should create a Slack App.
    • Incoming Webhook: On your Slack app, you would need to activate incoming webhooks and then activate a new webhook. The incoming webhook will post messages to the channel you have just created. For that, you must authorise the app to post messages to the channel. Once you have authorised it, you should be able to get the Webhook URL. You will need this URL to configure your Durable Function to post an approval request message every time an application has been received.
    • Message Template: To be able to send interactive button messages to Slack we need to have the appropriate message template.
    • Interactive Components: The webhook configured above enables you to post messages to Slack. Now you need a way to get the response from Slack, for this you can use interactive message buttons. To configure the interactive message button, you must provide a request URL. This request URL will be the URL of the HttpTrigger Azure function that will handle the approval selection.
  • Azure Storage Account: The solution requires a Storage Account with 3 blob containers: requests, approved, and rejected. The requests container should have public access level so blobs can be viewed without a SAS token. For your own solution, you could make this more secure.

Solution Overview

The figure below shows an overview of the solution we will build based on Durable Functions. As you can see, the workflow is very similar to the one implemented previously. Pictures of the aspiring rabbits are to be dropped in an Azure storage account blob container called requests. At the end of the approval workflow, pictures should be moved to the approved or rejected blob containers accordingly.

[Figure: Solution overview]

The steps of the process are described as follows:

  1. The process is triggered by an Azure Function with the BlobTrigger input binding monitoring the requests blob container. This function also uses the DurableOrchestrationClient to instantiate a Durable Functions orchestration.
  2. The DurableOrchestrationClient starts the orchestration.
  3. Then, the Durable Function orchestration calls another function with the ActivityTrigger input binding, which is in charge of sending the approval request to Slack as a Slack interactive message.
  4. The interactive message is posted on Slack. This interactive message includes a callbackId field in which we send the orchestration instance id.
  5. Then, in the orchestration, a timer is created so that the approval workflow does not run forever, and in case no approval is received before a timeout, the request is rejected.
  6. The (rabbit) user receives the interactive message on Slack, and decides whether the aspiring rabbit deserves to join Furry Models, by clicking either the Approve or Reject button. The Slack interactive message button will send the response to the configured URL on the Interactive Component of the Slack App (this is the URL of the HttpTrigger function which handles the Slack approval response). The response contains the callbackId field which will allow the correlation in the next step.
  7. The HttpTrigger function receives the response which contains the selection and the callbackId. This function gets the orchestration instance id from the callbackId and checks the status of that instance; if it’s not running, it returns an error message to the user. If it’s running, it raises an event to the corresponding orchestration instance.
  8. The corresponding orchestration instance receives the external event.
  9. The workflow continues when the external event is received or when the timer finishes; whatever happens first. If the timer finishes before a selection is received, the application is automatically rejected.
  10. The orchestration calls another ActivityTrigger function to move the blob to the corresponding container (approved or rejected).
  11. The orchestration finishes.

A sample of the Slack interactive message is shown below.

[Figure: Sample Slack interactive message]

Then, when the user clicks on any of the buttons, it will call the HttpTrigger function described in step 7 above. Depending on the selection and the status of the orchestration, the user will receive the corresponding response:

[Figure: Sample responses]

The Solution

The implemented solution code can be found in this GitHub repo. I’ve used the Azure Functions Runtime v2. I will highlight some relevant bits of the code below, and I hope that the code is self-explanatory 😉:

TriggerApprovalByBlob.cs

This BlobTrigger function is triggered when a blob is created in a blob container and starts the Durable Functions orchestration (Step 1 above).
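The actual implementation is in the repo; the fragment below is only a rough sketch of the shape such a starter can take (binding paths and function names are assumptions based on the description above):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TriggerApprovalByBlob
{
    // Steps 1-2: a picture dropped in the 'requests' container starts the orchestration.
    [FunctionName("TriggerApprovalByBlob")]
    public static async Task Run(
        [BlobTrigger("requests/{name}")] Stream requestBlob,
        string name,
        [OrchestrationClient] DurableOrchestrationClient client,
        ILogger log)
    {
        // The blob name is passed as the orchestration input so later activities
        // know which picture to move to 'approved' or 'rejected'.
        string instanceId = await client.StartNewAsync("OrchestrateRequestApproval", name);
        log.LogInformation($"Started orchestration '{instanceId}' for blob '{name}'.");
    }
}
```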

OrchestrateRequestApproval.cs

This is the Durable Function orchestration which handles the workflow and is started in step 2 above.
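As a hedged sketch (not the repo’s exact code), the orchestration combines an activity call, a durable timer and an external event, covering steps 3, 5, 8, 9 and 10; the activity and event names, as well as the timeout, are illustrative:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class OrchestrateRequestApproval
{
    [FunctionName("OrchestrateRequestApproval")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        string blobName = context.GetInput<string>();

        // Step 3: send the approval request to Slack via an activity function.
        await context.CallActivityAsync("SendApprovalRequestViaSlack", blobName);

        using (var cts = new CancellationTokenSource())
        {
            // Step 5: create a timer so the workflow does not wait forever (timeout assumed).
            DateTime expiration = context.CurrentUtcDateTime.AddHours(1);
            Task timeoutTask = context.CreateTimer(expiration, cts.Token);

            // Steps 8-9: wait for the Slack selection or the timer, whichever happens first.
            Task<string> approvalTask = context.WaitForExternalEvent<string>("ReceivedApprovalResponse");
            Task winner = await Task.WhenAny(approvalTask, timeoutTask);

            bool approved = winner == approvalTask && approvalTask.Result == "Approved";
            if (winner == approvalTask)
            {
                cts.Cancel(); // cancel the pending timer once a response has arrived
            }

            // Step 10: move the picture to the corresponding container.
            await context.CallActivityAsync("MoveBlob",
                new[] { blobName, approved ? "approved" : "rejected" });
        }
    }
}
```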

SendApprovalRequestViaSlack.cs

ActivityTrigger function which sends the approval request via Slack as an Interactive Message (Step 3 above).

ProcessSlackApprovals.cs

HttpTrigger function that handles the response of the interactive messages from Slack (Step 7 above).
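A simplified, assumed shape of that handler (the real one in the repo parses the full Slack interactive message payload) could be:

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json;

public static class ProcessSlackApprovals
{
    [FunctionName("ProcessSlackApprovals")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [OrchestrationClient] DurableOrchestrationClient client)
    {
        // Slack posts the interactive message response as a form field named 'payload'.
        string form = await new StreamReader(req.Body).ReadToEndAsync();   // "payload=%7B...%7D"
        string json = WebUtility.UrlDecode(form.Substring("payload=".Length));
        dynamic payload = JsonConvert.DeserializeObject(json);

        string instanceId = payload.callback_id;       // the orchestration instance id sent in step 4
        string selection = payload.actions[0].value;   // e.g. "Approved" or "Rejected"

        // Step 7: check the orchestration status and correlate the response back to it.
        var status = await client.GetStatusAsync(instanceId);
        if (status == null || status.RuntimeStatus != OrchestrationRuntimeStatus.Running)
        {
            return new OkObjectResult("Sorry, this application has already been processed or has expired.");
        }

        await client.RaiseEventAsync(instanceId, "ReceivedApprovalResponse", selection);
        return new OkObjectResult($"Thanks! The application has been {selection.ToLowerInvariant()}.");
    }
}
```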

MoveBlob.cs

ActivityTrigger function that moves the blob to the corresponding container (Step 10 above).

local.settings.json

These are the settings which configure the behaviour of the solution, including the storage account connection strings, the Slack incoming webhook URL, templates for the interactive message, among others.

You would need to implement these as app settings when deploying to Azure.

Wrapping up

In this post, I’ve shown how to implement an Approval Workflow (Human Interaction pattern) on Azure Durable Functions with Slack. On the way, we’ve also seen how to create Slack Apps with interactive messages. What you read here can easily be implemented using your own custom APIs. What we’ve covered should allow you to build serverless approval workflows on Azure with different means of approval. I hope you’ve found the posts of this series useful.

Happy clouding!

Cross-posted on Mexia’s Blog. Follow me on @pacodelacruz.

Azure Durable Functions Pattern: Approval Workflow with SendGrid

Introduction

Durable Functions is a new (in preview at the time of writing) and very interesting extension of Azure Functions that allows you to build stateful and serverless code-based workflows. The Durable Functions extension abstracts all the state management, queueing, and checkpoint implementation commonly required for an orchestration engine. Thus, you just need to focus on your business logic without worrying much about the underlying complexities. Thanks to this extension, now you can:

  1. Implement long-running serverless code-based services beyond the current Azure Function limitation of 10 minutes (as long as you can break down your process into small nano-services which can be orchestrated);
  2. Chain Azure functions, i.e., call one function after the other and pass the output of the first one as an input to the next one (Function chaining pattern);
  3. Execute several functions asynchronously and then continue the workflow when any or all of the asynchronous tasks are completed (Fan-out and Fan-in pattern);
  4. Get the status of a long-running workflow from external clients (Async HTTP APIs Pattern);
  5. Implement the correlation identifier pattern to enable human interaction processes, such as an approval workflow (Human Interaction Pattern) and;
  6. Implement a flexible recurring process with lifetime management (Monitoring Pattern).

It’s worth noting that Azure Durable Functions is not the only way to implement stateful workflows in a serverless manner on Azure. Azure Logic Apps is another awesome platform, core component of the Microsoft Azure iPaaS, that allows you to build serverless and stateful workflows using a designer. In a previous post, I showed how to implement the approval workflow pattern on Logic Apps via SMS messages leveraging Twilio.

In this post, I will show how to implement the Human Interaction Pattern on Azure Durable Functions with SendGrid. You will see on the way that this implementation requires other Durable Functions patterns, such as, function chaining, fan-out and fan-in, and optionally the Async HTTP API Pattern.

Scenario

To illustrate this pattern on Durable Functions, I will be using a fictitious cat model agency called Furry Models Australia. Furry Models is running a campaign to attract the most glamorous, attractive, and captivating cats in Australia. They will be receiving photos of all aspiring cats and they need a streamlined approval process to accept or reject those applications. Furry Models want to implement this in an agile manner with a short time-to-market and with a very cost-effective solution. They know that serverless is the way to go!


Pre-requisites

To build this solution, we will need:

  • SendGrid account. Given that Azure Functions provides an output binding for SendGrid to send emails, we will be relying on this service. In case you want to implement this solution, you would need a SendGrid account. Once you sign up, you need to get your API Key, which is required for the Azure binding. You can get more information about the SendGrid binding for Azure Functions and how to use it here.
  • An Azure Storage Account: The solution requires a Storage Account with 3 blob containers: requests, approved, and rejected. The requests container should have public access level so blobs can be viewed without a SAS token. For your own solution, you might want to make this more secure.

Solution Overview

The picture below shows an overview of the approval workflow solution I’ve built based on Durable Functions.

Pictures of the aspiring cats are to be dropped in an Azure storage blob container called requests. At the end of the approval workflow, pictures should be moved to the approved or rejected blob containers accordingly.

[Figure: Solution overview]

The steps of the process are described as follows:

  1. The process is triggered by an Azure Function with the BlobTrigger input binding monitoring the requests blob container. This function also uses the DurableOrchestrationClient to instantiate a Durable Functions orchestration.
  2. The DurableOrchestrationClient starts a new instance of the orchestration.
  3. Then, the Durable Function orchestration calls another function with the ActivityTrigger input binding, which is in charge of sending the approval request email using the SendGrid output binding.
  4. SendGrid sends the approval request email to the (cat) user.
  5. Then, in the orchestration, a timer is created so that the approval workflow does not run forever; in case no approval is received before the timer finishes, the request is rejected.
  6. The (cat) user receives the email, and decides whether the aspiring cat deserves to join Furry Models or not, by clicking the Approve or Reject button. Each button has a link to an HttpTrigger Azure Function which expects the selection and the orchestration instanceId as query params.
  7. The HttpTrigger function receives the selection and the orchestration instanceId. The function checks the status of the orchestration instance; if it’s not running, it returns an error message to the user. If it’s running, it raises an event to the corresponding orchestration instance.
  8. The corresponding orchestration instance receives the external event.
  9. The workflow continues when the external event is received or when the timer finishes; whatever happens first. If the timer finishes before a selection is received, the application is automatically rejected.
  10. The orchestration calls another ActivityTrigger function to move the blob to the corresponding container (approved or rejected).
  11. The orchestration finishes.

A sample of the email implemented is shown below.

[Figure: Sample approval email]

The Solution

The implemented solution code can be found in this GitHub repo. I’ve used the Azure Functions Runtime v2. I will highlight some relevant bits of the code below, and I hope that the code is self-explanatory 😉:

TriggerApprovalByBlob.cs

This BlobTrigger function is triggered when a blob is created in a blob container and starts the Durable Functions orchestration (Step 1 above).

OrchestrateRequestApproval.cs

This is the Durable Function orchestration which handles the workflow and is started in step 2 above.

SendApprovalRequestViaEmail.cs

ActivityTrigger function which sends the approval request via email with the SendGrid output binding (Step 3 above).
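Again as a hedged sketch rather than the repo’s exact code, the activity pairs the ActivityTrigger with the SendGrid output binding (the “SendGridApiKey” setting name, addresses and content are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using SendGrid.Helpers.Mail;

public static class SendApprovalRequestViaEmail
{
    // Step 3: build the approval email and hand it to the SendGrid output binding.
    [FunctionName("SendApprovalRequestViaEmail")]
    public static async Task Run(
        [ActivityTrigger] string blobName,
        [SendGrid(ApiKey = "SendGridApiKey")] IAsyncCollector<SendGridMessage> messages)
    {
        var message = new SendGridMessage();
        message.AddTo("approvers@furrymodels.example");                    // assumed recipient
        message.SetFrom(new EmailAddress("noreply@furrymodels.example"));  // assumed sender
        message.SetSubject($"Aspiring cat application: {blobName}");

        // The Approve/Reject links would point to the ProcessHttpGetApprovals function
        // and carry the selection and the orchestration instance id as query params.
        message.AddContent("text/html", "<a href=\"...\">Approve</a> | <a href=\"...\">Reject</a>");

        await messages.AddAsync(message);
    }
}
```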

ProcessHttpGetApprovals.cs

HttpTrigger function that handles the Http Get request initiated by the user selection (click) on the email (Step 7 above).

MoveBlob.cs

ActivityTrigger function that moves the blob to the corresponding container (Step 10 above).

local.settings.json

These are the settings which configure the behaviour of the solution, including the storage account connection strings, the SendGrid API key, templates for the email, among others. You would need to implement these as app settings when deploying to Azure.

Wrapping up

In this post, I’ve shown how to implement an Approval Workflow (Human Interaction pattern) on Azure Durable Functions with SendGrid. Whether you wanted to learn more about Durable Functions, to implement a serverless approval workflow or you run a cat model agency, I hope you have found it useful 🙂 Please feel free to ask any questions or add your comments below.

Happy clouding!

Cross-posted on Mexia’s Blog. Follow me on @pacodelacruz.

Ten-point checklist when migrating from BizTalk to Azure iPaaS (Logic Apps)

Summary

Businesses are evolving increasingly fast, and IT can be an enabler or a deterrent of this evolution. IT changes should always bring business value and never compromise core business needs.

As part of becoming more agile, a common concern for many of our customers is the transition from on-premises integration platforms to a cloud or hybrid solution, in particular migrating their BizTalk Server environments to the Microsoft Azure Integration Platform-as-a-Service (iPaaS), which is based on Azure Logic Apps.

There are a number of reasons you might want to consider migrating your BizTalk solutions to the Microsoft Azure iPaaS, including:

  1. Enabling or supporting the digital transformation journey. Azure services, like Logic Apps, Azure Functions and API Management allow you to expose and consume modern APIs, which are key enablers of digital transformation.
  2. Reducing your OpEx. Significantly reduce your IT operation and licensing costs by leveraging PaaS and serverless components.
  3. Gaining Agility: Azure allows you to deliver business value in weeks, instead of months. Not only because of the tooling but also because of the capabilities of the platform and availability of hundreds of connectors.
  4. Unlocking new business solutions: With Azure, new business solutions are possible. From asynchronous messaging of Service Bus to eventing of Event Grid, to smart solutions with Stream Analytics, Cognitive Services and Machine Learning, to monetisation of APIs with API Management, to advanced monitoring and alerting with OMS and Log Analytics, to name a few.

With all of that said, supporting your digital transformation leveraging integration can sometimes be challenging. Therefore, it’s imperative you start early, plan thoroughly and implement well to avoid unnecessary complications.

To get the full article, download the document from my employer’s site with the ten-point checklist to get your BizTalk to Azure iPaaS transition in good shape.

 

Externalising Business Rules on Azure Logic Apps using Liquid Templates

Introduction

In Azure Logic Apps workflows, you can implement conditions and switch cases to control the flow based on runtime inputs and outputs. This functionality is quite useful, and in many cases, can be used to implement the business rules required. However, those business rules are inherent to the workflow, and when business rules change often, they would end up being hard to maintain.

BizTalk Server, which is the on-premises integration platform from Microsoft, provides a Business Rules Engine that allows you to define, build, manage and maintain business rules in a way that the integration orchestrations can be abstracted from changes to the business rules. Unfortunately, at the time of writing, Logic Apps does not offer such a feature.

Recently, Microsoft has released support for Liquid templates to transform JSON and XML objects. These transformations are based on DotLiquid, the .NET implementation of Shopify’s Liquid template language.

If we wanted to externalise business rules from Logic Apps workflows, so that when they change we don’t need to update the workflow, we have two options:

  • we could use Azure Functions and implement the business rules as .NET code (NRules is an interesting open-source library for this); or, now,
  • we could use Liquid templates.

In this post, I’ll show how to implement business rules in Logic Apps using Liquid Templates.

Scenario

 

Farm to Table, the fresh produce drone delivery company, want to implement some business rules to be applied when receiving orders. All the order processing is currently handled by Logic Apps. These business rules can change over time, and they want to be able to update them with minimal effort (think of development, testing and deployment). They have two main business rules: 1) defining whether an order must be manually approved, and 2) applying discounts based on promotions. Promotion rules change much more often than approval rules. Because these rules change often, they want to externalise them from the Logic App workflow.

They want to implement the following business rules:

  • Manual Approval of Orders: Based on the current capacity of the drone fleet, orders with a total weight greater than 18 kg must be manually approved. Additionally, orders with a subtotal of less than $10 AUD require manual approval as well.
  • Discount: currently, there is a promotion for orders placed on the mobile app. Other channels don’t have a discount.
    • All orders coming from the mobile app would get at least 5% discount.
    • Orders with a subtotal of $50 AUD or more and with a total weight of less than 10 kg would get a higher discount of at least 7%. Additionally,
      • Orders meeting the criteria above to be delivered to Zone 1  (Zip codes: 3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008), get a 15% discount
      • Orders meeting the criteria above to be delivered to Zone 2 (Zip codes: 3205, 3206, 3207), get a 10% discount.

Solution

As you can imagine, implementing these business rules as part of the Logic App workflow would add a lot of complexity, let alone the effort required when business rules, like promotions, change.

To implement these business rules, we will create a Liquid template, which will be abstracted from the Logic App workflow. This will allow us to update the business rules as required without impacting the workflow, thus we can reduce the time required for development, testing and deployment.

Orders

Orders received by Farm to Table follow the structure shown below
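The original post embeds the actual sample; the snippet below is only an assumed, minimal illustration of the fields the business rules need (channel, subtotal, total weight and delivery post code):

```json
{
  "orderNumber": "FT-1234",
  "channel": "MobileApp",
  "subTotal": 55.00,
  "totalWeight": 8.5,
  "deliveryAddress": {
    "postCode": "3000"
  }
}
```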

Business Rules as a Liquid Template (version 1)

We will implement the business rules as a Liquid template. If you want to get familiar with Liquid templates, I would recommend going to the official documentation. It’s worth noting that, because the Logic Apps implementation of Liquid is based on DotLiquid, not all the filters are supported or work exactly the same. For instance, filters are called with the first letter in uppercase, as opposed to lowercase in the Ruby implementation. Additionally, you need to be familiar with creating Integration Accounts and adding Liquid templates to them, as described here.

Below is the first iteration of the business rules implemented in Liquid. As you can see, I’m creating two arrays of zip codes for the zone conditionals. Everything else should be self-explanatory.
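The full template is embedded in the original post; the fragment below is only a hedged sketch of the approach, using the assumed field names from the sample order above and the thresholds listed in the scenario (the input JSON is referenced as content in Logic Apps Liquid maps):

```liquid
{% assign zone1 = "3000,3001,3002,3003,3004,3005,3006,3007,3008" | Split: "," %}
{% assign zone2 = "3205,3206,3207" | Split: "," %}
{
  "requiresApproval": {% if content.totalWeight > 18 or content.subTotal < 10 %}true{% else %}false{% endif %},
  "discount": {% if content.channel == "MobileApp" %}
                 {% if content.subTotal >= 50 and content.totalWeight < 10 %}
                   {% if zone1 contains content.deliveryAddress.postCode %}0.15
                   {% elsif zone2 contains content.deliveryAddress.postCode %}0.10
                   {% else %}0.07{% endif %}
                 {% else %}0.05{% endif %}
               {% else %}0{% endif %}
}
```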

This Liquid template should output a response like the one shown below:
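For an order like the assumed sample above, the output would look something like this:

```json
{
  "requiresApproval": false,
  "discount": 0.15
}
```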

Having this response will allow us to use the requiresApproval boolean property, and the calculated discount later in the workflow, with all the business rules externalised from the workflow.

Business Rules as a Liquid Template (version 2)

As part of the requirements, Farm to Table wants to be able to enable and disable discounts very easily. They also want to be able to easily change the channels to which discounts apply, the zones, and the thresholds for discount levels and approvals. To do that, we can refactor the Liquid template to create a vocabulary of easy-to-maintain variables.

A sample of the refactored business rules is shown below
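The refactored template is also embedded in the original post; the sketch below only illustrates the idea of a vocabulary of easy-to-maintain variables (names and values are assumptions):

```liquid
{% assign approvalMaxWeight = 18 %}
{% assign approvalMinSubTotal = 10 %}
{% assign discountChannels = "MobileApp" | Split: "," %}
{% assign baseDiscount = 0.05 %}
{% assign higherDiscountMinSubTotal = 50 %}
{% assign higherDiscountMaxWeight = 10 %}
{% assign zone1 = "3000,3001,3002,3003,3004,3005,3006,3007,3008" | Split: "," %}
{% assign zone1Discount = 0.15 %}
{% assign zone2 = "3205,3206,3207" | Split: "," %}
{% assign zone2Discount = 0.10 %}
{% comment %} The rules then reference these variables instead of hard-coded values. {% endcomment %}
```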

This template will create exactly the same output, but now it should be easier to maintain.

The Workflow

The Order processing Logic App workflow is instantiated by an HTTP call. Then we need to apply business rules to know the discount and whether the order must be manually approved. To do so, we will use the Transform JSON to JSON action to invoke the Liquid template with the business rules. We pass the order as the content parameter and the name of the Liquid template as the map. The Liquid template must be already in the Integration Account and the Integration account must be assigned to the Logic App workflow. Then the workflow will proceed with further processing, including the manual approval when required. A snapshot of the first part of the workflow is shown as follows.

[Figure: Original order processing workflow]

Implementing a generic Business Rules Engine

Now, we know how to call externalised business rules using Liquid templates within Logic Apps workflows.

However, you could even implement a serverless Business Rules Engine using these tools, to be used not only from Logic Apps but other apps. This could be handy when business rules change often, and you prefer not to implement them as part of the application code.

To do so, we could deploy all our business rules as Liquid templates in an Integration Account, and then create an Http triggered Logic App workflow, which receives as body the JSON document for which business rules are to be applied. Additionally, this Logic App will need to receive the name of the Liquid template to be invoked.

In my Logic App, I am accepting the name of the Liquid template as an HTTP header. I named the custom header X-BusinessRulesName, and I use this header value to dynamically invoke the corresponding Liquid template in a Transform JSON to JSON action, called Apply Business Rules. This Logic App will return the output of the business rule as the response body.

The Logic App workflow is shown below, including the code behind the Apply Business Rules (Transform JSON to JSON) action.

[Figure: Business Rules Engine workflow]

This Http Logic App, together with business rules as Liquid templates, can be used as a Business Rules Engine. Quite handy, don’t you think? 🙂

Considerations

There are two considerations to be taken into account when using Liquid Templates

  • Liquid templates, particularly the DotLiquid implementation, might not have all the control flow tags and filters required for your business rules, so you need to know what the limitations are.
  • Liquid templates are to be stored in an Integration Account. As described here, you can use an included (free) Integration Account with no additional cost to host your Liquid templates. However, this one does not provide an SLA. For production workloads, a Basic or Standard Integration Account is suggested, and there is a cost associated with it.

Wrapping Up

In this post, I’ve shown how to implement externalised business rules on Logic Apps using Liquid templates. Additionally, we’ve seen how we can implement a generic Business Rules Engine using this same approach and dynamically invoking a particular business rule Liquid template. I hope you’ve found this post handy. Feel free to add your comments or ask any questions below.

Happy clouding!

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

Publishing Custom Queries of Logic Apps Execution Logs

In a previous post, I showed how to implement Business Activity Monitoring for Logic Apps. However, sometimes developers, ops, or business users want to query execution logs to get information about the processing of business messages. Whether for troubleshooting or auditing, there are some questions these personas might have, like:

  • When was a business document processed?
  • What was the content of a received document?
  • How was that message processed?

As we saw in that post, we can send diagnostic log information and custom tracked properties to Azure Log Analytics. We also saw how easy it is to query those logs to get information about Logic Apps execution and messages processed. Now the question is, how can we publish those custom queries so that different users can make use of them? In this post, I’ll show one easy way to do that.

1. Tracking the relevant custom properties and sending data to Log Analytics.

The first thing to do is to track the relevant custom properties we need for our queries as tracked properties in our Logic App workflow. Then you need to configure the Logic App workflow to send diagnostics information to Azure Log Analytics. You can follow the instructions on my previous post to perform those steps.

2. Creating the queries to get the information our users need

Once the information is being logged on Log Analytics, we need to create the queries to give the users the information they need. For that, first we need to open the Azure Log Analytics Portal. To open the portal we need to

  • Go to the Log Analytics Resource on the Azure Portal
  • Go to Log Search
  • Click on Analytics

And now you are ready to create your own queries.

[Figure: Opening the Log Analytics portal]

Based on the tracked properties of the Logic App workflow shown in my previous post, I wrote this query to get all orders processed in the selected time range. This query returns the order number, total, date, channel, customer Id, the name of the Logic App workflow that processed the message, and the workflow run id. These last two columns are quite handy for troubleshooting.
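The saved query itself is shown in the screenshot below; a hedged approximation, assuming the tracked property names used in the previous post and the default AzureDiagnostics columns for Logic Apps, looks like this:

```kusto
AzureDiagnostics
| where Category == "WorkflowRuntime"
| where isnotempty(trackedProperties_orderNumber_s)
| project TimeGenerated,
          trackedProperties_orderNumber_s,
          trackedProperties_orderTotal_s,
          trackedProperties_channel_s,
          trackedProperties_customerId_s,
          resource_workflowName_s,
          resource_runId_s
| order by TimeGenerated desc
```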

[Figure: Query and results]

3. Saving the custom Azure Log Analytics query

Once we have the query ready, we can save it and export it, so later we can publish it. To do that, we follow the steps below

  • Click the Save button
  • Give a name and category to the query. The category is quite useful for searching among all saved queries.
  • Then we click the Export button
  • And select the Share a Link to Query option, so the link to the query is saved in the clipboard.

[Figure: Save and export the query]

4. Publishing the custom Azure Log Analytics query to the users

Once we have the link to the query, we can publish it in the same dashboard we created for our BAM charts, described in my previous post. We need to:

  • Edit the shared Azure Dashboard.
  • Add the Markdown tile.
  • Add the markdown text which contains the link to the query created above.

Now the users will have all the charts and custom queries they need in one single place!

[Figure: Add the query to the dashboard]

Making it easier for users to get to the workflow run logs of a particular business message on Logic Apps.

Logic Apps provide a very detailed view of the execution of the workflows. However, I’ve been asked many times to make it easier for users to get the run details of a particular business message. Here is a tip on how to do it.

First, we need to create a query to get the runId of the workflow instance that processed the message. I created this query to get those details for the orders submitted by a particular user.

  • Once we have that query, we publish it to the same markdown tile in our dashboard.
  • We also add the link to the workflow Azure resource to the same dashboard tile.

Now users can query the orders submitted by a user, get the workflow run id, and get the workflow run details in very few clicks.

[Figure: Query Logic App instances by business id]

Wrapping-up

In this post, we’ve seen how to create and publish custom queries of Logic Apps execution logs. We’ve also seen how to make it easier for users to get the workflow run details of the processing of a particular business message. Now you should be ready to start creating and publishing your own custom queries and creating amazing monitoring and tracking dashboards for your Logic Apps solutions.

I hope you’ve got some useful tips from this post and you’ve enjoyed it. Feel free to leave your questions or comments below.

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

Business Activity Monitoring on Azure Logic Apps with Azure Log Analytics

Introduction

Azure Logic Apps provide built-in monitoring tools that allow you to check the run history (including all inputs and outputs of triggers and actions), trigger history, status, performance, etc. Additionally, you can enable diagnostic logging on your Logic Apps and send all these runtime details and events to Azure Log Analytics. You can also install the Logic Apps Management Solution on OMS, which gives you a very rich aggregated view and charts of all your logic apps that are being monitored.

All these tools are great for developers or system administrators, who want to monitor and troubleshoot the execution of the workflows. However, sometimes we need to track and monitor more business-related information. Additionally, it’s quite common that business users want to monitor, with a business perspective, what’s happening at the integration layer.

In this post, I will show how to implement tracking capabilities for business-related information and how to create a Business Activity Monitoring (BAM) dashboard for Logic Apps.

Scenario

In a previous post, I introduced a fictitious company called “Farm to Table”, which provides fresh produce drone delivery. This company has been leveraging Logic Apps to implement their business processes, which integrate with multiple systems on the cloud. As part of their requirements they need to monitor the business activity flowing through this integration solution.

“Farm to Table” want to be able to monitor the orders they are receiving per channel. At the moment, customers can place orders via SMS, a web online store, and a mobile app. They also want to be able to track orders placed using customer number and order number.

Solution

Prerequisites

To be able to track and monitor business properties and activity on Logic Apps, there are some prerequisites:

  1. Azure Log Analytics workspace. We require an Azure Log Analytics workspace. We can use a workspace previously created or create a new one following the steps described here.
  2. Enable diagnostic logging to Azure Log Analytics on the Logic App which processes the messages we want to track, following the instructions detailed here.
  3. Install the Logic App Management solution for OMS on the Azure Log Analytics.

Adding Tracking Properties to the Logic App Workflow

Once diagnostic logging is enabled, Logic Apps tracks, by default, all workflow instances and actions on Azure Log Analytics. This tracking can very easily be extended using tracked properties on the workflow actions. In the tracked properties, we can include business-related data; for example, in this scenario we could track the customer id, order number, order total, and channel.

“Farm To Table” has implemented an HTTP-triggered Logic App workflow that receives orders from different channels, validates them, maps them to a canonical model, enriches the message, and then puts it into a Service Bus queue. The order canonical model processed by this workflow follows the schema of the instance below:
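The actual instance is embedded in the original post; the snippet below is just an assumed, minimal illustration of the properties tracked later (customer id, order number, total, date and channel):

```json
{
  "orderNumber": "FT-1234",
  "customerId": "C-0042",
  "orderTotal": 55.00,
  "date": "2018-05-20T09:30:00Z",
  "channel": "MobileApp"
}
```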

To track business properties of the orders, we will add the tracked properties to the action that sends the message to Service Bus. It’s noteworthy that when we use tracked properties within a Logic App action, we can only use the trigger input and the action’s inputs and outputs.

In the Logic App action that sends the message into a Service Bus queue, we will add the tracked properties to include the customer id, order number, order total, date, and channel. I’m also adding a flag to simplify my queries, but that’s optional. The code below shows the trackedProperties section added to our workflow action.
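The post embeds the actual action definition; the fragment below is only an assumed sketch of what the trackedProperties section on the Service Bus action could look like (the action name, inputs and expression paths depend on your own workflow and payload):

```json
"Send_message_to_Service_Bus_queue": {
  "type": "ApiConnection",
  "inputs": { },
  "runAfter": { },
  "trackedProperties": {
    "customerId": "@{triggerBody()?['customerId']}",
    "orderNumber": "@{triggerBody()?['orderNumber']}",
    "orderTotal": "@{triggerBody()?['orderTotal']}",
    "date": "@{triggerBody()?['date']}",
    "channel": "@{triggerBody()?['channel']}",
    "flag": "OrderReceived"
  }
}
```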

Once we have started tracking those properties and we have information already logged on Azure Log Analytics, we can start querying and creating charts for our own Business Activity Monitoring Dashboard.

Querying Azure Log Analytics

Let’s start querying what we are tracking from the Logic Apps on Log Analytics. Bear in mind that there is a delay between the execution of the Logic App and the log being available on Log Analytics. Based on my experience, it usually takes anywhere between 2 and 10 minutes. You can find detailed documentation on the Log Analytics query language here.

The query below returns the custom data I’m tracking on the Logic App workflow. When building your queries, it’s worth noting that

  • Workflow actions are being logged using the AzureDiagnostics Log Type and with the WorkflowRuntime category.
  • Logic Apps prepend the prefix “trackedProperties_” to each property and append a suffix to declare its type.
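With those two conventions in mind, a hedged approximation of the base query (the column names are assumptions following the prefix and suffix rules above) is:

```kusto
AzureDiagnostics
| where Category == "WorkflowRuntime"
| where trackedProperties_flag_s == "OrderReceived"
| project TimeGenerated,
          trackedProperties_customerId_s,
          trackedProperties_orderNumber_s,
          trackedProperties_orderTotal_s,
          trackedProperties_channel_s
```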

This query should return a result set as the one below:

Additionally, we can add filters to our queries, for instance, to get all orders by the CustomerId, we could use a query as follows:
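Again as an assumed sketch:

```kusto
AzureDiagnostics
| where Category == "WorkflowRuntime" and trackedProperties_flag_s == "OrderReceived"
| where trackedProperties_customerId_s == "C-0042"
| project TimeGenerated, trackedProperties_orderNumber_s, trackedProperties_orderTotal_s, trackedProperties_channel_s
```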

Creating a Monitoring Dashboard.

Previously, Microsoft suggested creating OMS custom views and queries for this purpose. The steps to create a custom OMS dashboard for Logic Apps are described here. However, after the upgrade of OMS to the new Log Analytics query language (previously known as the Kusto query language), the recommended approach is now to use the new Azure Log Analytics portal, create queries and charts, and pin them to a shared Azure dashboard. If you have created both custom OMS dashboards and custom Log Analytics Azure dashboards, you will probably agree that shared Azure dashboards and Log Analytics charts are much more user-friendly.

The steps to create and share an Azure dashboard that includes Log Analytics data and charts are described here. We will follow these steps to create our own Business Activity Monitoring dashboard as a shared Azure dashboard. I won’t repeat what’s in Microsoft’s documentation; I’ll just show how I’m creating the charts to be added to the dashboard.

Order Count by Channel

“Farm to Table” want to have a chart with the order count summarised by channel for the last 7 days in their BAM Dashboard. The query below returns those details.
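An assumed equivalent of that query:

```kusto
AzureDiagnostics
| where Category == "WorkflowRuntime" and trackedProperties_flag_s == "OrderReceived"
| where TimeGenerated > ago(7d)
| summarize OrderCount = count() by trackedProperties_channel_s
```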

Once we get the results, we need to select Chart and then the Doughnut option. After that, we are ready to pin our chart to the Azure shared dashboard.

[Figure: Order count by channel chart]

Order Total by Date and Channel

The company also want to have a chart with the order total summarised by date and channel for the last 7 days in their Dashboard. The query below returns those details.

Once we get the results, we need to select Chart and then Stacked Columns. After that, we are ready to pin our chart to the Azure shared dashboard.

[Figure: Order total by date and channel chart]

Business Activity Monitoring Dashboard

Once we have pinned our charts, we would be able to see them in our Azure shared dashboard. These charts are very handy and allow us to dynamically visualise the data as shown below.

[Figure: BAM dashboard]

Wrapping-up

In this post, we’ve seen how to easily track and monitor business information flowing through our Logic Apps, using Logic App native integration with OMS and Azure Log Analytics. Additionally, we’ve seen how friendly and cool the Log Analytics charts are. This gives Logic Apps another great competitive advantage as an enterprise-grade integration Platform as a Service (iPaaS) in the market.

I hope you’ve learned something useful and enjoyed this post! Feel free to post your comments or questions below.

Happy clouding!

Paco

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

Preparing Azure Logic Apps for CI/CD to Multiple Environments

 Introduction

Logic Apps can be created from the Azure Portal, or using Visual Studio. This works well if you want to create one Logic App at a time. However, if you want to deploy the same Logic App to multiple environments, e.g. Dev, Test, or Production, you want to do it in an automated way. Azure Resource Manager (ARM) Templates allow you to define Azure resources, including Logic Apps, for automated deployment to multiple environments in a consistent and repeatable way. ARM Templates can be tailored for each environment using a Parameters file.

The deployment of Logic Apps using ARM Templates and Parameters can be automated with different tools, such as, PowerShell, Azure CLI, or VSTS. In my projects, I normally use a VSTS release definition for this.

You probably have noticed that the Logic App Workflow Definition Language (the JSON code behind) has many similarities with the ARM Templates structure, including the use of expressions and functions, variables, and parameters.

ARM Template expressions and functions are written within JSON string literals wrapped with square brackets []. ARM expressions and functions can appear in different sections of the ARM template, including the resources member, which might contain Logic Apps. The value of these expressions is evaluated at deployment time. More information here.

Logic App expressions and functions are defined within the Logic App definition and might appear anywhere in a JSON string value. Logic Apps expressions and functions are evaluated at execution time. These are declared using the @ sign. More information here.
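To illustrate the two syntaxes side by side (the property names and values are hypothetical): an ARM template expression such as the first property below is resolved when the template is deployed, while the Logic App expression in the second property is evaluated on every workflow run:

```json
{
  "resolvedAtDeploymentTime": "[concat(parameters('environment'), '-orders-queue')]",
  "resolvedAtExecutionTime": "@concat('Order ', triggerBody()?['orderNumber'])"
}
```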

These similarities can be confusing in themselves. I’ve seen that it’s quite a common practice, in ARM Templates that contain Logic Apps, to use ARM template expressions inside the Logic App definition; for example, using ARM parameters, ARM variables or ARM functions (like concat) within the definition of a Logic App. This might seem OK, as this is what you would normally do to tailor your deployment for any other Azure resource. However, with Logic Apps this can be quite cumbersome. If you’ve done it, I’m almost sure you know what I’m talking about.

In this post, I’ll share some practices that I use to ease the preparation of Logic Apps for Continuous Integration / Continuous Delivery (CI/CD) to multiple environments using ARM Templates, when values inside the Logic App definition have to be customised per environment. If you don’t have to change values within the Logic App definition, then you might not need to follow every step of this post.

Why is it not a good idea to use ARM template expressions inside a Logic App definition?

As I mentioned above, if, when preparing your Logic Apps for CI/CD with ARM Templates, you have used ARM template expressions or functions inside a Logic App definition, you most probably have realised that it’s quite troublesome. I personally don’t like to do it that way for two reasons:

  1. Editing the Logic App definition to include ARM Template expressions or functions is not intuitive. Adding ARM Template expressions and functions to be resolved at deployment time, in a way that still produces valid Logic App expressions and functions to be evaluated at execution time, can be messy. Things become even harder when you have string functions in a Logic App, like @concat(), that accept values obtained from ARM template expressions, like [parameters()] or [variables()]. I’ve heard and read many people complaining about it.
  2. Updating your Logic App after you have your ARM Template ready, requires more work. It’s not unlikely that you would need to update your Logic App after you’ve prepared the ARM Template for it. Whether you need to fix a little bug found at testing, or you are required to change or add some functionality, the chances are that you would need to update the ARM template without the help of the Logic App Editor; and if you are unlucky, changes would touch those complex ARM template expressions inside your Logic App definition. Not very fun!

So, the question is, is it possible to create ARM Templates for Logic Apps that can be parameterised for multiple environments while avoiding using ARM template expressions inside the Logic App definition? Fortunately, it is :). Below, I describe how.

Scenario

For this post, I will work with a rather simple scenario: a Logic App that is triggered when a message is received in a Service Bus queue and posts the message to an HTTPS endpoint using basic auth. The endpoint URL, username, and password will be different for each environment. Additionally, the Service Bus API Connection will have to be defined per environment.

This very simple workflow created using the Logic App editor is shown below:

And the code behind this workflow is as follows:
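
The full code behind is not reproduced here, but a minimal sketch of the HTTP action with hard-coded values (the action name, endpoint and credentials below are hypothetical) would be along these lines:

    "Post_message_to_endpoint": {
      "type": "Http",
      "runAfter": {},
      "inputs": {
        "method": "POST",
        "uri": "https://dev.contoso.example/orders",
        "authentication": {
          "type": "Basic",
          "username": "dev-user",
          "password": "dev-password"
        },
        "body": "@base64ToString(triggerBody()?['ContentData'])"
      }
    }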

The code is very straightforward, but the endpoint, username, and password are still static. Not ideal for CI/CD!

Preparing the Logic App for CI/CD to be deployed to multiple environments

In this section, I’ll show how you can prepare your Logic App for CI/CD to be deployed to multiple environments using ARM Templates, without having to use any ARM Template expressions or functions inside a Logic App definition.

1. Add Logic Apps parameters to the workflow for every value that is to be changed for each environment.

Similarly to ARM Templates, the Logic App workflow definition language accepts parameters. We can use these Logic Apps parameters to prepare our Logic App definition for CI/CD. We need to add a Logic App parameter for every value that is to be tailored for each environment. Unfortunately, at the time of writing, adding Logic App parameters can only be done via the code view.

Using the code view, we need to:

  • Add the parameters definition with a default value. You should follow the same principles as with ARM template parameters, but in this case, they are defined within the Logic App definition. The default value is the one you would otherwise use as a static value at development time.
  • Update the workflow definition to use those parameters instead of the fixed values.

I’ve done this using the code view of the workflow shown above. The updated workflow definition is as follows.
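
In outline, and with hypothetical parameter names, the relevant fragments of the updated definition look roughly like this: a parameters section with default values, and the HTTP action now referencing those parameters with @parameters().

    "definition": {
      "parameters": {
        "endpointUrl": { "type": "String", "defaultValue": "https://dev.contoso.example/orders" },
        "endpointUsername": { "type": "String", "defaultValue": "dev-user" },
        "endpointPassword": { "type": "SecureString", "defaultValue": "dev-password" }
      },
      "actions": {
        "Post_message_to_endpoint": {
          "type": "Http",
          "runAfter": {},
          "inputs": {
            "method": "POST",
            "uri": "@parameters('endpointUrl')",
            "authentication": {
              "type": "Basic",
              "username": "@parameters('endpointUsername')",
              "password": "@parameters('endpointPassword')"
            },
            "body": "@base64ToString(triggerBody()?['ContentData'])"
          }
        }
      }
    }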

After this update, the workflow should work just as before, but now, instead of having fixed values, you are using Logic Apps parameters with default values. If you are doing this with your own Logic App, you can test it yourself 🙂

2. Get the Logic App ARM Template for CI/CD.

Once the Logic App is ready, we can get the ARM Template for CI/CD. One easy way to do it is to use the Visual Studio Tools for Logic Apps. This requires Visual Studio 2015 or 2017, the latest Azure SDK and the Cloud Explorer. You can also use the Logic App Template Creator PowerShell module. More information on how to create ARM Templates for Logic Apps here.

The Cloud Explorer will allow you to log in to your Azure Subscription and see the supported Azure resources, including Logic Apps. When you expand the Logic Apps menu, you will see all the Logic Apps available for that subscription.

Once you’ve found the Logic App you want to export, right click on it, and click on Open with Logic App Editor. This will open the Logic App Editor on Visual Studio.

In addition to allowing you to edit Logic Apps in Visual Studio, the Visual Studio Logic App Tools let you download the ARM Template that includes the Logic App. You just need to click the Download button, and you will get an almost ready-to-deploy ARM Template. This functionality exports the Logic App API Connections as well.

For this workflow, I got an ARM Template as follows:

As you can see, this ARM Template includes:

  • ARM Template parameters definition. This is where we define the ARM Template parameters. We can set a default value. The actual value for each environment is to be set on the ARM Parameters file.
  • Logic App parameters definition: These are declared within the definition of the Logic App. These are the ones we can define using the code view of the Logic App, as we did above.
  • Logic App parameters value set: Here is where we set the values of the parameters for the Logic App. This section is declared outside of the definition property of the Logic App.

The structure of the ARM Template can be seen in the picture below.
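
In a trimmed, sketched form (hypothetical names, triggers and actions omitted), that structure is:

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "logicAppName": { "type": "string", "defaultValue": "my-logic-app" }
      },
      "variables": {},
      "resources": [
        {
          "type": "Microsoft.Logic/workflows",
          "apiVersion": "2016-06-01",
          "name": "[parameters('logicAppName')]",
          "location": "[resourceGroup().location]",
          "properties": {
            "definition": {
              "parameters": {
                "endpointUrl": { "type": "String", "defaultValue": "https://dev.contoso.example/orders" }
              },
              "triggers": {},
              "actions": {}
            },
            "parameters": {
              "endpointUrl": { "value": "https://dev.contoso.example/orders" }
            }
          }
        }
      ]
    }

The outer parameters element holds the ARM Template parameters, the definition.parameters element holds the Logic App parameters, and the properties.parameters element is the Logic App parameters value set.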

3. Set the Logic App parameters values with ARM Template expressions and functions.

Once we have the ARM Template, we can set the Logic App parameters values with ARM expressions and functions, including ARM parameters or ARM variables. I’ve done it with my ARM Template as shown below.

Before you check the updated ARM Template, some things to note:

  • I added comments to the ARM Template only to make it easier to read and understand, but I don’t recommend doing this. Comments are not supposed to be supported in JSON documents; however, Visual Studio and ARM Templates allow them.
  • I used the “-armparam” and “-armvar” suffixes on the ARM Template parameters and variables, respectively. I did this only to show a clear distinction between ARM Template parameters and variables and those of the Logic App. The suffixes are not required, as the notation alone is sufficient (square brackets [] for ARM Template expressions and functions, and the @ sign for those of Logic Apps).
  • I just used ARM Template parameters and variables to set the values of Logic App parameters, but you can use any other ARM Template function or expression that you might require to set Logic App parameter values.
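
With those caveats in mind, the Logic App parameters value set section then ends up looking something like this (again, hypothetical names following the suffix convention above):

    "properties": {
      "definition": { ... },
      "parameters": {
        "endpointUrl": { "value": "[parameters('endpointUrl-armparam')]" },
        "endpointUsername": { "value": "[parameters('endpointUsername-armparam')]" },
        "endpointPassword": { "value": "[parameters('endpointPassword-armparam')]" }
      }
    }

The same approach works for the $connections parameter of the Service Bus API Connection, whose connection id can be composed with ARM template expressions in this same section.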

As you can see, now we are only using ARM Template expressions and functions outside the Logic App definition. This is much easier to read and maintain. Don’t you think?

4. Prepare your ARM Parameters file for each environment.

Now that we have the ARM Template ready, we can prepare an ARM Parameters file for our deployment to each environment. Below I show an example of this.
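
A trimmed, hypothetical parameters file for a Test environment could look like the following. In a real pipeline, you would typically source secrets such as the password from Azure Key Vault or from secret pipeline variables, rather than committing them to the repo.

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "logicAppName": { "value": "my-logic-app-test" },
        "endpointUrl-armparam": { "value": "https://test.contoso.example/orders" },
        "endpointUsername-armparam": { "value": "test-user" },
        "endpointPassword-armparam": { "value": "test-password" }
      }
    }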

5. Work on your CI/CD Pipeline.

Once we have the ARM Template and the ARM Parameter files, we can automate the deployment using our preferred tool. If you want to use VSTS, this is a good video that shows you how.

6. Deploy and enjoy.

Once you have deployed the ARM Template, you will be able to see the deployed Logic App. The Logic App parameters value set section is hidden, but if you run the Logic App, you will see that the values have been set accordingly.

Do you want this to be easier?

You might be thinking, just as I am, that this process is not as intuitive as it should be and is a bit time-consuming. If you wish to ask the product team to improve this, you might want to vote for these user voice requests on the links below:

Wrapping Up.

In this post, I’ve shown how to prepare your Logic Apps for CI/CD to multiple environments using ARM Templates in a more convenient way, i.e. without using ARM Template expressions or functions inside the Logic App definition. I believe that this approach makes the ARM Template of a Logic App much easier to read and to maintain.

This method not only avoids the need to write complex ARM Template expressions inside a Logic App definition, but also allows you to update your Logic App in the Designer after it has been deployed using ARM Templates, and later update the ARM Template by simply updating the Logic App definition section. That’s much better, isn’t it?

I hope you’ve found this post handy, and it has helped you to streamline the configuration of your CI/CD pipelines when using Logic Apps.

Do you have a different preferred way of preparing your Logic Apps for CI/CD? Feel free to leave your comments or questions below,

Happy clouding and automating!

P.S. And remember: “I will never use ARM Template expressions inside a Logic App definition” 😉

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

Monitoring Configuration Drifts on Azure with Event Grid and Logic Apps

Introduction

Azure Event Grid is a first-class, hyperscale eventing platform with intelligent filtering that has recently been released in preview, and it is a real game changer for building event-driven serverless apps on Azure. There have been many other posts, including this one from my colleague Dan Toomey, which highlight all the magic, features and benefits of this new offering on Azure. Thus, I won’t reiterate them in this post. My goal is, however, to show how to solve a requirement that I have heard more than a couple of times.

As mentioned here, there are three typical scenarios where Azure Event Grid comes quite handy:

  1. Serverless Applications
  2. Application Integration, and
  3. Ops Automation

In this post, I will show how to build an Azure Ops Automation workflow to monitor configuration drifts on Azure resources using Event Grid and Logic Apps.

User Story

  • As an Op, I want to be notified whenever there is a configuration drift on my Azure Resources.

Many organisations and teams have implemented Continuous Integration / Continuous Delivery (CI/CD), and they want to keep all their infrastructure and solution configuration as code, e.g. in a VSTS Git repo. This has become quite a common practice, and the source of truth for all infrastructure and configuration as code must be in source control. Role-Based Access Control (RBAC) on Azure allows us to restrict changes to Azure resources to certain roles or users. Furthermore, Azure provides a way to lock resources at different levels (subscription, resource group or resource) to prevent users from deleting or modifying critical resources, thus avoiding configuration drifts.

However, in some exceptional cases, Ops or Admins might need to update the configuration without having the time to go through the process of updating the repo first and then triggering the CI/CD pipeline. These configuration drifts will put the Git repo out-of-sync, which results in a very high risk of subsequent releases overwriting changes in the environment with unintended side effects. Thus, there is a need to monitor the Azure resources for configuration drifts, so the source of truth can always be kept in-sync.

Scenario

As you may have noticed, the user story above is quite broad, so let’s reduce its scope for demonstration purposes to:

  • As an Op, I want to be notified whenever there is a configuration drift on my Azure Web App app settings.

For this scenario, we want to receive a notification whenever the app settings of an Azure App Service (Web App) are updated and are no longer aligned to the “desired state”.

To show how this can be achieved with Azure Event Grid (and the Resource Groups Publisher) and Logic Apps, I will build a Logic App workflow that is triggered whenever the app settings of an Azure Web App are modified, and that validates whether these settings differ from the desired state.

Solution Prerequisites

This solution requires the following:

  1. An Azure App Service (Web App) with some app settings configured. In my case, I configured the app settings as follows:

  2. A JSON definition of the “Desired State” of the app settings stored in an Azure Storage Blob Container, in my scenario this is as below:
    {
      "Setting-01": "expected-value-01",
      "Setting-02": "expected-value-02",
      "Setting-03": "expected-value-03"
    }
    

Solution: A Logic App Workflow with an Event Grid Trigger

My solution implemented as a Logic App workflow with an Event Grid Trigger will follow the algorithm described below:

  1. Trigger the workflow when the app settings of the Web App are updated, using the Resource Groups Event Grid Event Publisher.
  2. Check the status of the event: if it was not “Succeeded”, terminate the workflow; if it was “Succeeded”, continue the workflow.
  3. Get the Updated State of the app settings of the Web App using the Azure Resource Connector of Logic Apps.
  4. Get the Desired State from a Blob container.
  5. Compare the New State with the Desired State. If the New State is different to the Desired State, then send a notification with the details of the event.

Below, I describe the two main steps of the workflow in more detail.

1. Configuring the Logic Apps Event Grid Trigger

To configure the trigger, we need to specify:

  1. Azure Subscription
  2. Select the Resource Type, in this case Microsoft.Resources.resourceGroups, as we are monitoring Azure Resource Group changes.
  3. In the Resource Name, we enter the Resource Group name.
  4. In the Prefix Filter, we specify the ResourceId; in my case, we are monitoring the App Settings of an Azure App Service.
  5. In this case, we don’t need to set a Suffix Filter.
  6. And finally, we give a name to the topic subscription we are creating.

Once we execute a Logic App with this trigger, we should get a payload similar to the one shown below.
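
The payload follows the Event Grid event schema for resource events. A trimmed example (identifiers replaced with placeholders) looks roughly like this; the data.status property is the one the workflow checks in step 2:

    {
      "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>",
      "subject": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<web-app>/config/appsettings",
      "eventType": "Microsoft.Resources.ResourceWriteSuccess",
      "eventTime": "2017-10-01T10:15:30.000Z",
      "id": "<event-id>",
      "data": {
        "correlationId": "<correlation-id>",
        "resourceProvider": "Microsoft.Web",
        "resourceUri": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<web-app>/config/appsettings",
        "operationName": "Microsoft.Web/sites/config/write",
        "status": "Succeeded",
        "subscriptionId": "<subscription-id>",
        "tenantId": "<tenant-id>"
      }
    }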

2. Configuring the Logic App Azure Resource Manager Connector

Logic Apps provide an Azure Resource Manager connector, which allows us to do CRUD operations on Azure via Azure Resource Manager. In our scenario, we are going to use the Invoke Resource Operation to list the App Settings of a Web App. This will return the current (new) state of the Azure resource, so we can compare it to the Desired State later in the workflow. In your own scenario, you can make use of other operations, like List Resources by Resource Group, Read a Resource, or Read a Resource Group to get the state of your Azure resources. The configuration applied for our scenario is as follows.
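
Expressed as a simple set of field values rather than a designer screenshot (the subscription and resource names are placeholders, the exact field labels may differ slightly in the designer, and the API version shown is an assumption), the configuration maps roughly to:

    {
      "Subscription": "<subscription-id>",
      "Resource Group": "<resource-group>",
      "Resource Provider": "Microsoft.Web",
      "Short Resource Id": "sites/<web-app>/config/appsettings",
      "Client Api Version": "2016-08-01",
      "Action": "list"
    }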

The Logic App Workflow

The implemented solution as a Logic App workflow is shown below. I hope it is self-explanatory. I included comments on each action to make it easier to follow.

Quite straightforward, isn’t it?

And in case you are wondering about the code behind, below is the same workflow showing the code view of the relevant actions.
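
To give a flavour of the key action, a naive drift check can be expressed as a condition that compares the serialised desired state against the properties returned by the Invoke Resource Operation action. The action names below are hypothetical, and a simple string comparison is used for brevity, which assumes both documents use the same property order:

    "Check_for_configuration_drift": {
      "type": "If",
      "runAfter": {},
      "expression": "@not(equals(string(body('Invoke_resource_operation')?['properties']), string(json(body('Get_desired_state_blob')))))",
      "actions": {
        "Send_drift_notification": {
          "type": "ApiConnection",
          "inputs": {}
        }
      },
      "else": {
        "actions": {}
      }
    }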

If you want to have a look at the ARM template, including the full code behind of this Logic App, you can check it out here.

Wrapping Up

In this post, I’ve shown how to monitor configuration drifts on Azure resources using Event Grid and Logic Apps. We’ve seen the Resource Groups Event Publisher of Event Grid in action and how it comes in very handy for Ops Automation scenarios. Now, you can start monitoring changes on your Azure resources by just creating subscriptions with the corresponding prefix and suffix filters on Logic Apps. What other useful Ops Automation scenarios can you think of using Event Grid and Logic Apps?

Please feel free to add your comments and questions below,

Happy eventing!

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

When to Use an Azure App Service Environment v2 (App Service Isolated)

Introduction

The Azure App Service Environment (ASE) is a premium feature offering of Azure App Services which is fully isolated, highly scalable, and runs on a customer’s virtual network. On an ASE you can host Web Apps, API Apps, Mobile Apps and Azure Functions. The first generation of the App Service Environment (ASE v1) was released in late 2015. Two significant updates were launched after that: in July 2016, they added the option of having an Internal Load Balancer, and in August of that year, an Application Gateway with a Web Application Firewall could be configured for the ASE. After all the feedback Microsoft had been receiving on this offering, they started working on the second generation of the App Service Environment (ASE v2), and in July 2017 it was released as Generally Available.

In my previous job, I wrote the post “When to Use an App Service Environment”, which referred to the first generation (ASE v1). I’ve decided to write an updated version of that post, mainly because that one has been one of my posts with the most comments and questions, and I know the App Service Environment, also called App Service Isolated, will continue to grow in popularity. Even though the ASE v2 has been simplified, I still believe many people would have questions about it, or would want to make sure that they have no other option but to pay for this premium offering to meet certain needs when deploying their solutions on the Azure PaaS.

When you are planning to deploy Azure App Services, you have the option of creating them on a multi-tenant environment or on your own isolated (single-tenant) App Service Environment. If you want to understand in detail what is meant by “multi-tenant environment” for Azure App Services, I recommend reading this article. When they refer to a “Scale-Unit” in that article, they are talking about this multi-tenant shared infrastructure. You could picture an App Service Environment as having a very similar architecture, but with all the building blocks dedicated to you, including the Front-End, File Servers, API Controllers, Publisher, Data Roles, Database, Web Workers, etc.

In this post, I will try to summarise when it is required to use an App Service Environment (v2), and, in case you have an App Service Environment v1, why it makes a lot of sense to migrate it to the second generation.

App Service Environment v2 Pricing

Before getting too excited about the functionality and benefits of the ASE v2 or App Service Isolated, it’s important to understand its pricing model.

Even though they have abstracted much of the complexity of the ASE in the second generation, we still need to be familiar with the architecture of this offering to properly calculate the costs of the App Service Environment v2. To calculate the total cost of your ASE, you need to consider the App Service Environment Base Fee and the cost of the Isolated Workers.

The App Service Environment Base Fee covers the cost of all the infrastructure required to run your single-tenant and isolated Azure App Services, including load balancing, high availability, publishing, continuous delivery, app settings shared across all instances, deployment slots, management APIs, etc. None of your assemblies or code is executed in the instances which are part of this layer. The Isolated Workers, in turn, are the ones executing your Web Apps, API Apps, Mobile Apps or Functions. You decide the size and how many Isolated Workers you want to spin up, and thus the cost of the worker layer. Both layers are charged by the hour. Below, the prices for the Australian Regions in Australian Dollars are shown.

In Australia, the App Service Environment Base Fee is above $1,700 AUD per month, and the Isolated I1 instance is close to $500 AUD per month. This means that the entry level of an ASE v2 with one Isolated Worker costs around $2,200 AUD per month, or above $26,000 AUD per year, which is very similar to the price of the ASE v1 in this region. This cost can easily escalate by scaling up or scaling out the ASE. It’s noteworthy that prices vary from region to region. For instance, according to the Azure pricing calculator, at the time of writing, the prices for the Australian Regions are around 35% more expensive than those in West US 2. To calculate your own costs, in your region and in your currency, check the pricing calculator.

Moreover, the App Service Environment Base Fee is calculated based on the default configuration, which uses I1 instances for the ASE Front End, with the scaling rule of adding one Front End instance for every 15 worker instances, as described in the Front End scale configuration page shown below. If you keep this configuration, the App Service Environment Base Fee will stay the same, regardless of the number and size of workers. However, you can scale up the Front End instances to I2 or I3, or reduce the number of workers per Front End instance. This would have an impact on the App Service Environment Base Fee. To calculate the extra cost, you would need to add the cost of every additional core on top of the base configuration. Before changing the Front End scaling configuration, bear in mind that the Front End instances act only as a layer-seven load balancer (round robin) and perform SSL termination. All the compute of your App Services is executed in the worker instances.

With this price tag, the value and benefits of the ASE must be clear enough that we can justify the investment to the business.

The benefits of the Azure App Service Isolated or App Service Environment v2.

To understand the benefits and advanced features of an App Service Environment v2, it’s worth comparing what we get with this premium offering against what we get by deploying an Azure App Service in the multi-tenant environment. This comparison is shown in the table below.

Virtual Network (VNET) Integration
  • Multi-tenant environment: Yes. Azure App Services can be integrated to an Azure Virtual Network.
  • App Service Isolated / ASE v2: Yes. An ASE is always deployed in the customer’s Virtual Network.

Static Inbound IP Address
  • Multi-tenant environment: Yes. By default, Azure App Services get assigned a virtual IP address. However, this is shared with other App Services in that region. You can bind an IP-based SSL certificate to your App Service, which will give you a dedicated public inbound IP address.
  • App Service Isolated / ASE v2: Yes. ASEs provide a static virtual inbound IP address (VIP). This VIP can be public or private, depending on whether the ASE is configured with an Internal Load Balancer (ILB) or not. More information on the ASE network architecture here.

Static Outbound IP Address
  • Multi-tenant environment: No. The outbound IP address of an App Service is not static; it can be any address within a certain range, which is not static either.
  • App Service Isolated / ASE v2: Yes. ASEs provide a static public outbound IP address. More information here.

Connecting to Resources On-Premises
  • Multi-tenant environment: Yes. Azure App Service VNET integration provides the capability to access resources on-premises via a VPN over the public Internet. Additionally, Azure Hybrid Connections can be used to connect to resources on-premises without requiring major firewall or network configurations.
  • App Service Isolated / ASE v2: Yes. In addition to VPN over the public Internet and Hybrid Connections support, an ASE provides the ability to connect to resources on-premises via ExpressRoute, which provides faster, more reliable and more secure connectivity without going over the public Internet. Note: ExpressRoute has its own pricing model.

Private Access Only
  • Multi-tenant environment: No. App Services are always accessible via the public Internet. One way to restrict access to your App Service is using IP and Domain restrictions, but the App Service is still reachable from the Internet.
  • App Service Isolated / ASE v2: Yes. An ASE can be deployed with an Internal Load Balancer, which will lock down your App Services to be accessible only from within your VNET, or via ExpressRoute or Site-to-Site VPN.

Control over Inbound and Outbound Traffic
  • Multi-tenant environment: No.
  • App Service Isolated / ASE v2: Yes. An ASE is always deployed on a subnet within a VNET. Inbound and outbound traffic can be controlled using a network security group.

Web Application Firewall
  • Multi-tenant environment: Yes. Starting from mid-July 2017, Azure Application Gateway with Web Application Firewall supports App Services in the multi-tenant environment. More info on how to configure it here.
  • App Service Isolated / ASE v2: Yes. An Azure Application Gateway with Web Application Firewall can be configured to protect App Services on an ASE by preventing SQL injections, session hijacks, cross-site scripting attacks, and other attacks. Note: The Application Gateway with Web Application Firewall has its own pricing model.

SLA
  • Multi-tenant environment: 99.95%. No SLA is provided for the Free or Shared tiers. App Services from the Basic tier upwards provide an SLA of 99.95%.
  • App Service Isolated / ASE v2: 99.95%. App Services deployed on an ASE provide an SLA of 99.95%.

Instance Sizes / Scale-Up
  • Multi-tenant environment: Full range. App Services can be deployed on almost the full range of tiers, from Free to Premium v2.
  • App Service Isolated / ASE v2: Three sizes. Workers on an ASE v2 come in three Isolated sizes (I1, I2 and I3).

Scalability / Scale-Out
  • Multi-tenant environment: Maximum instances: Basic: 3, Standard: 10, Premium: 20.
  • App Service Isolated / ASE v2: Up to 100 Isolated Worker instances.

Deployment Time
  • Multi-tenant environment: Very fast. The deployment of new App Services on the multi-tenant environment is rather fast, usually less than 2 minutes. This can vary.
  • App Service Isolated / ASE v2: Slower. The deployment of a new App Service Environment can take between 60 and 90 minutes (tested in the Australian Regions). This can vary. This is important to consider, particularly in cold DR scenarios.

Scaling-Out Time
  • Multi-tenant environment: Very fast. Scaling out an App Service usually takes less than 2 minutes. This can vary.
  • App Service Isolated / ASE v2: Slower. Scaling out in an App Service Environment can take between 30 and 40 minutes (tested in the Australian Regions). This can vary. This is something to consider when configuring auto-scaling.

Reasons to migrate your ASE v1 to an ASE v2

If you already have an App Service Environment v1, there are many reasons to migrate to the second generation, including:

  • More horsepower: With the ASE v2, you get Dv2-based machines, with faster cores, SSD storage, and twice the memory per core when compared to the ASE v1. You are practically getting double performance per core.
  • No stand-by workers for fault-tolerance: To provide fault-tolerance, the ASE v1 requires you to have one stand-by worker for every 20 active workers on each worker pool. You have to pay for those stand-by workers. ASE v2 has abstracted that for you, and you don’t need to pay for those.
  • Streamlined scaling: If you want to implement auto-scaling on an ASE v1, you have to manage scaling not only at the App Service Plan level, but at the Worker Pool level as well. For that, you have to use a complex inflation rate formula, which requires you to have some instances to be waiting and ready for whenever an auto-scale condition kicks off. This has its own cost implications. ASEs v2 allow you to auto-scale your App Service Plan the same way you do it with your multi-tenant App Services, without the complexity of managing worker pools and without paying for waiting instances.
  • Cost saving: Because you are getting an upgraded performance, you should be able to host the same workloads using half as much in terms of cores. Additionally, you don’t need to pay for fault-tolerance or auto-scaling stand-by workers.
  • Better experience and abstraction: Deployment and scaling of the ASE v2 is much simpler and friendlier than it was with the first generation.

Wrapping Up

So, coming back to the original question: when to use an App Service Environment? When is it required, and when does it make sense to pay the premium price of the App Service Environment?

  • When we need to restrict the App Services to be accessible only from within the VNET or via Express Route or Site-to-Site VPN, OR
  • When we need to control inbound and outbound traffic to and from our App Services, OR
  • When we need a connection between the App Services and resources on-premises via a secure and reliable channel (ExpressRoute) without going via the public Internet OR
  • When we require much more processing power, i.e. scaling out to more than 20 instances OR
  • When a static outbound IP Address for the App Service is required.

What else would you consider when deciding whether to use an App Service Environment for your workload or not? Feel free to post your comments or feedback below!

Happy clouding!

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.