Preparing Azure Logic Apps for CI/CD to Multiple Environments


Logic Apps can be created from the Azure Portal or using Visual Studio. This works well if you want to create one Logic App at a time. However, if you want to deploy the same Logic App to multiple environments, e.g. Dev, Test, or Production, you want to do it in an automated way. Azure Resource Manager (ARM) Templates allow you to define Azure resources, including Logic Apps, for automated deployment to multiple environments in a consistent and repeatable way. ARM Templates can be tailored for each environment using a Parameters file.

The deployment of Logic Apps using ARM Templates and Parameters can be automated with different tools, such as PowerShell, Azure CLI, or VSTS. In my projects, I normally use a VSTS release definition for this.

You have probably noticed that the Logic App Workflow Definition Language (the JSON code behind) has many similarities with the ARM Template structure, including the use of expressions and functions, variables, and parameters.

ARM Template expressions and functions are written within JSON string literals wrapped with square brackets []. ARM expressions and functions can appear in different sections of the ARM template, including the resources member, which might contain Logic Apps. The value of these expressions is evaluated at deployment time. More information here.

Logic App expressions and functions are defined within the Logic App definition and might appear anywhere in a JSON string value. They are declared using the @ sign and are evaluated at execution time. More information here.
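To make the distinction concrete, here is a minimal, hypothetical pair of snippets (the parameter name is made up for illustration):

    ARM Template expression, resolved at deployment time:
        "location": "[resourceGroup().location]"

    Logic App expression, evaluated at execution time:
        "uri": "@parameters('endpointUrl')"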

These similarities can be confusing by themselves. I’ve seen that it’s quite a common practice, in ARM Templates that contain Logic Apps, to use ARM Template expressions inside the Logic App definition; for example, using ARM parameters, ARM variables or ARM functions (like concat) within the definition of a Logic App. This might seem OK, as this is what you would normally do to tailor your deployment for any other Azure resource. However, with Logic Apps, this can be quite cumbersome. If you’ve done it, I’m almost sure that you know what I’m talking about.

In this post, I’ll share some practices that I use to ease the preparation of Logic Apps for Continuous Integration / Continuous Delivery (CI/CD) to multiple environments using ARM Templates, when values inside the Logic App definition have to be customised per environment. If you don’t have to change values within the Logic App definition, then you might not need to follow every step of this post.

Why is it not a good idea to use ARM Template expressions inside a Logic App definition?

As I mentioned above, if, when preparing your Logic Apps for CI/CD with ARM Templates, you have used ARM Template expressions or functions inside a Logic App definition, you have most probably realised that it’s quite troublesome. I personally don’t like to do it that way for two reasons:

  1. Editing the Logic App definition to include ARM Template expressions or functions is not intuitive. Adding ARM Template expressions and functions that are resolved at deployment time in a way that still produces Logic App expressions and functions to be evaluated at execution time can be messy. Things become even harder when you have string functions in a Logic App, like @concat(), that take values obtained from ARM Template expressions, like [parameters()] or [variables()] (see the sketch after this list). I’ve heard and read of many people complaining about it.
  2. Updating your Logic App after you have your ARM Template ready requires more work. It’s not unlikely that you will need to update your Logic App after you’ve prepared its ARM Template. Whether you need to fix a little bug found during testing, or you are required to change or add some functionality, the chances are that you will need to update the ARM Template without the help of the Logic App editor; and if you are unlucky, the changes will touch those complex ARM Template expressions inside your Logic App definition. Not very fun!
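To illustrate the first point, below is a hypothetical fragment (the parameter and property names are made up) of what an HTTP action URI can end up looking like when an ARM Template concat and an ARM parameter are combined with a Logic App expression inside the definition:

    "uri": "[concat('@{concat(''https://', parameters('endpointHost'), '/api/orders/'', triggerBody()?[''orderId''])}')]"

which, after deployment, becomes:

    "uri": "@{concat('https://myhost.example.com/api/orders/', triggerBody()?['orderId'])}"

Getting the single-quote escaping right for both engines, and keeping it right after every change, is where most of the pain comes from.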

So, the question is: is it possible to create ARM Templates for Logic Apps that can be parameterised for multiple environments while avoiding ARM Template expressions inside the Logic App definition? Fortunately, it is :). Below, I describe how.


For this post, I will work with a rather simple scenario: a Logic App that is triggered when a message is received in a Service Bus queue and posts the message to an HTTPS endpoint using basic auth. The endpoint URL, the username and the password will be different for each environment. Additionally, the Service Bus API Connection will have to be defined per environment.

This very simple workflow created using the Logic App editor is shown below:

And the code behind this workflow is as follows:
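The actual code behind is not reproduced here; the sketch below is only an approximation of its shape, with placeholder queue, endpoint and credential values, and with the Service Bus trigger inputs abbreviated:

    {
        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "$connections": { "defaultValue": {}, "type": "Object" }
        },
        "triggers": {
            "When_a_message_is_received_in_a_queue": {
                "type": "ApiConnection",
                "recurrence": { "frequency": "Minute", "interval": 3 },
                "inputs": {
                    "method": "get",
                    "path": "/@{encodeURIComponent('myqueue')}/messages/head",
                    "host": { "connection": { "name": "@parameters('$connections')['servicebus']['connectionId']" } }
                }
            }
        },
        "actions": {
            "HTTP_Post_message_to_endpoint": {
                "type": "Http",
                "runAfter": {},
                "inputs": {
                    "method": "POST",
                    "uri": "https://dev.endpoint.example/api/messages",
                    "body": "@triggerBody()",
                    "authentication": {
                        "type": "Basic",
                        "username": "dev-user",
                        "password": "dev-password"
                    }
                }
            }
        },
        "outputs": {}
    }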

The code is very straightforward, but the endpoint, username and password are still static. Not ideal for CI/CD!

Preparing the Logic App for CI/CD to be deployed to multiple environments

In this section, I’ll show how you can prepare your Logic App for CI/CD to be deployed to multiple environments using ARM Templates, without having to use any ARM Template expressions or functions inside a Logic App definition.

1. Add Logic Apps parameters to the workflow for every value that is to be changed for each environment.

Similarly to ARM Templates, the Logic App workflow definition language accepts parameters. We can use these Logic Apps parameters to prepare our Logic App definition for CI/CD. We need to add a Logic App parameter for every value that is to be tailored for each environment. Unfortunately, at the time of writing, adding Logic App parameters can only be done via the code view.

Using the code view, we need to:

  • Add the parameters definition with a default value. You should follow the same principles as for ARM Template parameters, but in this case they are defined within the Logic App definition. The default value is the one you would otherwise use as the static value at development time.
  • Update the workflow definition to use those parameters instead of the fixed values.

I’ve done this using the code view of the workflow shown above. The updated workflow definition is as follows.
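Again as an approximation with made-up names, the updated definition adds the Logic App parameters with default values and references them from the HTTP action:

    "parameters": {
        "$connections": { "defaultValue": {}, "type": "Object" },
        "endpointUrl": { "defaultValue": "https://dev.endpoint.example/api/messages", "type": "String" },
        "endpointUsername": { "defaultValue": "dev-user", "type": "String" },
        "endpointPassword": { "defaultValue": "dev-password", "type": "SecureString" }
    },
    "actions": {
        "HTTP_Post_message_to_endpoint": {
            "type": "Http",
            "runAfter": {},
            "inputs": {
                "method": "POST",
                "uri": "@parameters('endpointUrl')",
                "body": "@triggerBody()",
                "authentication": {
                    "type": "Basic",
                    "username": "@parameters('endpointUsername')",
                    "password": "@parameters('endpointPassword')"
                }
            }
        }
    }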

After this update, at this point in time, the workflow should work just as before; but now, instead of having fixed values, you are using Logic App parameters with default values. If you are doing this for your own Logic App, you can test it yourself 🙂

2. Get the Logic App ARM Template for CI/CD.

Once the Logic App is ready, we can get the ARM Template for CI/CD. One easy way to do it is to use the Visual Studio Tools for Logic Apps. This requires Visual Studio 2015 or 2017, the latest Azure SDK and the Cloud Explorer. You can also use the Logic App Template Creator PowerShell module. More information on how to create ARM Templates for Logic Apps here.

The Cloud Explorer will allow you to log in to your Azure Subscription and see the supported Azure resources, including Logic Apps. When you expand the Logic Apps menu, you will see all the Logic Apps available for that subscription.

Once you’ve found the Logic App you want to export, right click on it, and click on Open with Logic App Editor. This will open the Logic App Editor on Visual Studio.

In addition to allowing you to edit Logic Apps in Visual Studio, the Visual Studio Logic App Tools let you download the ARM Template that includes the Logic App. You just need to click the Download button, and you will get an almost ready-to-deploy ARM Template. This functionality exports the Logic App API Connections as well.

For this workflow, I got an ARM Template as follows:
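The exported template is too long to reproduce in full here, so the skeleton below is an abbreviated approximation (names and values are illustrative); the comments only mark the three parts described in the list that follows:

    {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            // ARM Template parameters definition
            "logicAppName": { "type": "string", "defaultValue": "servicebus-to-https" }
        },
        "variables": {},
        "resources": [
            {
                "type": "Microsoft.Logic/workflows",
                "apiVersion": "2016-06-01",
                "name": "[parameters('logicAppName')]",
                "location": "[resourceGroup().location]",
                "properties": {
                    "definition": {
                        // Logic App parameters definition (with default values), plus triggers and actions
                        "parameters": { },
                        "triggers": { },
                        "actions": { }
                    },
                    // Logic App parameters value set
                    "parameters": { }
                }
            }
            // plus a Microsoft.Web/connections resource for the Service Bus API Connection (not shown)
        ]
    }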

As you can see, this ARM Template includes

  • ARM Template parameters definition. This is where we define the ARM Template parameters. We can set a default value. The actual value for each environment is to be set on the ARM Parameters file.
  • Logic App parameters definition: These are declared within the definition of the Logic App. These are the ones we can define using the code view of the Logic App, as we did above.
  • Logic App parameters value set: Here is where we set the values of the Logic App parameters. This section is declared outside of the definition property of the Logic App.

The structure of the ARM Template can be seen in the picture below.

3. Set the Logic App parameters values with ARM Template expressions and functions.

Once we have the ARM Template, we can set the Logic App parameter values with ARM expressions and functions, including ARM parameters or ARM variables. I’ve done it with my ARM Template as shown below.

Before you check the updated ARM Template, some things to note:

  • I added comments to the ARM Template only to make it easier to read and understand, but I don’t recommend doing this in general. Comments are not part of the JSON specification; however, Visual Studio and ARM Template deployments do allow them.
  • I used the “-armparam” and “-armvar” suffixes on the ARM Template parameters and variables, respectively. I did it only to show a clear distinction between ARM Template parameters and variables and Logic App parameters and variables; the notation alone is already sufficient (square brackets [] for ARM Template expressions and functions, and the @ sign for Logic App ones).
  • I just used ARM Template parameters and variables to set the values of Logic App parameters, but you can use any other ARM Template function or expression that you might require to set Logic App parameter values.
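With those notes in mind, the relevant fragment of the updated ARM Template looks roughly like the sketch below (abbreviated, and with illustrative names and values):

    "parameters": {
        "endpointUrl-armparam": { "type": "string", "defaultValue": "https://dev.endpoint.example/api/messages" },
        "endpointUsername-armparam": { "type": "string" },
        "endpointPassword-armparam": { "type": "securestring" }
    },
    "variables": {
        "serviceBusConnectionName-armvar": "servicebus-connection"
    },
    ...
    "properties": {
        "definition": {
            // the Logic App definition stays untouched and only uses Logic App expressions,
            // e.g. "uri": "@parameters('endpointUrl')"
        },
        "parameters": {
            // Logic App parameters value set, resolved at deployment time
            "endpointUrl": { "value": "[parameters('endpointUrl-armparam')]" },
            "endpointUsername": { "value": "[parameters('endpointUsername-armparam')]" },
            "endpointPassword": { "value": "[parameters('endpointPassword-armparam')]" },
            // the Service Bus connection value would use the -armvar variable, e.g.
            // "[resourceId('Microsoft.Web/connections', variables('serviceBusConnectionName-armvar'))]"
            "$connections": { "value": { } }
        }
    }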

As you can see, now we are only using ARM Template expressions and functions outside the Logic App definition. This is much easier to read and maintain. Don’t you think?

4. Prepare your ARM Parameters file for each environment.

Now that we have the ARM Template ready, we can prepare an ARM Parameters file for our deployment to each environment. Below I show an example of this.
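For instance, a parameters file for a Test environment could look like the one below (values are placeholders, and the names match the -armparam parameters used above):

    {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "endpointUrl-armparam": { "value": "https://test.endpoint.example/api/messages" },
            "endpointUsername-armparam": { "value": "test-user" },
            "endpointPassword-armparam": { "value": "<injected-by-the-release-pipeline>" }
        }
    }

In a real pipeline, secrets such as the password are better injected from VSTS secret variables or Azure Key Vault rather than stored in the file.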

5. Work on your CI/CD Pipeline.

Once we have the ARM Template and the ARM Parameter files, we can automate the deployment using our preferred tool. If you want to use VSTS, this is a good video that shows you how.

6. Deploy and enjoy.

Once you have deployed the ARM Template, you will be able to see the deployed Logic App. The Logic App parameters value set section is hidden in the designer, but if you run the Logic App, you will see how the values have been set accordingly.

Do you want this to be easier?

You might be thinking, just as I am, that this process is not as intuitive as it should be and is a bit time-consuming. If you wish to ask the product team to improve this, you might want to vote for these user voice requests on the links below:

Wrapping Up.

In this post, I’ve shown how to prepare your Logic Apps for CI/CD to multiple environments using ARM Templates in a more convenient way, i.e. without using ARM Template expressions or functions inside the Logic App definition. I believe that this approach makes the ARM Template of a Logic App much easier to read and to maintain.

This method not only avoids the need to write complex ARM Template expressions inside a Logic App definition, but also allows you to update your Logic App in the designer after it has been deployed using ARM Templates, and later update the ARM Template by simply updating the Logic App definition section. That’s much better, isn’t it?

I hope you’ve found this post handy, and it has helped you to streamline the configuration of your CI/CD pipelines when using Logic Apps.

Do you have a different preferred way of preparing your Logic Apps for CI/CD? Feel free to leave your comments or questions below,

Happy clouding and automating!

P.S. And remember: “I will never use ARM Template expressions inside a Logic App definition” 😉

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.


Monitoring Configuration Drifts on Azure with Event Grid and Logic Apps


Azure Event Grid is a first-class, hyperscale eventing platform with intelligent filtering that has recently been released in preview, and it is a real game changer for building event-driven serverless apps on Azure. There have been many other posts, including this one from my colleague Dan Toomey, which highlight all the magic, features and benefits of this new offering on Azure. Thus, I don’t intend to reiterate those in this post. My goal is, however, to show how to solve a requirement that I have heard more than a couple of times.

As mentioned here, there are three typical scenarios where Azure Event Grid comes in quite handy:

  1. Serverless Applications
  2. Application Integration, and
  3. Ops Automation

In this post, I will show how to build an Azure Ops Automation workflow to monitor configuration drifts on Azure resources using Event Grid and Logic Apps.

User Story

  • As an Op, I want to be notified whenever there is a configuration drift on my Azure Resources.

Many organisations and teams have implemented Continuous Integration / Continuous Delivery (CI/CD), and they want to keep all their infrastructure and solution configuration as code, e.g. in a VSTS Git repo. This has become quite a common practice, and the source of truth for all infrastructure and configuration as code must be in source control. Role-Based Access Control (RBAC) on Azure allows us to restrict changes to Azure resources to certain roles or users. Furthermore, Azure provides a way to lock resources at different levels (subscription, resource group or resource) to prevent users from deleting or modifying critical resources, thus avoiding configuration drifts.

However, in some exceptional cases, Ops or Admins might need to update the configuration without having the time to go through the process of updating the repo first and then triggering the CI/CD pipeline. These configuration drifts make the Git repo go out-of-sync, which results in a very high risk of subsequent releases overwriting changes in the environment with unintended side effects. Thus, there is a need to monitor the Azure resources for configuration drifts, so the source of truth can always be kept in-sync.


As you may have thought, the user story above is quite broad, so let’s reduce its scope for demonstration purposes to:

  • As an Op, I want to be notified whenever there is a configuration drift on my Azure Web App app settings.

For this scenario, we want to receive a notification whenever the app settings of an Azure App Service (Web App) are updated and are no longer aligned to the “desired state”.

To show how this can be achieved with Azure Event Grid (and the Resource Groups Publisher) and Logic Apps, I will build a Logic App workflow that is triggered whenever the app settings of an Azure Web App are modified and then validates whether these settings are different from the desired state.

Solution Prerequisites

This solution requires the following:

  1. An Azure App Service (Web App) with some app settings configured; in my case, I configured the app settings as follows:

  2. A JSON definition of the “Desired State” of the app settings stored in an Azure Storage Blob Container, in my scenario this is as below:
      "Setting-01": "expected-value-01",
      "Setting-02": "expected-value-02",
      "Setting-03": "expected-value-03"

Solution: A Logic App Workflow with an Event Grid Trigger

My solution implemented as a Logic App workflow with an Event Grid Trigger will follow the algorithm described below:

  1. Trigger the workflow when the app settings of the Web App are updated, using the Resource Groups Event Grid Event Publisher.
  2. Check the status of the Event; if it was not “Succeeded”, then Terminate the workflow. If it was “Succeeded”, continue the workflow.
  3. Get the Updated State of the app settings of the Web App using the Azure Resource Connector of Logic Apps.
  4. Get the Desired State from a Blob container.
  5. Compare the New State with the Desired State. If the New State is different to the Desired State, then send a notification with the details of the event.

Below, I describe the two main steps of the workflow in more detail.

1. Configuring the Logic Apps Event Grid Trigger

To configure the trigger, we need to specify:

  1. Azure Subscription
  2. Select the Resource Type, in this case Microsoft.Resources.resourceGroups, as we are monitoring Azure Resource Group changes.
  3. In the Resource Name, we enter the Resource Group name.
  4. In the Prefix Filter, we specify the ResourceId; in my case, we are monitoring the App Settings of an Azure App Service.
  5. In this case, we don’t need to set a Suffix Filter.
  6. And finally, we give a name to the topic subscription we are creating.

Once we execute a Logic App with this trigger, we should get a payload similar to the one shown below.
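The payload follows the Event Grid event schema for resource events; a trimmed and anonymised example of its shape is shown below (ids and names are made up):

    [
        {
            "topic": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group",
            "subject": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-web-app/config/appsettings",
            "eventType": "Microsoft.Resources.ResourceWriteSuccess",
            "eventTime": "2017-09-20T01:02:03.1234567Z",
            "id": "00000000-0000-0000-0000-000000000000",
            "data": {
                "operationName": "Microsoft.Web/sites/config/write",
                "status": "Succeeded",
                "resourceUri": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-web-app/config/appsettings",
                "resourceProvider": "Microsoft.Web",
                "subscriptionId": "00000000-0000-0000-0000-000000000000",
                "tenantId": "00000000-0000-0000-0000-000000000000"
            }
        }
    ]

The status property inside data is the one checked in step 2 of the algorithm above.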

2. Configuring the Logic App Azure Resource Manager Connector

Logic Apps provide an Azure Resource Manager connector, which allows us to perform CRUD operations on Azure via Azure Resource Manager. In our scenario, we are going to use the Invoke Resource Operation to list the app settings of the Web App. This returns the current (new) state of the Azure resource, so we can compare it to the Desired State later in the workflow. In your own scenario, you can make use of other operations, like List Resources by Resource Group, Read a Resource, or Read a Resource Group, to get the state of your Azure resources. The configuration applied for our scenario is as follows.
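Under the covers, listing the app settings of a Web App is a POST to the Azure Resource Manager API similar to the call below, which is what the Invoke Resource Operation action ends up performing once the subscription, resource group, resource provider, resource id and action name are filled in (the api-version might differ in your case):

    POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{webAppName}/config/appsettings/list?api-version=2016-08-01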

The Logic App Workflow

The implemented solution as a Logic App workflow is shown below. I hope it is self-explanatory. I included comments on each action to make it easier to follow.

Quite straightforward, isn’t it?

And in case you are wondering about the code behind, below is the same workflow showing the code view of the relevant actions.
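As a rough and abbreviated sketch only (the action names, expressions and property paths are illustrative rather than copied from the actual workflow), the two condition checks look along these lines:

    "Check_the_status_of_the_event": {
        "type": "If",
        "expression": "@equals(triggerBody()?['data']?['status'], 'Succeeded')",
        "actions": { },
        "else": {
            "actions": {
                "Terminate": { "type": "Terminate", "inputs": { "runStatus": "Cancelled" } }
            }
        },
        "runAfter": {}
    },
    "Compare_updated_state_with_desired_state": {
        "type": "If",
        "expression": "@not(equals(body('Get_desired_state_blob_content'), body('Invoke_resource_operation')?['properties']))",
        "actions": {
            "Send_notification_with_event_details": { }
        },
        "runAfter": { "Get_desired_state_blob_content": [ "Succeeded" ] }
    }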

If you want to have a look at the ARM template, including the full code behind of this Logic App, you can check it out here.

Wrapping Up

In this post, I’ve shown how to monitor configuration drifts on Azure resources using Event Grid and Logic Apps. We’ve seen the Resource Groups Event Publisher of Event Grid in action and how it comes in very handy for Ops Automation scenarios. Now, you can start monitoring changes on your Azure resources by just creating subscriptions with the corresponding prefix and suffix filters on Logic Apps. What other useful Ops Automation scenarios can you think of using Event Grid and Logic Apps?

Please feel free to add your comments and questions below,

Happy eventing!

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

When to Use an Azure App Service Environment v2 (App Service Isolated)


The Azure App Service Environment (ASE) is a premium feature offering of Azure App Services which is fully isolated, highly scalable, and runs on a customer’s virtual network. On an ASE you can host Web Apps, API Apps, Mobile Apps and Azure Functions. The first generation of the App Service Environment (ASE v1) was released in late 2015. Two significant updates were launched after that: in July 2016, they added the option of having an Internal Load Balancer; and in August of that year, an Application Gateway with a Web Application Firewall could be configured for the ASE. After all the feedback Microsoft had been receiving on this offering, they started working on the second generation of the App Service Environment (ASE v2), and in July 2017 it was released as generally available.

In my previous job, I wrote the post “When to Use an App Service Environment”, which referred to the first generation (ASE v1). I’ve decided to write an updated version of that post, mainly because that one has been one of my posts with the most comments and questions, and because I know the App Service Environment, also called App Service Isolated, will continue to grow in popularity. Even though the ASE v2 has been simplified, I still believe many people have questions about it, or want to make sure that they have no other option but to pay for this premium feature offering when they have certain needs while deploying their solutions on the Azure PaaS.

When you are planning to deploy Azure App Services, you have the option of creating them on a multi-tenant environment or on your own isolated (single-tenant) App Service Environment. If you want to understand in detail what they mean by “multi-tenant environment” for Azure App Services, I recommend you to read this article. When they refer to a “Scale-Unit” in that article, they are talking about this multi-tenant shared infrastructure. You could picture an App Service Environment having a very similar architecture, but with all the building blocks dedicated to you, including the Front-End, File Servers, API Controllers, Publisher, Data Roles, Database, Web Workers, etc.

In this post, I will try to summarise when it is required to use an App Service Environment (v2), and, in case you have an App Service Environment v1, why it makes a lot of sense to migrate it to the second generation.

App Service Environment v2 Pricing

Before getting too excited about the functionality and benefits of the ASE v2 or App Service Isolated, it’s important to understand its pricing model.

Even though they have abstracted much of the complexity of the ASE in the second generation, we still need to be familiar with the architecture of this offering to properly calculate the costs of the App Service Environment v2. To calculate the total cost of your ASE, you need to consider the App Service Environment Base Fee and the cost of the Isolated Workers.

The App Service Environment Base Fee covers the cost of all the infrastructure required to run your single-tenant and isolated Azure App Services, including load balancing, high availability, publishing, continuous delivery, app settings shared across all instances, deployment slots, management APIs, etc. None of your assemblies or code is executed on the instances which are part of this layer. The Isolated Workers, in turn, are the ones executing your Web Apps, API Apps, Mobile Apps or Functions. You decide the size and how many Isolated Workers you want to spin up, and thus the cost of the worker layer. Both layers are charged by the hour. Below, the prices for the Australian regions in Australian dollars are shown.

In Australia, the App Service Environment Base Fee is above $1,700 AUD per month, and an Isolated I1 instance is close to $500 AUD per month. This means that an entry-level ASE v2 with one Isolated Worker costs around $2,200 AUD per month, or above $26,000 AUD per year, which is very similar to the price of the ASE v1 in this region. This cost can easily escalate by scaling up or scaling out the ASE. It’s noteworthy that prices vary from region to region. For instance, according to the Azure pricing calculator, at the time of writing, the prices for the Australian regions are around 35% more expensive than those in West US 2. To calculate your own costs, in your region and in your currency, check the pricing calculator.

Moreover, the App Service Environment Base Fee is calculated based on the default configuration, which uses I1 instances for the ASE Front End and the scaling rule of adding one Front End instance for every 15 worker instances, as described in the Front End scale configuration page shown below. If you keep this configuration, the App Service Environment Base Fee stays the same, regardless of the number and size of workers. However, you can scale up the Front End instances to I2 or I3, or reduce the number of workers per Front End instance. This has an impact on the App Service Environment Base Fee: to calculate the extra cost, you need to add the cost of every additional core on top of the base configuration. Before changing the Front End scaling configuration, bear in mind that the Front End instances act only as a layer-seven load balancer (round robin) and perform SSL termination. All the compute of your App Services is executed on the worker instances.

With this price tag, the value and benefits of the ASE must be clear enough so that we can justify the investment to the business.

The benefits of the Azure App Service Isolated or App Service Environment v2.

To understand the benefits and advanced features of an App Service Environment v2, it’s worth comparing what we get with this premium offering against what we get by deploying an Azure App Service in the multi-tenant environment. This comparison is shown below.

For each capability below, the first bullet describes the multi-tenant environment and the second describes the App Service Isolated / App Service Environment v2.

Virtual Network (VNET) Integration
  • Multi-tenant environment: Yes. Azure App Services can be integrated to an Azure Virtual Network.
  • App Service Isolated / ASE v2: An ASE is always deployed in the customer’s Virtual Network.

Static Inbound IP Address
  • Multi-tenant environment: Yes. By default, Azure App Services get assigned a virtual IP address. However, this is shared with other App Services in that region. You can bind an IP-based SSL certificate to your App Service, which will give you a dedicated public inbound IP address.
  • App Service Isolated / ASE v2: ASEs provide a static virtual inbound IP address (VIP). This VIP can be public or private, depending on whether it is configured with an Internal Load Balancer (ILB) or not. More information on the ASE network architecture here.

Static Outbound IP Address
  • Multi-tenant environment: No. The outbound IP address of an App Service is not static, but it can be any address within a certain range, which is not static either.
  • App Service Isolated / ASE v2: ASEs provide a static public outbound IP address. More information here.

Connecting to Resources On-Premises
  • Multi-tenant environment: Yes. Azure App Service VNET integration provides the capability to access resources on-premises via a VPN over the public Internet. Additionally, Azure Hybrid Connections can be used to connect to resources on-premises without requiring major firewall or network configurations.
  • App Service Isolated / ASE v2: In addition to VPN over the public Internet and Hybrid Connections support, an ASE provides the ability to connect to resources on-premises via ExpressRoute, which provides a faster, more reliable and secure connectivity without going over the public Internet. Note: ExpressRoute has its own pricing model.

Private access only
  • Multi-tenant environment: No. App Services are always accessible via the public Internet. One way to restrict access to your App Service is using IP and Domain restrictions, but the App Service is still reachable from the Internet.
  • App Service Isolated / ASE v2: An ASE can be deployed with an Internal Load Balancer, which will lock down your App Services to be accessible only from within your VNET or via ExpressRoute or Site-to-Site VPN.

Control over inbound and outbound traffic
  • Multi-tenant environment: No.
  • App Service Isolated / ASE v2: Yes. An ASE is always deployed on a subnet within a VNET. Inbound and outbound traffic can be controlled using a network security group.

Web Application Firewall
  • Multi-tenant environment: Yes. Starting from mid-July 2017, Azure Application Gateway with Web Application Firewall supports App Services in a multi-tenant environment. More info on how to configure it here.
  • App Service Isolated / ASE v2: An Azure Application Gateway with Web Application Firewall can be configured to protect App Services on an ASE by preventing SQL injections, session hijacks, cross-site scripting attacks, and other attacks. Note: The Application Gateway with Web Application Firewall has its own pricing model.

SLA
  • Multi-tenant environment: 99.95%. No SLA is provided for Free or Shared tiers. App Services starting from the Basic tier provide an SLA of 99.95%.
  • App Service Isolated / ASE v2: App Services deployed on an ASE provide an SLA of 99.95%.

Instance Sizes / Scale-Up
  • Multi-tenant environment: Full range. App Services can be deployed on almost the full range of tiers, from Free to Premium v2.
  • App Service Isolated / ASE v2: 3 sizes. Workers on an ASE v2 support three sizes (Isolated I1, I2 and I3).

Scalability / Scale-Out
  • Multi-tenant environment: Maximum instances: Basic: 3, Standard: 10, Premium: 20.
  • App Service Isolated / ASE v2: The ASE v2 supports up to 100 Isolated Worker instances.

Deployment Time
  • Multi-tenant environment: Very fast. The deployment of new App Services on the multi-tenant environment is rather fast, usually less than 2 minutes. This can vary.
  • App Service Isolated / ASE v2: The deployment of a new App Service Environment can take between 60 and 90 minutes (tested on the Australian regions). This can vary. This is important to consider, particularly in cold DR scenarios.

Scaling Out Time
  • Multi-tenant environment: Very fast. Scaling out an App Service usually takes less than 2 minutes. This can vary.
  • App Service Isolated / ASE v2: Scaling out in an App Service Environment can take between 30 and 40 minutes (tested on the Australian regions). This can vary. This is something to consider when configuring auto-scaling.

Reasons to migrate your ASE v1 to an ASE v2

If you already have an App Service Environment v1, there are many reasons to migrate to the second generation, including:

  • More horsepower: With the ASE v2, you get Dv2-based machines, with faster cores, SSD storage, and twice the memory per core when compared to the ASE v1. You are practically getting double performance per core.
  • No stand-by workers for fault-tolerance: To provide fault-tolerance, the ASE v1 requires you to have one stand-by worker for every 20 active workers on each worker pool. You have to pay for those stand-by workers. ASE v2 has abstracted that for you, and you don’t need to pay for those.
  • Streamlined scaling: If you want to implement auto-scaling on an ASE v1, you have to manage scaling not only at the App Service Plan level, but at the Worker Pool level as well. For that, you have to use a complex inflation rate formula, which requires you to have some instances waiting and ready for whenever an auto-scale condition kicks in. This has its own cost implications. The ASE v2 allows you to auto-scale your App Service Plan the same way you do it with your multi-tenant App Services, without the complexity of managing worker pools and without paying for waiting instances.
  • Cost saving: Because you are getting an upgraded performance, you should be able to host the same workloads using half as much in terms of cores. Additionally, you don’t need to pay for fault-tolerance or auto-scaling stand-by workers.
  • Better experience and abstraction: Deployment and scaling of the ASE v2 is much simpler and friendlier than it was with the first generation.

Wrapping Up

So, coming back to the original question: when should we use an App Service Environment? When is it required, and when would it make sense to pay its premium price?

  • When we need to restrict the App Services to be accessible only from within the VNET or via ExpressRoute or Site-to-Site VPN, OR
  • When we require to control inbound and outbound traffic to and from our App Services OR
  • When we need a connection between the App Services and resources on-premises via a secure and reliable channel (ExpressRoute) without going via the public Internet OR
  • When we require much more processing power, i.e. scaling out to more than 20 instances OR
  • When a static outbound IP Address for the App Service is required.

What else would you consider when deciding whether to use an App Service Environment for your workload or not? Feel free to post your comments or feedback below!

Happy clouding!

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

Implementing the Correlation Identifier Pattern on Stateful Logic Apps using the Webhook Action


In many business scenarios, there is a need to implement long-running processes which first send a message to a second process and then pause and wait for an asynchronous response before they continue. Because this communication is asynchronous, the challenge is to correlate the response to the original request. The Correlation Identifier enterprise integration pattern targets this scenario.

Azure Logic Apps provides a stateful workflow engine that allows us to implement robust integration workflows quite easily. One of the workflow actions in Logic Apps is the webhook action, which can be used to implement the Correlation Identifier pattern. One typical scenario in which this pattern can be used is when an approval step with a custom API (with a behaviour similar to the Send Approval Email connector) is required in a workflow.

In this post, I will show how to implement the Correlation Identifier enterprise integration pattern on Logic Apps leveraging the webhook action.

Some background information

The Correlation Identifier Pattern

Adapted from Enterprise Integration Patterns

The Correlation Identifier enterprise integration pattern proposes to add a unique id to the request message on the requestor end and return it as the correlation identifier in the asynchronous response message. This way, when the requestor receives the asynchronous response, it knows which request that response corresponds to. Depending on the functional and non-functional requirements, this pattern can be implemented in a stateless or stateful manner.

Understanding webhooks

A webhook is a service that is triggered on a particular event and results in an Http call to a RESTful subscriber. A much more comprehensive definition can be found here. You might be familiar with the configuration of webhooks with static subscribers. In a previous post, I showed how to trigger a Logic App from an SMS message with a Twilio webhook. That webhook sends all events to the same Http endpoint, i.e. a static subscriber.

The Correlation Identifier pattern on Logic Apps

If you have used the Send Approval Email Logic App connector, it implements the Correlation Identifier pattern out-of-the-box in a stateful manner. When this connector is used in a Logic App workflow, an email is sent and the workflow instance waits for a response. Once the email recipient clicks on a button in the email, the particular workflow instance receives an asynchronous callback with a payload containing the user selection, and it continues to the next step. This approval email comes in very handy in many cases; however, a custom implementation of this pattern might be required in different business scenarios. The webhook action allows us to have a custom implementation of the Correlation Identifier pattern.

The Logic Apps Webhook Action

To implement the Correlation Identifier pattern, it’s important that you have a basic understanding of the Logic Apps webhook action. Justin wrote some handy notes about it here. The webhook action of Logic Apps works with an instance-based, i.e. dynamic, webhook subscription. Once executed, the webhook action generates an instance-based callback URL for the dynamic subscription. This URL is to be used to send a correlated response that triggers the continuation of the corresponding workflow. This applies the Return Address integration pattern.

We can implement the Correlation Identifier pattern by building a Custom API Connector for Logic Apps following the webhook subscribe and unsubscribe pattern of Logic Apps. However, it’s also possible to implement this pattern without the need of writing a Custom API Connector, as I’ll show below.


To illustrate the pattern, I’ll be using a fictitious company called FarmToTable. FarmToTable provides delivery of fresh produce by drone. Consumers subscribe to the delivery service by creating their personalised list of produce to be delivered on a weekly basis. FarmToTable needs to implement an SMS confirmation service, so that an SMS message is sent to each consumer the day before the scheduled delivery date. After receiving the text message, the customer must confirm within 12 hours whether they want the delivery or not, so that the delivery can be arranged.

The Solution Architecture

As mentioned above, the scenario requires sending an SMS text message and waiting for an SMS response. For sending and receiving the SMS, we will be using Twilio. More details on working with Logic Apps and Twilio are in one of my previous posts. Twilio provides webhooks that are triggered when SMS messages are received. The Twilio webhooks only allow static subscriptions, i.e. calling one single Http endpoint. Nevertheless, the webhook action of Logic Apps requires the webhook subscribe and unsubscribe pattern, which works with an instance-based subscription. Thus, we need to implement a wrapper for the required subscribe/unsubscribe pattern.

The architecture of this pattern is shown in the figure below and explained after it.

Components of the solution:

  1. Long-running stateful workflow. This is the Logic App that controls the main workflow, sends a request, then pauses and waits for an asynchronous response. This is implemented by using the webhook action.
  2. Subscribe/Unsubscribe Webhook Wrapper. In our scenario, we are working with a third-party service (Twilio) that only supports webhooks with static subscriptions; thus, we need to create this wrapper. This wrapper is composed of 4 different parts:
  • Subscription store: A database to store the unique message Id and the instance-based callback URL provided by the webhook action. In my implementation, I’m using Azure Cosmos DB for this. Nevertheless, you can use any other suitable alternative. Because the only message id we can send to Twilio and get back is the phone number, I’m using this as my correlation identifier. We can assume that for this scenario the phone number is unique during the day.
  • Subscribe and Start Request Processing API: this is a RESTful API that is in charge of starting the processing of the request and storing the subscription. I’m implementing this API with a Logic App, but you can use an Azure Function, an API App or a Custom Api App connector for Logic App.
  • Unsubscribe and Cancel Request Processing API: this is another RESTful API that is only going to be called if the webhook action on the main workflow times out. This API is in charge of cancelling the processing and deleting the subscription from the store. The unsubscribe step has a similar purpose to the CancellationToken structure used in C# async programming. In our scenario, there is nothing to cancel though. Like the previous API, I’m implementing this with a Logic App, but you can use different technologies.
  • Instance-based webhook: this webhook is to be triggered by the third-party webhook with a static subscription. Once triggered, this Logic App is in charge of getting the instance-based callback URL from the store and invoking it. After making the call back to the main workflow instance, the subscription is to be deleted.

The actual solution

To implement this solution, I’m going to follow the steps described below:

1. Configure my Twilio account to be able to send and receive SMS messages. More details here.

2. Create a Service Bus Namespace and 2 queues. For my scenario, I’m using one inbound queue (ScheduledDeliveriesToConfirm) and one outbound queue (ConfirmedScheduledDeliveries). For your own scenarios, you can use other triggers and outbound protocols.

3. Create a Cosmos Db collection to store the instance-based webhook subscriptions. More details on how to work with Cosmos Db here.

  • Create Cosmos Db account (with the Document DB API).
  • Create database
  • Create collection.

4. Create the “Subscribe and Start Request Processing API”. I’m using a Logic App workflow to implement this API as shown below. I hope the steps with their comments are self-explanatory.

  • The workflow is Http triggered. It expects, as the request body, the scheduled delivery details and the instance-based callback URL of the calling webhook action.
  • The provided Http trigger URL is to be configured later in the webhook action subscribe Uri of the main Logic App.
  • It stores the correlation on Cosmos Db. More information on the Cosmos Db connector here.
  • It starts the request processing by calling the Twilio connector to send the SMS message.

The expected payload for this API is like the one below. This payload is to be sent by the webhook action subscribe call on the main Logic App:

    "callbackUrl": "",
    "scheduledDelivery": {
        "deliveryId": "2c5c8390-b6c8-4274-b785-33121b01e219",
        "customer": "Paco de la Cruz",
        "customerPreferredName": "Paco",
        "phone": "+61000000000",
        "orderName": "Seasonal leafy greens and fruits",
        "deliveryAddressName": "Home",
        "deliveryDate": "2017-07-20",
        "deliveryTime": "07:30",
        "createdDateTime": "2017-07-19T09:10:03.209"

You can have a look at the code behind here. Please use it just as a reference, as it hasn’t been refactored for deployment.

5. Create the “Unsubscribe and Cancel Request Processing API”. I used another Logic App workflow to implement this API. This API is only going to be called if the webhook action on the main workflow times out. The workflow is shown below.

  • The workflow is Http triggered. It expects the message id as the request body, so the corresponding subscription can be deleted.
  • The provided Http trigger URL is to be configured later in the webhook action unsubscribe Uri of the main Logic App.
  • It deletes the subscription from Cosmos Db. More information on the Cosmos Db connector here.

The expected payload for this API is quite simple, as shown below. This payload is to be sent by the webhook action unsubscribe call on the main Logic App:

    "id": "+61000000000"

The code behind is published here. Please use it just as a reference, as it hasn’t been refactored to be deployed.

6. Create the Instance-based Webhook. I’m using another Logic App to implement the instance-based webhook as shown below.

  • The workflow is Http triggered. It’s to be triggered by the Twilio webhook.
  • The provided Http trigger URL is to be configured later in the Twilio webhook.
  • It gets the message Id (phone number) from the Twilio message.
  • It then gets the instance-based subscription (callback URL) from Cosmos Db.
  • Then, it posts the received message to the corresponding instance of the main Logic App workflow by using the correlated callback URL.
  • After making the callback, it deletes the subscription from Cosmos Db.

The code behind for this workflow is here. Please use it just as a reference, as it is not ready to be deployed.

7. Configure the Twilio static webhook. Now, we have to configure the Twilio webhook to call the Logic App created above when an SMS message is received. Detailed instructions in my previous post.

8. Create the long-running stateful workflow. Once we have the implemented the subscribe/unsubscribe webhook wrapper required for the Logic App webhook action, we can start creating the long-running stateful workflow. This is shown below.

In order to trigger the Unsubscription API, the timeout property of the webhook action must be configured. This can be specified under the settings of the action. The duration is to be configured in the ISO 8601 duration format. If you don’t want to resend the request after the timeout, you should turn off the retry policy.
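As a rough sketch (the URLs, action name and property paths are placeholders), the webhook action in code view ends up looking like the fragment below, with the 12-hour limit expressed as PT12H and the instance-based callback URL passed in the subscribe body via listCallbackUrl():

    "Webhook_SMS_confirmation": {
        "type": "HttpWebhook",
        "runAfter": {},
        "limit": { "timeout": "PT12H" },
        "inputs": {
            "subscribe": {
                "method": "POST",
                "uri": "https://<subscribe-and-start-request-processing-logic-app-url>",
                "body": {
                    "callbackUrl": "@{listCallbackUrl()}",
                    "scheduledDelivery": "@triggerBody()"
                }
            },
            "unsubscribe": {
                "method": "POST",
                "uri": "https://<unsubscribe-and-cancel-request-processing-logic-app-url>",
                "body": {
                    "id": "@{triggerBody()?['phone']}"
                }
            }
        }
    }

The property path used for the phone number depends on how the Service Bus message content is parsed in your own workflow.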

  • The workflow is triggered by messages on the ScheduledDeliveriesToConfirm Service Bus queue.
  • Then the webhook action:
    • Sends the scheduled delivery message and the corresponding instance-based callback URL to the Subscribe and Start Request Processing Logic App.
    • Waits for the callback from the Instance-based webhook. This callback receives, as an Http POST, the response sent by the customer. If a response is received before the timeout limit, the action succeeds and the workflow continues to the next action.
    • If the webhook action times out, it calls the Unsubscribe and Cancel Request Processing Logic App and sends the message id (phone number); and the action fails so the workflow does not continue. However, if required, you could continue the workflow by configuring the RunAfter property of the subsequent action.
  • If a response is received, the workflow continues assessing the response. If the response is ‘YES’, it sends the original message to the ConfirmedScheduledDeliveries queue.

The code behind of this workflow is available here. Please use it just as a reference only, as it hasn’t been refactored for deployment.

Now, we have finished implementing the whole solution! 🙂 You can have a look at all the Logic Apps JSON definitions in this repository.


In this post, I’ve shown how to implement the Correlation Identifier pattern using a stateful Logic App. To illustrate the pattern, I implemented an approval step in a Logic App workflow with a custom API. For this, I used Twilio, a third-party service, that offers a webhook with a static subscription; and created a wrapper to implement the subscribe/unsubscribe pattern, including an instance-based webhook to meet the Logic Apps webhook action requirements.

I hope you find this post useful whenever you have to add a custom approval step or correlate asynchronous messages using Logic Apps, or that I’ve given you an overview of how to enable the correlation of asynchronous messages in your own workflow or integration scenarios.

Feel free to add your comments or questions below, and happy clouding!

Cross-posted on Mexia Blog.
Follow me on @pacodelacruz

Azure Functions or WebJobs? Where to run my background processes on Azure?

Originally posted on Kloud’s blog.




Azure WebJobs have been a quite popular way of running background processes on Azure. They have been around since early 2014. When they were released, they were a true PaaS alternative to Cloud Services Worker Roles bringing many benefits like the WebJobs SDK, easy configuration of scalability and availability, a dashboard, and more recently all the advantages of Azure Resource Manager and a very flexible continuous delivery model. My colleague Namit previously compared WebJobs to Worker Roles.

Meanwhile, Azure Functions were announced earlier this year (March 2016). Azure Functions, or “Function Apps” as they appear on the Azure Portal, are Microsoft’s Function as a Service (FaaS) offering. With them, you can create microservices or small pieces of code which can run synchronously or asynchronously as part of composite and distributed cloud solutions. Even though they are still in the making (at the time of this writing they…

View original post 1,584 more words

Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Originally posted on Kloud’s blog.



Azure Web Apps or App Services are quite flexible regarding deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source controls like VSTS, GitHub, or BitBucket, your on-premises Git, multiple IDEs including Visual Studio, Eclipse and Xcode, and using MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be some scenarios where you just need to update some reference files and don’t need to build or update the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario where we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication. However, we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy…

View original post 733 more words

Monitoring Azure WebJobs Health with Application Insights

Originally posted on Kloud’s blog.



Azure WebJobs have been available for quite some time and have become very popular for running background tasks with programs or scripts. WebJobs are deployed as part of Azure App Services (Web Apps), which include their companion site Kudu. Kudu provides a lot of features, including a REST API, which provides operations for source code management (SCM), the virtual file system, deployments, accessing logs, and WebJob management as well. The Kudu WebJobs API provides different operations, including listing WebJobs, uploading a WebJob, or triggering one. One of the operations of this API allows you to get the status of a specific WebJob by name.

Another quite popular Azure service is Application Insights. It provides functionality to monitor and diagnose application issues and to analyse usage and performance as well. One of these features is web tests, which provide a way to monitor the availability and health…

View original post 1,342 more words

When to use an Azure App Service Environment?

Originally posted on Kloud’s blog.



An Azure App Service Environment (ASE) is a premium Azure App Service hosting environment which is dedicated, fully isolated, and highly scalable. It clearly brings advanced features for hosting Azure App Services which might be required in different enterprise scenarios. But as this is a premium service, it comes with a premium price tag. Due to its cost, a proper business case and justification are to be prepared before architecting a solution based on this interesting PaaS offering on Azure.

When planning to deploy Azure App Services, an organisation has the option of creating an Azure App Service Plan and hosting them there. This might be good enough for most cases. However, when higher demands of scalability and security are present, a dedicated and fully isolated App Service Environment might be necessary.

Below, I will summarise the information required to make a decision regarding the need of using an App…

View original post 1,534 more words