Azure Durable Functions vs Logic Apps: How to choose?



Azure currently has two serverless compute offerings: Azure Logic Apps and Azure Functions. Until recently, one could argue that Azure Functions were code triggered by events while Logic Apps were event-triggered workflows. However, that changed with the release of Azure Durable Functions, which recently reached General Availability. Durable Functions is an extension of Azure Functions that allows you to build stateful, serverless, code-based workflows. With Azure Logic Apps, you can create stateful and serverless workflows through a visual designer.

If you are architecting a solution that requires serverless, stateful workflows on Azure, you might be wondering how to choose between Azure Durable Functions and Logic Apps. This post aims to shed some light on how to select the platform that best suits your needs.


Developer Experience

For some people, the development experience might be a key factor when choosing one platform over the other. The development experience of the two platforms is quite different, as described below:

Paradigm
  • Durable Functions: imperative code.
  • Logic Apps: declarative code.

Languages
  • Durable Functions: at the time of writing, only C# is officially supported. However, you can make them work with F#, and JavaScript support is currently in preview.
  • Logic Apps: workflows are implemented using a visual designer on the Azure Portal or in Visual Studio. Behind the visual representation of the workflow, there is the JSON-based Workflow Definition Language.

Offline Development
  • Durable Functions: can be developed offline with the local runtime and the Storage Emulator.
  • Logic Apps: you need to be online with access to Azure to develop your workflows.

Durable Functions allow you to use imperative code you might already be familiar with, but you still need to understand the constraints of this extension. Logic Apps might require you to learn a new development environment, but it is relatively straightforward and quite handy for scenarios where less coding is preferred.
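To picture the imperative paradigm, here is a minimal function-chaining orchestration sketch using the Durable Functions v1 C# programming model. The orchestration and activity names ("OrderOrchestration", "ValidateOrder", "ChargePayment", "SendReceipt") are hypothetical, and the Microsoft.Azure.WebJobs.Extensions.DurableTask package is assumed:

```csharp
// Minimal function-chaining orchestration sketch (Durable Functions v1.x, C#).
// The activity names used here are hypothetical, for illustration only.
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class OrderWorkflow
{
    [FunctionName("OrderOrchestration")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Each await is a checkpoint; after resuming, the orchestrator
        // replays up to the last completed activity to rebuild its state.
        var order = context.GetInput<string>();
        var validated = await context.CallActivityAsync<string>("ValidateOrder", order);
        var receipt = await context.CallActivityAsync<string>("ChargePayment", validated);
        return await context.CallActivityAsync<string>("SendReceipt", receipt);
    }
}
```

Each activity would be implemented as a separate Azure Function with the ActivityTrigger attribute, deployed in the same Function App.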


Connectivity

Logic Apps is an integration platform, thus it offers better connectivity than Azure Durable Functions. Some details to consider are described below.

Connectors or Bindings
  • Durable Functions: the list of supported bindings is here. Some of these bindings can trigger a function, while others serve as inputs or outputs. The list of bindings is growing, especially for version 2 of the Functions runtime. Additionally, as Azure Functions can be triggered by Event Grid events, any Event Grid publisher can potentially become a trigger of Azure Functions.
  • Logic Apps: provide more than 200 connectors, and the list just keeps growing. Among these, there are protocol connectors, Azure Service connectors, Microsoft SaaS connectors, and third-party SaaS connectors. Some of these connectors can trigger Logic App workflows, while others support getting and pushing data as part of the workflow.

Custom Connectors
  • Durable Functions: you can create custom input and output bindings for Azure Functions.
  • Logic Apps: allow you to build custom connectors.

Hybrid Connectivity
  • Durable Functions: Azure Functions hosted on an App Service Plan (not the consumption plan) support Hybrid Connections. Hybrid Connections allow you to have a TCP tunnel to access on-premises systems and services securely. Additionally, Azure Functions deployed on an App Service Plan can be integrated into a VNET, or deployed on a dedicated App Service Environment, to access resources and services on-premises.
  • Logic Apps: offer the On-Premises Data Gateway which, through an agent installed on-premises, allows you to connect to a list of supported protocols and applications. It's worth mentioning that the product team is currently working on Isolated Logic Apps, which will be deployed on your own VNET and thus will have access to on-premises resources, unlocking many scenarios.



Workflow Engine

Both workflow engines are quite different. Even though the underlying implementation is abstracted away from us, it's important to know how they work internally when architecting enterprise-grade solutions. The way both engines work, and how some workflow patterns are supported, is described below.

Trigger
  • Durable Functions: a workflow instance can be instantiated by any Azure Function using the DurableOrchestrationClient.
  • Logic Apps: can be initiated by the many different triggers available from the connectors.

Actions being orchestrated
  • Durable Functions: can orchestrate Activity Functions (with the ActivityTrigger attribute). Those Activity Functions can in turn call other services using any of the supported bindings. Additionally, orchestrations can call sub-orchestrations. At the time of writing, an orchestration function can only call activity functions defined in the same Function App, which could potentially hinder reusability of services.
  • Logic Apps: many different workflow actions can be orchestrated. Logic Apps workflows can call actions of the more than 200 connectors, workflow steps, other Azure Functions, other Logic Apps, etc.

Flow Control
  • Durable Functions: the workflow's flow is controlled using standard code constructs, e.g. conditions, switch statements, loops, try-catch blocks, etc.
  • Logic Apps: you can control the flow with conditional statements, switch statements, loops and scopes, and control activity chaining with the runAfter property.

Chaining Pattern
  • Durable Functions: functions can be executed in a sequence, and the outputs of one can be inputs of subsequent ones.
  • Logic Apps: actions can easily be chained in a workflow. Additionally, the runAfter property allows executing actions based on the status of a previous action or scope.

Fan-Out / Fan-In Pattern
  • Durable Functions: functions can be executed in parallel, and the workflow can continue when all or any of the branches finish.
  • Logic Apps: you can fan out and fan in actions in a workflow by implementing parallel branches, or ForEach loops running in parallel.

Async HTTP APIs and Get Status Pattern
  • Durable Functions: client applications or services can invoke a Durable Functions orchestration asynchronously via HTTP APIs, and later get the orchestration status to learn when the operation completes. Additionally, you can set a custom status value that can be queried by external clients.
  • Logic Apps: client applications or services can call the Logic Apps Management API to get the instance run status. However, either the client has to have access to this API, or you would need to implement a wrapper for it. A custom status value is not currently supported out-of-the-box; if required, you would need to persist it in a separate store and expose it with a custom API.

Approval Workflow (Human Interaction) Pattern
  • Durable Functions: the Human Interaction (Approval Workflow) pattern can be implemented as described here.
  • Logic Apps: approval workflows can be implemented with the out-of-the-box connectors, or as custom ones, as described here.

Correlation Pattern
  • Durable Functions: the Correlation pattern can be implemented not only when there is human interaction, but for broader scenarios, in the same way as described above.
  • Logic Apps: the Correlation pattern can easily be implemented using the webhook action or with Service Bus sessions.

Programmatic instance management
  • Durable Functions: client applications or services can monitor and terminate instances of Durable Functions orchestrations via the API.
  • Logic Apps: client applications or services can call the Logic Apps Management API to monitor and terminate instances of Logic App workflows. However, either the client has to have access to this API, or you would need to implement a wrapper.

Shared State across instances
  • Durable Functions: support what they call "eternal orchestrations", a way to implement flexible loops with state across iterations without storing the complete iteration run history. However, this implementation has some important limitations, and the product team suggests using it only for monitoring scenarios that require flexible recurrence and lifetime management, and where the loss of messages is acceptable.
  • Logic Apps: do not support eternal orchestrations. However, different strategies can be used to implement endless loops with state across instances, e.g. making use of a trigger state, or storing the state in an external store and passing it from one instance to the next in a singleton workflow.

Concurrency Control
  • Durable Functions: concurrency throttling is supported.
  • Logic Apps: concurrency control can be configured at the workflow level or at the loop level.

Lifespan
  • Durable Functions: one instance can run without defined time limits.
  • Logic Apps: one instance can run for up to 90 days.

Error Handling
  • Durable Functions: implemented with the constructs of the language used in the orchestration.
  • Logic Apps: retry policies and catch strategies can be implemented.

Orchestration Engine
  • Durable Functions: orchestration functions and activity functions may be running on different VMs. However, Durable Functions ensures reliable execution of orchestrations. To support this, checkpointing is implemented at each await statement. Additionally, every time the orchestration resumes from an await call, it replays until it reaches the last checkpointed activity, to rebuild the in-memory state of the instance. For high-throughput scenarios, you can enable extended sessions.
  • Logic Apps: the runtime engine breaks down the different tasks based on the workflow definition. These tasks are distributed among different workers. The engine makes sure that each task is executed at least once, and that tasks are not executed until their dependencies have finished with the expected status.

Some additional constraints and considerations
  • Durable Functions: the orchestration function has to be implemented with some constraints in mind: the code must be deterministic and non-blocking, async calls can only be made through the DurableOrchestrationContext, and infinite loops must be avoided.
  • Logic Apps: controlling the workflow execution flow sometimes needs advanced constructs and operations that can be complex to implement in Logic Apps. The Workflow Definition Language offers some functions that we can leverage but, at times, we need to make use of Azure Functions to perform advanced operations required as part of the workflow. Additionally, you need to consider some limits of Logic Apps.
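As a sketch of the fan-out / fan-in pattern described in the table above, the following orchestrator (Durable Functions v1.x, C#) schedules a hypothetical "ProcessItem" activity for each input in parallel and resumes once all branches complete; the names are illustrative only:

```csharp
// Fan-out / fan-in sketch (Durable Functions v1.x, C#).
// "ProcessItem" is a hypothetical activity function returning an int.
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class FanOutFanIn
{
    [FunctionName("ParallelProcessing")]
    public static async Task<int> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var items = context.GetInput<string[]>();

        // Fan-out: schedule all activities without awaiting them individually.
        var tasks = items.Select(i => context.CallActivityAsync<int>("ProcessItem", i));

        // Fan-in: the orchestration resumes when every parallel branch completes.
        var results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}
```

In Logic Apps, the equivalent would be parallel branches or a ForEach loop with concurrency enabled, with a join step controlled via runAfter.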


Deployment

The deployment of these two platforms also has its differences, as detailed below.

CI/CD
  • Durable Functions: builds and deployments can be automated using VSTS build and release pipelines. Additionally, other build and release management tools can be used.
  • Logic Apps: are deployed using ARM templates, as described here.

Versioning
  • Durable Functions: a versioning strategy is very important. If you introduce breaking changes in a new version of your workflow, in-flight instances will break and fail. You can find more information and mitigation strategies here.
  • Logic Apps: keep a version history of all workflows saved or deployed. Running instances will continue running based on the version that was active when they started.

Runtime
  • Durable Functions: Azure Functions can not only run on Azure, but can also be deployed on-premises, on Azure Stack, and on containers.
  • Logic Apps: can only run on Azure.


Management and Monitoring

How you manage and monitor your solutions on each platform is quite different. Some of the features are described below.

Tracing and Logging
  • Durable Functions: the orchestration activity is tracked by default in Application Insights. Furthermore, you can implement logging to App Insights.
  • Logic Apps: the run history and trigger history are logged by default. Additionally, you can enable diagnostic logging to send additional details to Log Analytics. You can also make use of trackedProperties to enrich your logging.

Monitoring
  • Durable Functions: to monitor workflow instances, you need to use the Application Insights Query Language to build your own custom queries and dashboards.
  • Logic Apps: the Logic Apps blade and the Log Analytics workspace solution for Logic Apps provide very rich and friendly visual tools for monitoring. Furthermore, you can build your own monitoring dashboards and queries.

Resubmitting
  • Durable Functions: there is no out-of-the-box functionality to resubmit failed messages.
  • Logic Apps: failed instances can easily be resubmitted from the Logic Apps blades or from the Log Analytics workspace.


Pricing

Another important consideration when choosing the right platform is pricing. Even though both platforms offer a serverless option where you only pay for what you use, there are some differences to consider, as described below.

Serverless
  • Durable Functions: in the consumption plan, you pay per second of resource consumption and per number of executions. More details are described here.
  • Logic Apps: you pay per action and trigger execution (skipped, failed or succeeded). There is also a marginal cost for storage. If you need B2B integration, XML schemas and maps, or Liquid templates, you need to pay for an Integration Account. More details here.

Instance Based
  • Durable Functions: can also be deployed on App Service Plans or App Service Environments, where you pay per instance.
  • Logic Apps: at the moment, there is no option to run Logic Apps on your own dedicated instances. However, this will change in the future.


Wrapping Up

This post has contrasted in detail the capabilities and features of both serverless workflow platforms available on Azure. Which platform is better suited really depends on your functional and non-functional requirements, and also on your preferences. As a wrap-up, we could say that:

Logic Apps are better suited when:

  • Building integration solutions and leveraging the very extensive list of connectors would reduce the time-to-market and ease connectivity,
  • Visual tools to manage and troubleshoot workflows are required,
  • It’s ok to run only on Azure, and
  • A visual designer and less coding are preferred.

And Durable Functions are a better fit if:

  • The list of available bindings is sufficient to meet the requirements,
  • The logging and troubleshooting capabilities are sufficient, and you can build your custom monitoring tools,
  • You require them to run not only on Azure, but on Azure Stack or Containers, and
  • You prefer to have all the power and flexibility of a robust programming language.

It’s also worth mentioning that, in most cases, the operational costs of Logic Apps tend to be higher than those of Durable Functions, though that depends on each case. And for enterprise-grade solutions, you should not decide on a platform based on price alone; you have to consider all the requirements and the value provided by the platform.

Having said all this, you can always mix and match Logic Apps and Azure Functions in the same solution so you can get the best of both worlds. Hopefully this post has given you enough information to better choose the platform for your next cloud solution.

Happy clouding!

Cross-posted on Deloitte Platform Engineering Blog
Follow me on @pacodelacruz


When to Use an Azure App Service Environment v2 (App Service Isolated)


The Azure App Service Environment (ASE) is a premium feature offering of the Azure App Services which is fully isolated, highly scalable, and runs on a customer’s virtual network. On an ASE you can host Web Apps, API Apps, Mobile Apps and Azure Functions. The first generation of the App Service Environment (ASE v1) was released in late 2015. Two significant updates followed: in July 2016, they added the option of having an Internal Load Balancer, and in August of that year, an Application Gateway with a Web Application Firewall could be configured for the ASE. After all the feedback Microsoft had been receiving on this offering, they started working on the second generation of the App Service Environment (ASE v2), and in July 2017 it was released as Generally Available.

In my previous job, I wrote the post “When to Use an App Service Environment“, which referred to the first generation (ASE v1). I’ve decided to write an updated version of that post, mainly because that one attracted many comments and questions, and I know the App Service Environment, also called App Service Isolated, will continue to grow in popularity. Even though the ASE v2 has been simplified, I still believe many people have questions about it, or want to make sure they have no option but to pay for this premium feature offering to meet certain needs when deploying their solutions on the Azure PaaS.

When you are planning to deploy Azure App Services, you have the option of creating them in a multi-tenant environment or in your own isolated (single-tenant) App Service Environment. If you want to understand in detail what is meant by “multi-tenant environment” for Azure App Services, I recommend reading this article. When it refers to a “Scale-Unit”, it is talking about this multi-tenant shared infrastructure. You can picture an App Service Environment as having a very similar architecture, but with all the building blocks dedicated to you, including the Front End, File Servers, API Controllers, Publisher, Data Roles, Database, Web Workers, etc.

In this post, I will try to summarise when it is required to use an App Service Environment (v2) and, in case you have an App Service Environment v1, why it makes a lot of sense to migrate it to the second generation.

App Service Environment v2 Pricing

Before getting too excited about the functionality and benefits of the ASE v2 or App Service Isolated, it’s important to understand its pricing model.

Even though much of the complexity of the ASE has been abstracted away in the second generation, we still need to be familiar with the architecture of this feature offering to properly calculate the costs of the App Service Environment v2. To calculate the total cost of your ASE, you need to consider the App Service Environment Base Fee and the cost of the Isolated Workers.

The App Service Environment Base Fee covers the cost of all the infrastructure required to run your single-tenant and isolated Azure App Services, including load balancing, high availability, publishing, continuous delivery, app settings shared across all instances, deployment slots, management APIs, etc. None of your assemblies or code is executed on the instances that are part of this layer. The Isolated Workers, in turn, are the ones executing your Web Apps, API Apps, Mobile Apps or Functions. You decide the size and the number of Isolated Workers you want to spin up, and thus the cost of the worker layer. Both layers are charged by the hour. Below, the prices for the Australian regions are shown in Australian dollars.

In Australia, the App Service Environment Base Fee is above $1,700 AUD per month, and an Isolated I1 instance is close to $500 AUD per month. This means that the entry level of an ASE v2 with one Isolated Worker costs around $2,200 AUD per month, or above $26,000 AUD per year, which is very similar to the price of the ASE v1 in this region. This cost can easily escalate by scaling up or scaling out the ASE. It’s noteworthy that prices vary from region to region. For instance, according to the Azure pricing calculator, at the time of writing, prices for the Australian regions are around 35% higher than those in West US 2. To calculate your own costs, in your region and in your currency, check the pricing calculator.
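As a back-of-the-envelope sketch of the arithmetic above (the AUD figures are the approximate ones quoted in this post, not official rates):

```csharp
// Rough ASE v2 monthly cost estimate, using the approximate AUD figures above.
public static class AseCostEstimate
{
    const decimal BaseFeePerMonthAud = 1700m;    // ASE Base Fee (approximate)
    const decimal IsolatedI1PerMonthAud = 500m;  // one I1 Isolated Worker (approximate)

    public static decimal MonthlyCostAud(int i1Workers) =>
        BaseFeePerMonthAud + i1Workers * IsolatedI1PerMonthAud;
}
// MonthlyCostAud(1) = 2200 AUD per month, i.e. roughly 26,400 AUD per year.
```

For your own region and currency, replace the constants with the figures from the Azure pricing calculator.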

Moreover, the App Service Environment Base Fee is calculated based on the default configuration, which uses I1 instances for the ASE Front End, with the scaling rule of adding one Front End instance for every 15 worker instances, as described in the Front End scale configuration page shown below. If you keep this configuration, the App Service Environment Base Fee stays the same, regardless of the number and size of workers. However, you can scale up the Front End instances to I2 or I3, or reduce the number of workers per Front End instance; this has an impact on the App Service Environment Base Fee. To calculate the extra cost, you need to add the cost of every additional core on top of the base configuration. Before changing the Front End scaling configuration, bear in mind that the Front End instances act only as a layer-seven load balancer (round robin) and perform SSL termination; all the compute of your App Services is executed on the worker instances.
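The default Front End scale rule described above can be sketched as follows; the minimum of two Front End instances is my assumption rather than something stated in this post, so verify it against the ASE documentation:

```csharp
// Sketch of the default ASE v2 Front End scale rule: one Front End
// instance for every 15 worker instances. The minimum of two Front End
// instances is an assumption, not a figure from this post.
using System;

public static class FrontEndScale
{
    public static int FrontEndInstances(int workers) =>
        Math.Max(2, (int)Math.Ceiling(workers / 15.0));
}
```

Any Front End instances beyond the default configuration, or larger than I1, add their per-core cost on top of the Base Fee.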

With this price tag, the value and benefits of the ASE must be clear enough to justify the investment to the business.

The benefits of the Azure App Service Isolated or App Service Environment v2

To understand the benefits and advanced features of an App Service Environment v2, it’s worth comparing what we get with this premium offering against what we get by deploying an Azure App Service in the multi-tenant environment. This comparison is shown below.

In the comparison below, “Multi-tenant” refers to App Services deployed in the multi-tenant environment, and “ASE” to App Services deployed on an App Service Environment v2 (App Service Isolated).

Virtual Network (VNET) Integration
  • Multi-tenant: Yes. Azure App Services can be integrated with an Azure Virtual Network.
  • ASE: Yes. An ASE is always deployed in the customer’s Virtual Network.

Static Inbound IP Address
  • Multi-tenant: Yes. By default, Azure App Services get assigned a virtual IP address. However, this is shared with other App Services in that region. You can bind an IP-based SSL certificate to your App Service, which will give you a dedicated public inbound IP address.
  • ASE: Yes. ASEs provide a static virtual inbound IP address (VIP). This VIP can be public or private, depending on whether the ASE is configured with an Internal Load Balancer (ILB) or not. More information on the ASE network architecture here.

Static Outbound IP Address
  • Multi-tenant: No. The outbound IP address of an App Service is not static; it can be any address within a certain range, which is not static either.
  • ASE: Yes. ASEs provide a static public outbound IP address. More information here.

Connecting to Resources On-Premises
  • Multi-tenant: Yes. Azure App Service VNET integration provides the capability to access resources on-premises via a VPN over the public Internet. Additionally, Azure Hybrid Connections can be used to connect to resources on-premises without requiring major firewall or network configuration changes.
  • ASE: Yes. In addition to VPN over the public Internet and Hybrid Connections support, an ASE provides the ability to connect to resources on-premises via ExpressRoute, which provides faster, more reliable and more secure connectivity without going over the public Internet. Note: ExpressRoute has its own pricing model.

Private access only
  • Multi-tenant: No. App Services are always accessible via the public Internet. One way to restrict access to your App Service is using IP and domain restrictions, but the App Service is still reachable from the Internet.
  • ASE: Yes. An ASE can be deployed with an Internal Load Balancer, which will lock down your App Services to be accessible only from within your VNET, or via ExpressRoute or a Site-to-Site VPN.

Control over inbound and outbound traffic
  • Multi-tenant: No.
  • ASE: Yes. An ASE is always deployed on a subnet within a VNET. Inbound and outbound traffic can be controlled using a network security group.

Web Application Firewall
  • Multi-tenant: Yes. Starting from mid-July 2017, Azure Application Gateway with Web Application Firewall supports App Services in a multi-tenant environment. More info on how to configure it here.
  • ASE: Yes. An Azure Application Gateway with Web Application Firewall can be configured to protect App Services on an ASE by preventing SQL injections, session hijacks, cross-site scripting attacks, and other attacks. Note: the Application Gateway with Web Application Firewall has its own pricing model.

SLA
  • Multi-tenant: 99.95%. No SLA is provided for the Free or Shared tiers. App Services from the Basic tier upwards provide an SLA of 99.95%.
  • ASE: 99.95%. App Services deployed on an ASE provide an SLA of 99.95%.

Instance Sizes / Scale-Up
  • Multi-tenant: full range. App Services can be deployed on almost the full range of tiers, from Free to Premium v2.
  • ASE: 3 sizes. Workers on an ASE v2 come in three sizes (Isolated).

Scalability / Scale-Out
  • Multi-tenant: maximum instances of Basic: 3, Standard: 10, Premium: 20.
  • ASE: an ASE v2 supports up to 100 Isolated Worker instances.

Deployment Time
  • Multi-tenant: very fast. The deployment of new App Services on the multi-tenant environment is rather fast, usually less than 2 minutes.
  • ASE: this can vary. The deployment of a new App Service Environment can take between 60 and 90 minutes (tested in the Australian regions). This is important to consider, particularly in cold DR scenarios.

Scaling out Time
  • Multi-tenant: very fast. Scaling out an App Service usually takes less than 2 minutes.
  • ASE: this can vary. Scaling out in an App Service Environment can take between 30 and 40 minutes (tested in the Australian regions). This is something to consider when configuring auto-scaling.

Reasons to migrate your ASE v1 to an ASE v2

If you already have an App Service Environment v1, there are many reasons to migrate to the second generation, including:

  • More horsepower: with the ASE v2, you get Dv2-based machines with faster cores, SSD storage, and twice the memory per core compared to the ASE v1. You are practically getting double the performance per core.
  • No stand-by workers for fault-tolerance: to provide fault-tolerance, the ASE v1 requires you to have one stand-by worker for every 20 active workers in each worker pool, and you have to pay for those stand-by workers. The ASE v2 has abstracted that away, and you no longer pay for them.
  • Streamlined scaling: if you want to implement auto-scaling on an ASE v1, you have to manage scaling not only at the App Service Plan level, but at the Worker Pool level as well. For that, you have to use a complex inflation-rate formula, which requires some instances to be waiting and ready for whenever an auto-scale condition kicks in, with its own cost implications. The ASE v2 allows you to auto-scale your App Service Plan the same way you do with your multi-tenant App Services, without the complexity of managing worker pools and without paying for waiting instances.
  • Cost saving: because you are getting upgraded performance, you should be able to host the same workloads using half the cores. Additionally, you don’t need to pay for fault-tolerance or auto-scaling stand-by workers.
  • Better experience and abstraction: deployment and scaling of the ASE v2 is much simpler and friendlier than it was with the first generation.

Wrapping Up

So, coming back to the original question: when should you use an App Service Environment? When is it required, and when does it make sense, to pay the premium price of the App Service Environment?

  • When we need to restrict the App Services to be accessible only from within the VNET, or via ExpressRoute or a Site-to-Site VPN, OR
  • When we need to control inbound and outbound traffic to and from our App Services, OR
  • When we need a connection between the App Services and resources on-premises via a secure and reliable channel (ExpressRoute) without going over the public Internet, OR
  • When we require much more processing power, i.e. scaling out to more than 20 instances, OR
  • When a static outbound IP address for the App Service is required.

What else would you consider when deciding whether to use an App Service Environment for your workload or not? Feel free to post your comments or feedback below!

Happy clouding!

Follow me on @pacodelacruz

Cross-posted on Deloitte Platform Engineering Blog