Azure Durable Functions vs Logic Apps: How to choose?


Introduction

Azure currently has two serverless compute offerings: Azure Logic Apps and Azure Functions. Until recently, one could argue that Azure Functions were code triggered by events while Logic Apps were event-triggered workflows. However, that changed with the release of Azure Durable Functions, which recently reached General Availability. Durable Functions is an extension of Azure Functions that allows you to build stateful, serverless, code-based workflows. With Azure Logic Apps you can create stateful and serverless workflows through a visual designer.

If you are architecting a solution that requires serverless and stateful workflows on Azure, you might be wondering how to choose between Azure Durable Functions and Logic Apps. This post aims to shed some light on how to select the platform that better suits your needs.

Development

For some people, the development experience might be a key factor when choosing one platform over the other. The development experience of the two platforms is quite different, as described below:

Paradigm
  • Durable Functions: Imperative code.
  • Logic Apps: Declarative code.

Languages
  • Durable Functions: At the time of writing, only C# is officially supported. However, you can make them work with F#, and JavaScript support is currently in preview.
  • Logic Apps: Workflows are implemented using a visual designer on the Azure Portal or in Visual Studio. Behind the visual representation of the workflow, there is the JSON-based Workflow Definition Language.

Offline Development
  • Durable Functions: Can be developed offline with the local runtime and the Storage emulator.
  • Logic Apps: You need to be online with access to Azure to be able to develop your workflows.

Durable Functions allows you to use imperative code you might already be familiar with, but you still need to understand the constraints of this extension. Logic Apps might require you to learn a new development environment, but one that is relatively straightforward and quite handy for scenarios where less coding is preferred.
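To give a feel for the imperative model, below is a minimal sketch of a Durable Functions orchestration that chains three activity calls. It is based on the standard function-chaining sample and assumes the Durable Functions 1.x extension on the Functions runtime, so treat it as illustrative rather than production code:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class HelloSequence
{
    // Orchestrator: plain C# control flow, but it must stay deterministic because
    // the Durable Functions runtime replays it to rebuild the in-memory state.
    [FunctionName("HelloSequence")]
    public static async Task<string[]> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        return new[]
        {
            await context.CallActivityAsync<string>("SayHello", "Tokyo"),
            await context.CallActivityAsync<string>("SayHello", "Seattle"),
            await context.CallActivityAsync<string>("SayHello", "London")
        };
    }

    // Activity: the unit of work being orchestrated.
    [FunctionName("SayHello")]
    public static string SayHello([ActivityTrigger] string city) => $"Hello, {city}!";
}

The equivalent Logic App would instead be defined declaratively in the Workflow Definition Language behind the designer.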

Connectivity

Logic Apps is an integration platform; thus, it offers much richer connectivity than Azure Durable Functions. Some details to consider are described below.

Connectors or Bindings
  • Durable Functions: The list of supported bindings is here. Some of these bindings can trigger a function, while others serve as inputs or outputs. The list of bindings is growing, especially for the Functions runtime version 2. Additionally, as Azure Functions can be triggered by Event Grid events, any Event Grid publisher can potentially become a trigger of Azure Functions (see the sketch below this comparison).
  • Logic Apps: Logic Apps provides more than 200 connectors, and the list just keeps growing. Among these, there are protocol connectors, Azure Services connectors, Microsoft SaaS connectors, and third-party SaaS connectors. Some of these connectors can trigger Logic App workflows, while others support getting and pushing data as part of the workflow.

Custom Connectors
  • Durable Functions: You can create custom input and output bindings for Azure Functions.
  • Logic Apps: Logic Apps allows you to build custom connectors.

Hybrid Connectivity
  • Durable Functions: Azure Functions hosted on an App Service Plan (not the consumption plan) support Hybrid Connections. Hybrid Connections provide a TCP tunnel to access on-premises systems and services securely. Additionally, Azure Functions deployed on an App Service Plan can be integrated with a VNET, or deployed on a dedicated App Service Environment, to access resources and services on-premises.
  • Logic Apps: Logic Apps offers the On-Premises Data Gateway, which, through an agent installed on-premises, allows you to connect to a list of supported protocols and applications. It’s worth mentioning that the Product Team is currently working on Isolated Logic Apps, which will be deployable on your own VNET and thus will have access to on-premises resources, unlocking many scenarios.
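As a small illustration of the Event Grid point above, here is a sketch of an Azure Function triggered by Event Grid events. It assumes the Microsoft.Azure.WebJobs.Extensions.EventGrid package, and the function name is just an example:

using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class OnEventGridEvent
{
    // Any Event Grid publisher (Blob Storage, custom topics, etc.) can route events
    // to a function like this one through an Event Grid subscription.
    [FunctionName("OnEventGridEvent")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        log.LogInformation("Received {eventType} event for {subject}",
            eventGridEvent.EventType, eventGridEvent.Subject);
    }
}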

 

Workflow

Both workflow engines are quite different. Even though the underlying implementation is abstracted for us, it’s important to know how they work internally when architecting enterprise-grade solutions. How both engines work and how some workflow patterns are supported is described below.

Trigger
  • Durable Functions: A workflow instance can be instantiated by any Azure Function using the DurableOrchestrationClient.
  • Logic Apps: Can be initiated by the many different triggers offered by the connectors.

Actions being orchestrated
  • Durable Functions: Can orchestrate Activity Functions (with the ActivityTrigger attribute). Those Activity Functions can, in turn, call other services using any of the supported bindings. Additionally, orchestrations can call sub-orchestrations. At the time of writing, an orchestration function can only call activity functions defined in the same Function App, which could hinder reusability of services.
  • Logic Apps: Many different workflow actions can be orchestrated. Logic Apps workflows can call actions of the more than 200 connectors, workflow steps, other Azure Functions, other Logic Apps, etc.

Flow Control
  • Durable Functions: The workflow’s flow is controlled using standard code constructs, e.g. conditions, switch statements, loops, try-catch blocks, etc.
  • Logic Apps: You can control the flow with conditional statements, switch statements, loops, and scopes, and control the action chaining with the runAfter property.

Chaining Pattern
  • Durable Functions: Functions can be executed in a sequence, and the outputs of one can be inputs of subsequent ones.
  • Logic Apps: Actions can easily be chained in a workflow. Additionally, the runAfter property allows executing actions based on the status of a previous action or scope.

Fan-Out / Fan-In Pattern
  • Durable Functions: Functions can be executed in parallel, and the workflow can continue when all or any of the branches finish.
  • Logic Apps: You can fan out and fan in actions in a workflow by implementing parallel branches, or ForEach loops running in parallel.

Async HTTP APIs and Get Status Pattern
  • Durable Functions: Client applications or services can invoke a Durable Functions orchestration via HTTP APIs asynchronously and later get the orchestration status to learn when the operation completes (see the C# sketch below this comparison). Additionally, you can set a custom status value that can be queried by external clients.
  • Logic Apps: Client applications or services can call the Logic Apps Management API to get the instance run status. However, either the client has to have access to this API, or you would need to implement a wrapper for it. A custom status value is not currently supported out-of-the-box; if required, you would need to persist it in a separate store and expose it with a custom API.

Approval Workflow (Human Interaction) Pattern
  • Durable Functions: The Human Interaction (Approval Workflow) pattern can be implemented as described here.
  • Logic Apps: Approval workflows can be implemented with the out-of-the-box connectors, or custom ones as described here.

Correlation Pattern
  • Durable Functions: The Correlation pattern can be implemented not only when there is human interaction, but for broader scenarios, in the same way as described above.
  • Logic Apps: The Correlation pattern can easily be implemented using the webhook action or with Service Bus sessions.

Programmatic instance management
  • Durable Functions: Client applications or services can monitor and terminate instances of Durable Functions orchestrations via the API.
  • Logic Apps: Client applications or services can call the Logic Apps Management API to monitor and terminate instances of Logic App workflows. However, either the client has to have access to this API, or you would need to implement a wrapper.

Shared State across instances
  • Durable Functions: Durable Functions supports “eternal orchestrations”, a way to implement flexible loops with state shared across iterations without storing the complete iteration run history. However, this implementation has some important limitations, and the product team suggests using it only for monitoring scenarios that require flexible recurrence and lifetime management, and where the loss of messages is acceptable.
  • Logic Apps: Logic Apps does not support eternal orchestrations. However, different strategies can be used to implement endless loops with state shared across instances, e.g. making use of a trigger state, or storing the state in an external store to pass it from one instance to the next in a singleton workflow.

Concurrency Control
  • Durable Functions: Concurrency throttling is supported.
  • Logic Apps: Concurrency control can be configured at the workflow level or the loop level.

Lifespan
  • Durable Functions: One instance can run without defined time limits.
  • Logic Apps: One instance of a Logic App can run for up to 90 days.

Error Handling
  • Durable Functions: Implemented with the constructs of the language used in the orchestration.
  • Logic Apps: Retry policies and catch strategies can be implemented.

Orchestration Engine
  • Durable Functions: Orchestration functions and activity functions may be running on different VMs. However, Durable Functions ensures reliable execution of orchestrations. To support this, check-pointing is implemented at each await statement. Additionally, the orchestration replays every time it resumes from an await call, until it reaches the last check-pointed activity, to rebuild the in-memory state of the instance. For high-throughput scenarios, you can enable extended sessions.
  • Logic Apps: In Logic Apps, the runtime engine breaks down the different tasks based on the workflow definition. These tasks are distributed among different workers. The engine makes sure that each task is executed at least once, and that tasks are not executed until their dependencies have finished with the expected status.

Some additional constraints and considerations
  • Durable Functions: The orchestration function has to be implemented with some constraints in mind: the code must be deterministic and non-blocking, async calls can only be made through the DurableOrchestrationContext, and infinite loops must be avoided.
  • Logic Apps: To control the workflow execution flow, we sometimes need advanced constructs and operations that can be complex to implement in Logic Apps. The Workflow Definition Language offers some functions that we can leverage, but sometimes we need to use Azure Functions to perform advanced operations required as part of the workflow. Additionally, you need to consider some limits of Logic Apps.
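To make the Async HTTP API row above more concrete, here is a minimal sketch of an HTTP-triggered starter function that returns the standard status-query response (Durable Functions 1.x API; the orchestration name "MyOrchestration" is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HttpStartOrchestration
{
    [FunctionName("HttpStartOrchestration")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient client,
        ILogger log)
    {
        // Start a new orchestration instance with the request body as its input.
        string input = await req.Content.ReadAsStringAsync();
        string instanceId = await client.StartNewAsync("MyOrchestration", input);
        log.LogInformation("Started orchestration with ID = {instanceId}.", instanceId);

        // Returns 202 Accepted plus status-query, send-event and terminate URLs,
        // so external clients can poll for completion (Async HTTP API pattern).
        return client.CreateCheckStatusResponse(req, instanceId);
    }
}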

Deployment

The deployment of these two platforms also has its differences, as detailed below. 

CI/CD
  • Durable Functions: Builds and deployments can be automated using VSTS build and release pipelines. Additionally, other build and release management tools can be used.
  • Logic Apps: Logic Apps are deployed using ARM Templates, as described here.

Versioning
  • Durable Functions: A versioning strategy is very important in Durable Functions. If you introduce breaking changes in a new version of your workflow, in-flight instances will break and fail. You can find more information and mitigation strategies here.
  • Logic Apps: Logic Apps keep a version history of all workflows saved or deployed. Running instances will continue running based on the version that was active when they started.

Runtime
  • Durable Functions: Azure Functions can not only run on Azure, but can also be deployed on-premises, on Azure Stack, and on containers.
  • Logic Apps: Logic Apps can only run on Azure.

 

Management and Monitoring

How you manage and monitor your solutions on each platform is also quite different. Some of the features are described below.

Tracing and Logging
  • Durable Functions: The orchestration activity is tracked by default in Application Insights. Furthermore, you can implement logging to App Insights.
  • Logic Apps: The run history and trigger history are logged by default. Additionally, you can enable diagnostic logging to send additional details to Log Analytics. You can also make use of trackedProperties to enrich your logging.

Monitoring
  • Durable Functions: To monitor workflow instances, you need to use the Application Insights Query Language to build your custom queries and dashboards.
  • Logic Apps: The Logic Apps blade and the Log Analytics workspace solution for Logic Apps provide very rich and friendly visual tools for monitoring. Furthermore, you can build your own monitoring dashboards and queries.

Resubmitting
  • Durable Functions: There is no out-of-the-box functionality to resubmit failed messages.
  • Logic Apps: Failed instances can easily be resubmitted from the Logic Apps blades or the Log Analytics workspace.

Pricing

Another important consideration when choosing the right platform is pricing. Even though both platforms offer a serverless model where you only pay for what you use, there are some differences to consider, as described below.

Serverless
  • Durable Functions: In the consumption plan, you pay per second of resource consumption and per execution. More details are described here.
  • Logic Apps: For workflows, you pay per action and trigger execution (skipped, failed, or succeeded). There is also a marginal cost for storage. In case you need B2B integration, XML Schemas and Maps, or Liquid Templates, you would need to pay for an Integration Account. More details here.

Instance Based
  • Durable Functions: Durable Functions can also be deployed on App Service Plans or App Service Environments, where you pay per instance.
  • Logic Apps: At the moment there is no option to run Logic Apps on your own dedicated instances. However, this will change in the future.

Wrapping-Up

This post contrasts in detail the capabilities and features of both serverless workflow platforms available on Azure. Which platform is better suited really depends on your functional and non-functional requirements, and also on your preferences. As a wrap-up, we could say that:

Logic Apps are better suited when

  • Building integration solutions and leveraging the very extensive list of connectors would reduce the time-to-market and ease connectivity,
  • Visual tools to manage and troubleshoot workflows are required,
  • It’s ok to run only on Azure, and
  • A visual designer and less coding are preferred.

And Durable Functions are a better fit if

  • The list of available bindings is sufficient to meet the requirements,
  • The logging and troubleshooting capabilities are sufficient, and you can build your custom monitoring tools,
  • You require them to run not only on Azure, but on Azure Stack or Containers, and
  • You prefer to have all the power and flexibility of a robust programming language.

It’s also worth mentioning that in most cases the operational costs of Logic Apps tend to be higher than those of Durable Functions, although that depends on each particular case. And for enterprise-grade solutions, you should not choose a platform based on price alone; you have to consider all the requirements and the value provided by the platform.

Having said all this, you can always mix and match Logic Apps and Azure Functions in the same solution so you can get the best of both worlds. Hopefully this post has given you enough information to better choose the platform for your next cloud solution.

Happy clouding!

Cross-posted on Mexia’s Blog. Follow me on @pacodelacruz.


Azure Durable Functions Pattern: Approval Workflow with Slack


Introduction

Recently, I published a post about implementing an Approval Workflow on Azure Durable Functions with SendGrid. In essence, this post is not very different from that one. However, I wanted to demonstrate the same pattern on Azure Durable Functions, but now using Slack as the means of approval. My aim is to show how easy it is to implement this pattern by calling a RESTful API instead of using an Azure Functions binding. What you see here could easily be implemented with your own custom APIs as well :).

Scenario

In my previous post, I showed how Furry Models Australia streamlined the approval process for aspiring cats to join the exclusive model agency by implementing a serverless solution on Azure Durable Functions and SendGrid. Now, after a great success, they’ve launched a new campaign targeting rabbits. However, this campaign needs some customisation. The (rabbit) managers of this campaign have started to collaborate internally with Slack instead of email. Their aim is to significantly improve their current approval process, based on phone and pigeon post, with an automated serverless workflow that leverages Slack as their internal messaging platform.


Pre-requisites

To build this solution, we need:

  • Slack
    • Workspace: In case you don’t have one, you would need to create a workspace on Slack, and you will need permissions to manage apps in the workspace.
    • Channel: On that workspace, you need to create a channel where all approval requests will be sent to.
    • App: Once you have admin privileges on your Slack workspace, you should create a Slack App.
    • Incoming Webhook: On your Slack app, you need to activate incoming webhooks and then add a new webhook. The incoming webhook will post messages to the channel you have just created. For that, you must authorise the app to post messages to the channel. Once you have authorised it, you should be able to get the Webhook URL. You will need this URL to configure your Durable Function to post an approval request message every time an application is received.
    • Message Template: To be able to send interactive button messages to Slack we need to have the appropriate message template.
    • Interactive Components: The webhook configured above enables you to post messages to Slack. Now you need a way to get the response from Slack, for this you can use interactive message buttons. To configure the interactive message button, you must provide a request URL. This request URL will be the URL of the HttpTrigger Azure function that will handle the approval selection.
  • Azure Storage Account: The solution requires a Storage Account with three blob containers: requests, approved, and rejected. The requests container should have a public access level so blobs can be viewed without a SAS token. For your own solution, you could make this more secure.

Solution Overview

The figure below shows an overview of the solution we will build based on Durable Functions. As you can see, the workflow is very similar to the one implemented previously. Pictures of the aspiring rabbits are to be dropped in an Azure Storage blob container called requests. At the end of the approval workflow, pictures should be moved to the approved or rejected blob containers accordingly.

[Figure: Solution overview]

The steps of the process are described as follows:

  1. The process is triggered by an Azure Function with the BlobTrigger input binding monitoring the requests blob container. This function also uses the DurableOrchestrationClient to instantiate a Durable Functions orchestration.
  2. The DurableOrchestrationClient starts the orchestration.
  3. Then, the Durable Function orchestration calls another function with the ActivityTrigger input binding, which is in charge of sending the approval request to Slack as a Slack interactive message.
  4. The interactive message is posted on Slack. This interactive message includes a callbackId field in which we send the orchestration instance id.
  5. Then, in the orchestration, a timer is created so that the approval workflow does not run forever, and in case no approval is received before a timeout, the request is rejected.
  6. The (rabbit) user receives the interactive message on Slack and decides whether the aspiring rabbit deserves to join Furry Models by clicking either the Approve or Reject button. The Slack interactive message button sends the response to the URL configured on the Interactive Component of the Slack App (this is the URL of the HttpTrigger function which handles the Slack approval response). The response contains the callbackId field, which allows the correlation in the next step.
  7. The HttpTrigger function receives the response which contains the selection and the callbackId. This function gets the orchestration instance id from the callbackId and checks the status of that instance; if it’s not running, it returns an error message to the user. If it’s running, it raises an event to the corresponding orchestration instance.
  8. The corresponding orchestration instance receives the external event.
  9. The workflow continues when the external event is received or when the timer finishes; whatever happens first. If the timer finishes before a selection is received, the application is automatically rejected.
  10. The orchestration calls another ActivityTrigger function to move the blob to the corresponding container (approved or rejected).
  11. The orchestration finishes.

A sample of the Slack interactive message is shown below.

[Image: Sample Slack interactive message]

Then, when the user clicks on any of the buttons, Slack calls the HttpTrigger function described in step 7 above. Depending on the selection and the status of the orchestration, the user receives the corresponding response:

[Image: Sample responses]

The Solution

The implemented solution code can be found in this GitHub repo. I’ve used the Azure Functions Runtime v2. I will highlight some relevant bits of the code below, and I hope that the code is self-explanatory 😉:

TriggerApprovalByBlob.cs

This BlobTrigger function is triggered when a blob is created in the requests blob container and starts the Durable Functions orchestration (Step 1 above).
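The original gist isn’t embedded here, but a minimal sketch of this starter function could look like the following (Durable Functions 1.x on the Functions v2 runtime; treat it as illustrative rather than the exact code in the repo):

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TriggerApprovalByBlob
{
    [FunctionName("TriggerApprovalByBlob")]
    public static async Task Run(
        [BlobTrigger("requests/{name}")] Stream requestBlob,   // fires when a picture lands in 'requests'
        string name,
        [OrchestrationClient] DurableOrchestrationClient client,
        ILogger log)
    {
        // Start the approval orchestration and hand it the blob name.
        string instanceId = await client.StartNewAsync("OrchestrateRequestApproval", name);
        log.LogInformation("Started approval orchestration {instanceId} for blob '{name}'.", instanceId, name);
    }
}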

OrchestrateRequestApproval.cs

This is the Durable Functions orchestration which handles the workflow and is started in step 2 above.
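Again as an illustrative sketch (the 24-hour timeout, event name and input shape are assumptions, not necessarily what the repo uses), the orchestration combines a durable timer with an external event and takes whichever completes first:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class OrchestrateRequestApproval
{
    [FunctionName("OrchestrateRequestApproval")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        string blobName = context.GetInput<string>();

        // Step 3: ask for approval via Slack; the instance id becomes the Slack callback_id.
        await context.CallActivityAsync("SendApprovalRequestViaSlack",
            new { blobName, instanceId = context.InstanceId });

        string outcome = "Rejected";
        using (var timeoutCts = new CancellationTokenSource())
        {
            // Step 5: durable timer so the workflow doesn't wait forever (24 h is an assumed limit).
            DateTime expiration = context.CurrentUtcDateTime.AddHours(24);
            Task timeoutTask = context.CreateTimer(expiration, timeoutCts.Token);

            // Step 8: wait for the external event raised by the HttpTrigger function.
            Task<string> approvalTask = context.WaitForExternalEvent<string>("ApprovalEvent");

            // Step 9: whichever finishes first wins.
            Task winner = await Task.WhenAny(approvalTask, timeoutTask);
            if (winner == approvalTask)
            {
                outcome = approvalTask.Result;   // "Approved" or "Rejected"
                timeoutCts.Cancel();             // tidy up the pending timer
            }
        }

        // Step 10: move the picture to the 'approved' or 'rejected' container.
        await context.CallActivityAsync("MoveBlob", new { blobName, outcome });
        return outcome;
    }
}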

SendApprovalRequestViaSlack.cs

ActivityTrigger function which sends the approval request via Slack as an Interactive Message (Step 3 above).
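A hedged sketch of this activity, posting a legacy-style interactive message to the incoming webhook (the SlackWebhookUrl app setting and the message wording are assumptions):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static class SendApprovalRequestViaSlack
{
    private static readonly HttpClient HttpClient = new HttpClient();

    [FunctionName("SendApprovalRequestViaSlack")]
    public static async Task Run([ActivityTrigger] JObject request)
    {
        string blobName = (string)request["blobName"];
        string instanceId = (string)request["instanceId"];

        // Interactive message with Approve/Reject buttons; callback_id carries the
        // orchestration instance id so the response can be correlated (step 4).
        var message = new
        {
            text = $"A new aspiring rabbit has applied: {blobName}",
            attachments = new object[]
            {
                new
                {
                    text = "Should it join Furry Models?",
                    callback_id = instanceId,
                    actions = new object[]
                    {
                        new { name = "approval", text = "Approve", type = "button", value = "Approved" },
                        new { name = "approval", text = "Reject",  type = "button", value = "Rejected" }
                    }
                }
            }
        };

        // 'SlackWebhookUrl' is an assumed app setting holding the incoming webhook URL.
        string webhookUrl = Environment.GetEnvironmentVariable("SlackWebhookUrl");
        var content = new StringContent(JsonConvert.SerializeObject(message), Encoding.UTF8, "application/json");
        (await HttpClient.PostAsync(webhookUrl, content)).EnsureSuccessStatusCode();
    }
}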

ProcessSlackApprovals.cs

HttpTrigger function that handles the response of the interactive messages from Slack (Step 7 above).
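A sketch of this handler is shown below. It assumes Slack’s interactive-message callback, which posts a form field named payload containing JSON, and the event name must match the one the orchestrator waits for:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Newtonsoft.Json.Linq;

public static class ProcessSlackApprovals
{
    [FunctionName("ProcessSlackApprovals")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [OrchestrationClient] DurableOrchestrationClient client)
    {
        // Slack sends interactive message responses as a form field named 'payload' with a JSON body.
        var form = await req.ReadFormAsync();
        JObject payload = JObject.Parse(form["payload"]);

        string instanceId = (string)payload["callback_id"];         // set when the message was sent (step 4)
        string selection  = (string)payload["actions"][0]["value"]; // "Approved" or "Rejected"

        // Only running instances can receive the event.
        var status = await client.GetStatusAsync(instanceId);
        if (status == null || status.RuntimeStatus != OrchestrationRuntimeStatus.Running)
        {
            return new OkObjectResult("Sorry, this application has already been processed or has expired.");
        }

        // Step 7: raise the external event so the waiting orchestration instance can continue.
        await client.RaiseEventAsync(instanceId, "ApprovalEvent", selection);
        return new OkObjectResult($"Thanks! The application has been {selection.ToLower()}.");
    }
}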

MoveBlob.cs

ActivityTrigger function that moves the blob to the corresponding container (Step 10 above).
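And a sketch of the blob-moving activity (assuming the WindowsAzure.Storage SDK and an app setting named StorageConnectionString; blob storage has no atomic move, so this copies and then deletes):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Newtonsoft.Json.Linq;

public static class MoveBlob
{
    [FunctionName("MoveBlob")]
    public static async Task Run([ActivityTrigger] JObject input)
    {
        string blobName = (string)input["blobName"];
        string outcome  = (string)input["outcome"];   // "Approved" or "Rejected"

        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));
        var blobClient = account.CreateCloudBlobClient();

        var sourceBlob = blobClient.GetContainerReference("requests").GetBlockBlobReference(blobName);
        var targetContainer = blobClient.GetContainerReference(outcome == "Approved" ? "approved" : "rejected");
        var targetBlob = targetContainer.GetBlockBlobReference(blobName);

        // Start the copy; in a real solution you would poll the copy status before deleting.
        await targetBlob.StartCopyAsync(sourceBlob);
        await sourceBlob.DeleteAsync();
    }
}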

local.settings.json

These are the settings which configure the behaviour of the solution, including the storage account connection strings, the Slack incoming webhook URL, templates for the interactive message, among others.

You would need to implement these as app settings when deploying to Azure.

Wrapping up

In this post, I’ve shown how to implement an Approval Workflow (Human Interaction pattern) on Azure Durable Functions with Slack. On the way, we’ve also seen how to create Slack Apps with interactive messages. What you read here can easily be implemented using your own custom APIs. What we’ve covered should allow you to build serverless approval workflows on Azure with different means of approval. I hope you’ve found the posts of this series useful.

Happy clouding!

Cross-posted on Mexia’s Blog. Follow me on @pacodelacruz.

Azure Durable Functions Pattern: Approval Workflow with SendGrid

Introduction

Durable Functions is a new (in preview at the time of writing) and very interesting extension of Azure Functions that allows you to build stateful and serverless code-based workflows. The Durable Functions extension abstracts all the state management, queueing, and checkpoint implementation commonly required for an orchestration engine. Thus, you just need to focus on your business logic without worrying much on the underlying complexities. Thanks to this extension, now you can:

  1. Implement long-running serverless code-based services beyond the current Azure Function limitation of 10 minutes (as long as you can break down your process into small nano-services which can be orchestrated);
  2. Chain Azure functions, i.e., call one function after the other and pass the output of the first one as an input to the next one (Function chaining pattern);
  3. Execute several functions asynchronously and then continue the workflow when any or all of the asynchronous tasks are completed (Fan-out and Fan-in pattern, as in the sketch after this list);
  4. Get the status of a long-running workflow from external clients (Async HTTP APIs Pattern);
  5. Implement the correlation identifier pattern to enable human interaction processes, such as an approval workflow (Human Interaction Pattern) and;
  6. Implement a flexible recurring process with lifetime management (Monitoring Pattern).
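As a quick illustration of pattern 3, a fan-out/fan-in orchestration in Durable Functions boils down to starting several activity tasks and awaiting them together. The function and activity names here are hypothetical:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class FanOutFanIn
{
    [FunctionName("ProcessBatch")]
    public static async Task<int> Run([OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var items = context.GetInput<string[]>();

        // Fan-out: start one activity per item without awaiting them individually.
        var tasks = new List<Task<int>>();
        foreach (var item in items)
        {
            tasks.Add(context.CallActivityAsync<int>("ProcessItem", item));
        }

        // Fan-in: resume only when all parallel activities have completed.
        int[] results = await Task.WhenAll(tasks);

        return await context.CallActivityAsync<int>("Aggregate", results);
    }
}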

It’s worth noting that Azure Durable Functions is not the only way to implement stateful workflows in a serverless manner on Azure. Azure Logic Apps is another awesome platform, core component of the Microsoft Azure iPaaS, that allows you to build serverless and stateful workflows using a designer. In a previous post, I showed how to implement the approval workflow pattern on Logic Apps via SMS messages leveraging Twilio.

In this post, I will show how to implement the Human Interaction Pattern on Azure Durable Functions with SendGrid. You will see on the way that this implementation requires other Durable Functions patterns, such as, function chaining, fan-out and fan-in, and optionally the Async HTTP API Pattern.

Scenario

To illustrate this pattern on Durable Functions, I will be using a fictitious cat model agency called Furry Models Australia. Furry Models is running a campaign to attract the most glamorous, attractive, and captivating cats in Australia. They will be receiving photos of all aspiring cats and they need a streamlined approval process to accept or reject those applications. Furry Models want to implement this in an agile manner with a short time-to-market and with a very cost-effective solution. They know that serverless is the way to go!


Pre-requisites

To build this solution, we will need:

  • SendGrid account. Given that Azure Functions provides an output binding for SendGrid to send emails, we will be relying on this service. In case you want to implement this solution, you would need a SendGrid account. Once you sign up, you need to get your API Key, which is required for the Azure binding. You can get more information about the SendGrid binding for Azure Functions and how to use it here.
  • An Azure Storage Account: The solution requires a Storage Account with three blob containers: requests, approved, and rejected. The requests container should have a public access level so blobs can be viewed without a SAS token. For your own solution, you might want to make this more secure.

Solution Overview

The picture below shows an overview of the approval workflow solution I’ve built based on Durable Functions.

Pictures of the aspiring cats are to be dropped in an Azure storage blob container called requests. At the end of the approval workflow, pictures should be moved to the approved or rejected blob containers accordingly.

[Figure: Solution overview]

The steps of the process are described as follows:

  1. The process is triggered by an Azure Function with the BlobTrigger input binding monitoring the requests blob container. This function also uses the DurableOrchestrationClient to instantiate a Durable Functions orchestration.
  2. The DurableOrchestrationClient starts a new instance of the orchestration.
  3. Then, the Durable Function orchestration calls another function with the ActivityTrigger input binding, which is in charge of sending the approval request email using the SendGrid output binding.
  4. SendGrid sends the approval request email to the (cat) user.
  5. Then, in the orchestration, a timer is created so that the approval workflow does not run forever; in case no approval is received before the timer finishes, the request is rejected.
  6. The (cat) user receives the email and decides whether the aspiring cat deserves to join Furry Models or not by clicking the Approve or Reject button. Each button has a link to an HttpTrigger Azure Function which expects the selection and the orchestration instanceId as query parameters.
  7. The HttpTrigger function receives the selection and the orchestration instanceId. The function checks the status of the orchestration instance; if it’s not running, it returns an error message to the user. If it’s running, it raises an event to the corresponding orchestration instance.
  8. The corresponding orchestration instance receives the external event.
  9. The workflow continues when the external event is received or when the timer finishes; whatever happens first. If the timer finishes before a selection is received, the application is automatically rejected.
  10. The orchestration calls another ActivityTrigger function to move the blob to the corresponding container (approved or rejected).
  11. The orchestration finishes.

A sample of the email implemented is shown below.

[Image: Sample approval email]

The Solution

The implemented solution code can be found in this GitHub repo. I’ve used the Azure Functions Runtime v2. I will highlight some relevant bits of the code below, and I hope that the code is self-explanatory 😉:

TriggerApprovalByBlob.cs

This BlobTrigger function is triggered when a blob is created in the requests blob container and starts the Durable Functions orchestration (Step 1 above).

OrchestrateRequestApproval.cs

This is the Durable Functions orchestration which handles the workflow and is started in step 2 above.

SendApprovalRequestViaEmail.cs

ActivityTrigger function which sends the approval request via email with the SendGrid output binding (Step 3 above).
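As an illustrative sketch (not the exact repo code), this activity can use the SendGrid output binding; the app settings SendGridApiKey, ApprovalFunctionUrl and ApproverEmail, as well as the email addresses, are assumptions:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json.Linq;
using SendGrid.Helpers.Mail;

public static class SendApprovalRequestViaEmail
{
    [FunctionName("SendApprovalRequestViaEmail")]
    public static async Task Run(
        [ActivityTrigger] JObject request,
        [SendGrid(ApiKey = "SendGridApiKey")] IAsyncCollector<SendGridMessage> messages)
    {
        string blobName   = (string)request["blobName"];
        string instanceId = (string)request["instanceId"];

        // The Approve/Reject links point to the HttpTrigger function (step 6) and carry
        // the orchestration instance id and the selection as query string parameters.
        string baseUrl    = Environment.GetEnvironmentVariable("ApprovalFunctionUrl");
        string approveUrl = $"{baseUrl}?instanceId={instanceId}&selection=Approved";
        string rejectUrl  = $"{baseUrl}?instanceId={instanceId}&selection=Rejected";

        var message = new SendGridMessage();
        message.SetFrom(new EmailAddress("approvals@furrymodels.example"));
        message.AddTo(new EmailAddress(Environment.GetEnvironmentVariable("ApproverEmail")));
        message.SetSubject($"Approval required for aspiring cat '{blobName}'");
        message.AddContent("text/html",
            $"<p>A new application has arrived: {blobName}</p>" +
            $"<p><a href=\"{approveUrl}\">Approve</a> | <a href=\"{rejectUrl}\">Reject</a></p>");

        await messages.AddAsync(message);
    }
}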

ProcessHttpGetApprovals.cs

HttpTrigger function that handles the Http Get request initiated by the user selection (click) on the email (Step 7 above).

MoveBlob.cs

ActivityTrigger function that moves the blob to the corresponding container (Step 10 above).

local.settings.json

These are the settings which configure the behaviour of the solution, including the storage account connection strings, the SendGrid API key, templates for the email, among others. You would need to implement these as app settings when deploying to Azure.

Wrapping up

In this post, I’ve shown how to implement an Approval Workflow (Human Interaction pattern) on Azure Durable Functions with SendGrid. Whether you wanted to learn more about Durable Functions, to implement a serverless approval workflow or you run a cat model agency, I hope you have found it useful 🙂 Please feel free to ask any questions or add your comments below.

Happy clouding!

Cross-posted on Mexia’s Blog. Follow me on @pacodelacruz.

Implementing the Correlation Identifier Pattern on Stateful Logic Apps using the Webhook Action

Introduction

In many business scenarios, there is a need to implement long-running processes which first send a message to a second process and then pause and wait for an asynchronous response before they continue. Because this communication is asynchronous, the challenge is to correlate the response with the original request. The Correlation Identifier enterprise integration pattern targets this scenario.

Azure Logic Apps provides a stateful workflow engine that allows us to implement robust integration workflows quite easily. One of the workflow actions in Logic Apps is the webhook action, which can be used to implement the Correlation Identifier pattern. One typical scenario in which this pattern can be used is when a workflow requires an approval step with a custom API (with a behaviour similar to the Send Approval Email connector).

In this post, I will show how to implement the Correlation Identifier enterprise integration pattern on Logic Apps leveraging the webhook action.

Some background information

The Correlation Identifier Pattern

Adapted from Enterprise Integration Patterns

The Correlation Identifier enterprise integration pattern proposes to add a unique id to the request message on the requestor end and return it as the correlation identifier in the asynchronous response message. This way, when the requestor receives the asynchronous response, it knows which request that response corresponds to. Depending on the functional and non-functional requirements, this pattern can be implemented in a stateless or stateful manner.

Understanding webhooks

A webhook is a service that is triggered on a particular event and results in an HTTP call to a RESTful subscriber. A much more comprehensive definition can be found here. You might be familiar with the configuration of webhooks with static subscribers. In a previous post, I showed how to trigger a Logic App by an SMS message with a Twilio webhook. This webhook sends all events to the same HTTP endpoint, i.e. a static subscriber.

The Correlation Identifier pattern on Logic Apps

If you have used the Send Approval Email Logic App connector, it implements the Correlation Identifier pattern out-of-the-box in a stateful manner. When this connector is used in a Logic App workflow, an email is sent and the workflow instance waits for a response. Once the email recipient clicks on a button in the email, the particular workflow instance receives an asynchronous callback with a payload containing the user selection, and it continues to the next step. This approval email comes in very handy in many cases; however, a custom implementation of this pattern might be required in different business scenarios. The webhook action allows us to have a custom implementation of the Correlation Identifier pattern.

The Logic Apps Webhook Action

To implement the Correlation Identifier pattern, it’s important that you have a basic understanding of the Logic Apps webhook action. Justin wrote some handy notes about it here. The webhook action of Logic Apps works with an instance-based, i.e. dynamic webhook subscription. Once executed, the webhook action generates an instance-based callback URL for the dynamic subscription. This URL is to be used to send a correlated response to trigger the continuation of the corresponding workflow. This applies the Return Address integration pattern.

We can implement the Correlation Identifier pattern by building a Custom API Connector for Logic Apps following the webhook subscribe and unsubscribe pattern of Logic Apps. However, it’s also possible to implement this pattern without writing a Custom API Connector, as I’ll show below.

Scenario

To illustrate the pattern, I’ll be using a fictitious company called FarmToTable. FarmToTable provides delivery of fresh produce by drone. Consumers subscribe to the delivery service by creating a personalised list of produce to be delivered on a weekly basis. FarmToTable needs to implement an SMS confirmation service so that an SMS message is sent to each consumer the day before the scheduled delivery date. After receiving the text message, the customer must confirm within 12 hours whether they want the delivery or not, so that the delivery can be arranged.

The Solution Architecture

As mentioned above, the scenario requires sending an SMS text message and waiting for an SMS response. For sending and receiving the SMS, we will be using Twilio. There are more details on working with Logic Apps and Twilio in one of my previous posts. Twilio provides webhooks that are triggered when SMS messages are received. The Twilio webhooks only allow static subscriptions, i.e. calling one single HTTP endpoint. Nevertheless, the webhook action of Logic Apps requires the webhook subscribe and unsubscribe pattern, which works with an instance-based subscription. Thus, we need to implement a wrapper for the required subscribe/unsubscribe pattern.

The architecture of this pattern is shown in the figure below and explained after it.

Components of the solution:

  1. Long-running stateful workflow. This is the Logic App that controls the main workflow, sends a request, then pauses and waits for an asynchronous response. This is implemented using the webhook action.
  2. Subscribe/Unsubscribe Webhook Wrapper. In our scenario, we are working with a third-party service (Twilio) that only supports webhooks with static subscriptions; thus, we need to create this wrapper. This wrapper is composed of four parts:
  • Subscription store: A database to store the unique message Id and the instance-based callback URL provided by the webhook action. In my implementation, I’m using Azure Cosmos DB for this. Nevertheless, you can use any other suitable alternative. Because the only message id we can send to Twilio and get back is the phone number, I’m using this as my correlation identifier. We can assume that for this scenario the phone number is unique during the day.
  • Subscribe and Start Request Processing API: this is a RESTful API that is in charge of starting the processing of the request and storing the subscription. I’m implementing this API with a Logic App, but you can use an Azure Function, an API App or a Custom Api App connector for Logic App.
  • Unsubscribe and Cancel Request Processing API: this is another RESTful API that is only going to be called if the webhook action on the main workflow times out. This API is in charge of cancelling the processing and deleting the subscription from the store. The unsubscribe step has a similar purpose to the CancellationToken structure used in C# async programming. In our scenario, there is nothing to cancel though. Like the previous API, I’m implementing this with a Logic App, but you can use different technologies.
  • Instance-based webhook: this webhook is to be triggered by the third-party webhook with a static subscription. Once triggered, this Logic App is in charge of getting the instance-based callback URL from the store and invoking it. After making the call back to the main workflow instance, the subscription is to be deleted.

The actual solution

To implement this solution, I’m going to follow the steps described below:

1. Configure my Twilio account to be able to send and receive SMS messages. More details here.

2. Create a Service Bus Namespace and 2 queues. For my scenario, I’m using one inbound queue (ScheduledDeliveriesToConfirm) and one outbound queue (ConfirmedScheduledDeliveries). For your own scenarios, you can use other triggers and outbound protocols.

3. Create a Cosmos Db collection to store the instance-based webhook subscriptions. More details on how to work with Cosmos Db here.

  • Create Cosmos Db account (with the Document DB API).
  • Create database
  • Create collection.

4. Create the “Subscribe and Start Request Processing API”. I’m using a Logic App workflow to implement this API as shown below. I hope the steps with their comments are self-explanatory.

  • The workflow is Http triggered. It expects, as the request body, the scheduled delivery details and the instance-based callback URL of the calling webhook action.
  • The provided Http trigger URL is to be configured later in the webhook action subscribe Uri of the main Logic App.
  • It stores the correlation on Cosmos Db. More information on the Cosmos Db connector here.
  • It starts the request processing by calling the Twilio connector to send the SMS message.

The expected payload for this API is as the one below. This payload is to be sent by the webhook action subscribe call on the main Logic App:

{
    "callbackUrl": "https://prod-00.australiasoutheast.logic.azure.com/workflows/guid/runs/guid/actions/action/run?params",
    "scheduledDelivery": {
        "deliveryId": "2c5c8390-b6c8-4274-b785-33121b01e219",
        "customer": "Paco de la Cruz",
        "customerPreferredName": "Paco",
        "phone": "+61000000000",
        "orderName": "Seasonal leafy greens and fruits",
        "deliveryAddressName": "Home",
        "deliveryDate": "2017-07-20",
        "deliveryTime": "07:30",
        "createdDateTime": "2017-07-19T09:10:03.209"
    }
} 

You can have a look at the code behind here. Please use it just as a reference, as it hasn’t been refactored for deployment.

5. Create the “Unsubscribe and Cancel Request Processing API”. I used another Logic App workflow to implement this API. This API is only going to be called if the webhook action on the main workflow times out. The workflow is shown below.

  • The workflow is Http triggered. It expects the message id as the request body, so that the corresponding subscription can be deleted.
  • The provided Http trigger URL is to be configured later in the webhook action unsubscribe Uri of the main Logic App.
  • It deletes the subscription from Cosmos Db. More information on the Cosmos Db connector here.

The expected payload for this API is quite simple, as the one shown below. This payload is to be sent by the webhook action unsubscribe call on the main Logic App:

{
    "id": "+61000000000"
}

The code behind is published here. Please use it just as a reference, as it hasn’t been refactored to be deployed.

6. Create the Instance-based Webhook. I’m using another Logic App to implement the instance-based webhook as shown below.

  • The workflow is Http triggered. It’s to be triggered by the Twilio webhook.
  • The provided Http trigger URL is to be configured later in the Twilio webhook.
  • It gets the message Id (phone number) from the Twilio message.
  • It then gets the instance-based subscription (callback URL) from Cosmos Db.
  • Then, it posts the received message to the corresponding instance of the main Logic App workflow by using the correlated callback URL.
  • After making the callback, it deletes the subscription from Cosmos Db.

The code behind for this workflow is here. Please use it just as a reference, as it is not ready to be deployed.

7. Configure the Twilio static webhook. Now, we have to configure the Twilio webhook to call the Logic App created above when an SMS message is received. Detailed instructions in my previous post.

8. Create the long-running stateful workflow. Once we have implemented the subscribe/unsubscribe webhook wrapper required for the Logic App webhook action, we can start creating the long-running stateful workflow. This is shown below.

In order to trigger the Unsubscription API, the timeout property of the webhook action must be configured. This can be specified under the settings of the action. The duration is to be configured in the ISO 8601 duration format. If you don’t want to resend the request after the timeout, you should turn off the retry policy.

  • The workflow is triggered by messages on the ScheduledDeliveriesToConfirm Service Bus queue.
  • Then the webhook action:
    • Sends the scheduled delivery message and the corresponding instance-based callback URL to the Subscribe and Start Request Processing Logic App.
    • Waits for the callback from the Instance-based webhook. This receives, as an HTTP POST, the response sent by the customer. If a response is received before the timeout limit, the action succeeds and the workflow continues to the next action.
    • If the webhook action times out, it calls the Unsubscribe and Cancel Request Processing Logic App and sends the message id (phone number); and the action fails so the workflow does not continue. However, if required, you could continue the workflow by configuring the RunAfter property of the subsequent action.
  • If a response is received, the workflow continues assessing the response. If the response is ‘YES’, it sends the original message to the ConfirmedScheduledDeliveries queue.

The code behind of this workflow is available here. Please use it just as a reference, as it hasn’t been refactored for deployment.

Now, we have finished implementing the whole solution! 🙂 You can have a look at all the Logic Apps JSON definitions in this repository.

Conclusion

In this post, I’ve shown how to implement the Correlation Identifier pattern using a stateful Logic App. To illustrate the pattern, I implemented an approval step in a Logic App workflow with a custom API. For this, I used Twilio, a third-party service, that offers a webhook with a static subscription; and created a wrapper to implement the subscribe/unsubscribe pattern, including an instance-based webhook to meet the Logic Apps webhook action requirements.

I hope you find this post useful whenever you have to add a custom approval step or correlate asynchronous messages using Logic Apps, or that I’ve given you an overview of how to enable the correlation of asynchronous messages in your own workflow or integration scenarios.

Feel free to add your comments or questions below, and happy clouding!

Cross-posted on Mexia Blog.
Follow me on @pacodelacruz

Implementing a WCF Client with Certificate-Based Mutual Authentication without using Windows Certificate Store

Originally posted on Kloud’s blog.


Windows Communication Foundation (WCF) provides a relatively simple way to implement Certificate-Based Mutual Authentication on distributed clients and services. Additionally, it supports interoperability as it is based on WS-Security and X.509 certificate standards. This blog post briefly summarises mutual authentication and covers the steps to implement it with an IIS hosted WCF service.

Even though WCF’s out-of-the-box functionality removes much of the complexity of Certificate-Based Mutual Authentication in many scenarios, there are cases in which this is not what we need. For example, by default, WCF relies on the Windows Certificate Store for accessing its own private key and the counterpart’s public key when implementing Certificate-Based Mutual Authentication.

Having said so, there are scenarios in which using the Windows Certificate Store is not an option. It can be a deployment restriction or a platform limitation. For example, what if you want to create an Azure WebJob which calls a SOAP…
