When to Use an Azure App Service Environment v2 (App Service Isolated)

Introduction

The Azure App Service Environment (ASE) is a premium feature offering of Azure App Services that is fully isolated, highly scalable, and runs on a customer’s virtual network. On an ASE you can host Web Apps, API Apps, Mobile Apps and Azure Functions. The first generation of the App Service Environment (ASE v1) was released in late 2015. Two significant updates followed: in July 2016, the option of an Internal Load Balancer was added, and in August of that year, an Application Gateway with a Web Application Firewall could be configured for the ASE. After all the feedback Microsoft had been receiving on this offering, they started working on the second generation of the App Service Environment (ASE v2), and in July 2017 it was released as Generally Available.

In my previous job, I wrote the post “When to Use an App Service Environment“, which referred to the first generation (ASE v1). I’ve decided to write an updated version of that post, mainly because it has attracted more comments and questions than most of my other posts, and because I know the App Service Environment, also called App Service Isolated, will continue to grow in popularity. Even though the ASE v2 has been simplified, I still believe many people will have questions about it, or will want to make sure that they have no other option but to pay for this premium offering to meet certain needs when deploying their solutions on the Azure PaaS.

When you are planning to deploy Azure App Services, you have the option of creating them in the multi-tenant environment or in your own isolated (single-tenant) App Service Environment. If you want to understand in detail what is meant by “multi-tenant environment” for Azure App Services, I recommend reading this article. When that article refers to a “Scale-Unit”, it is talking about this multi-tenant shared infrastructure. You can picture an App Service Environment as having a very similar architecture, but with all the building blocks dedicated to you, including the Front-End, File Servers, API Controllers, Publisher, Data Roles, Database, Web Workers, etc.

In this post, I will try to summarise when an App Service Environment (v2) is required and, in case you already have an App Service Environment v1, why it makes a lot of sense to migrate it to the second generation.

App Service Environment v2 Pricing

Before getting too excited about the functionality and benefits of the ASE v2 or App Service Isolated, it’s important to understand its pricing model.

Even though much of the complexity of the ASE has been abstracted away in the second generation, we still need to be familiar with the architecture of this offering to properly calculate the costs of an App Service Environment v2. To calculate the total cost of your ASE, you need to consider the App Service Environment Base Fee and the cost of the Isolated Workers.

The App Service Environment Base Fee covers the cost of all the infrastructure required to run your single-tenant and isolated Azure App Services, including load balancing, high availability, publishing, continuous delivery, app settings shared across all instances, deployment slots, management APIs, etc. None of your assemblies or code is executed on the instances that are part of this layer. The Isolated Workers, in turn, are the ones executing your Web Apps, API Apps, Mobile Apps or Functions. You decide the size and number of Isolated Workers you want to spin up, and thus the cost of the worker layer. Both layers are charged by the hour. Below are the prices for the Australian regions in Australian dollars.

In Australia, the App Service Environment Base Fee is above $1,700 AUD per month, and an Isolated I1 instance is close to $500 AUD per month. This means that the entry level of an ASE v2 with one Isolated Worker costs around $2,200 AUD per month, or above $26,000 AUD per year, which is very similar to the price of the ASE v1 in this region. This cost can easily escalate by scaling up or scaling out the ASE. It’s worth noting that prices vary from region to region; for instance, according to the Azure pricing calculator at the time of writing, prices for the Australian regions are around 35% higher than those in West US 2. To calculate your own costs, in your region and your currency, check the pricing calculator.

Moreover, the App Service Environment Base Fee is calculated based on the default configuration, which uses I1 instances for the ASE Front Ends and the scaling rule of adding one Front End instance for every 15 worker instances, as described in the Front End scale configuration page shown below. If you keep this configuration, the App Service Environment Base Fee stays the same, regardless of the number and size of workers. However, you can scale up the Front End instances to I2 or I3, or reduce the number of workers per Front End instance. This has an impact on the App Service Environment Base Fee: to calculate the extra cost, you need to add the cost of every additional core on top of the base configuration. Before changing the Front End scaling configuration, bear in mind that the Front End instances act only as a layer-seven load balancer (round robin) and perform SSL termination; all the compute for your App Services is executed on the worker instances.

With this price tag, the value and benefits of the ASE must be clear enough to justify the investment to the business.

The benefits of the Azure App Service Isolated or App Service Environment v2

To understand the benefits and advanced features of an App Service Environment v2, it’s worth comparing what we get with this premium offering against what we get by deploying an Azure App Service in the multi-tenant environment. This comparison is summarised below.

Virtual Network (VNET) Integration

  • Multi-tenant environment: Yes. Azure App Services can be integrated with an Azure Virtual Network.
  • App Service Isolated / ASE v2: Yes. An ASE is always deployed in the customer’s Virtual Network.

Static Inbound IP Address

  • Multi-tenant environment: Yes. By default, Azure App Services are assigned a virtual IP address; however, it is shared with other App Services in that region. You can bind an IP-based SSL certificate to your App Service, which gives you a dedicated public inbound IP address.
  • App Service Isolated / ASE v2: Yes. ASEs provide a static virtual inbound IP address (VIP). This VIP can be public or private, depending on whether the ASE is configured with an Internal Load Balancer (ILB). More information on the ASE network architecture here.

Static Outbound IP Address

  • Multi-tenant environment: No. The outbound IP address of an App Service is not static; it can be any address within a certain range, which is not static either.
  • App Service Isolated / ASE v2: Yes. ASEs provide a static public outbound IP address. More information here.

Connecting to Resources On-Premises

  • Multi-tenant environment: Yes. Azure App Service VNET integration provides the ability to access resources on-premises via a VPN over the public Internet. Additionally, Azure Hybrid Connections can be used to connect to resources on-premises without requiring major firewall or network configuration.
  • App Service Isolated / ASE v2: Yes. In addition to VPN over the public Internet and Hybrid Connections, an ASE can connect to resources on-premises via ExpressRoute, which provides faster, more reliable and more secure connectivity without going over the public Internet. Note: ExpressRoute has its own pricing model.

Private Access Only

  • Multi-tenant environment: No. App Services are always accessible via the public Internet. One way to restrict access to your App Service is using IP and domain restrictions, but the App Service is still reachable from the Internet.
  • App Service Isolated / ASE v2: Yes. An ASE can be deployed with an Internal Load Balancer, which locks down your App Services so that they are accessible only from within your VNET or via ExpressRoute or Site-to-Site VPN.

Control over Inbound and Outbound Traffic

  • Multi-tenant environment: No.
  • App Service Isolated / ASE v2: Yes. An ASE is always deployed on a subnet within a VNET. Inbound and outbound traffic can be controlled using a network security group.

Web Application Firewall

  • Multi-tenant environment: Yes. Starting from mid-July 2017, Azure Application Gateway with Web Application Firewall supports App Services in the multi-tenant environment. More info on how to configure it here.
  • App Service Isolated / ASE v2: Yes. An Azure Application Gateway with Web Application Firewall can be configured to protect App Services on an ASE by preventing SQL injection, session hijacking, cross-site scripting and other attacks. Note: the Application Gateway with Web Application Firewall has its own pricing model.

SLA

  • Multi-tenant environment: 99.95%. No SLA is provided for the Free or Shared tiers; App Services from the Basic tier upwards provide an SLA of 99.95%.
  • App Service Isolated / ASE v2: 99.95%. App Services deployed on an ASE provide an SLA of 99.95%.

Instance Sizes / Scale-Up

  • Multi-tenant environment: Full range. App Services can be deployed on almost the full range of tiers, from Free to Premium v2.
  • App Service Isolated / ASE v2: Three sizes. Workers on an ASE v2 come in three Isolated sizes (I1, I2 and I3).

Scalability / Scale-Out

  • Multi-tenant environment: Maximum instances of 3 on Basic, 10 on Standard and 20 on Premium.
  • App Service Isolated / ASE v2: Up to 100 Isolated Worker instances.

Deployment Time

  • Multi-tenant environment: Very fast. Deploying a new App Service in the multi-tenant environment usually takes less than 2 minutes, although this can vary.
  • App Service Isolated / ASE v2: Slower. Deploying a new App Service Environment can take between 60 and 90 minutes (tested in the Australian regions), although this can vary. This is important to consider, particularly in cold DR scenarios.

Scaling-Out Time

  • Multi-tenant environment: Very fast. Scaling out an App Service usually takes less than 2 minutes, although this can vary.
  • App Service Isolated / ASE v2: Slower. Scaling out in an App Service Environment can take between 30 and 40 minutes (tested in the Australian regions), although this can vary. This is something to consider when configuring auto-scaling.

Reasons to migrate your ASE v1 to an ASE v2

If you already have an App Service Environment v1, there are many reasons to migrate to the second generation, including:

  • More horsepower: With the ASE v2, you get Dv2-based machines, with faster cores, SSD storage, and twice the memory per core compared to the ASE v1. You are practically getting double the performance per core.
  • No stand-by workers for fault-tolerance: To provide fault-tolerance, the ASE v1 requires you to have one stand-by worker for every 20 active workers in each worker pool, and you have to pay for those stand-by workers. The ASE v2 abstracts this away, so you no longer pay for them.
  • Streamlined scaling: If you want to implement auto-scaling on an ASE v1, you have to manage scaling not only at the App Service Plan level, but also at the Worker Pool level. For that, you have to use a complex inflation-rate formula, which requires some instances to be waiting and ready for whenever an auto-scale condition kicks in, with its own cost implications. The ASE v2 allows you to auto-scale your App Service Plan the same way you do with your multi-tenant App Services, without the complexity of managing worker pools and without paying for waiting instances.
  • Cost savings: Because you are getting upgraded performance, you should be able to host the same workloads using half as many cores. Additionally, you don’t need to pay for fault-tolerance or auto-scaling stand-by workers.
  • Better experience and abstraction: Deployment and scaling of the ASE v2 is much simpler and friendlier than it was with the first generation.

Wrapping Up

So coming back to the original question: when should you use an App Service Environment? When is it required, and when does it make sense to pay the premium price of the App Service Environment?

  • When we need to restrict the App Services to be accessible only from within the VNET or via ExpressRoute or Site-to-Site VPN, OR
  • When we need to control inbound and outbound traffic to and from our App Services, OR
  • When we need a connection between the App Services and resources on-premises via a secure and reliable channel (ExpressRoute) without going over the public Internet, OR
  • When we require much more processing power, i.e. scaling out to more than 20 instances, OR
  • When a static outbound IP Address for the App Service is required.

What else would you consider when deciding whether to use an App Service Environment for your workload or not? Feel free to post your comments or feedback below!

Happy clouding!

Cross-posted on Mexia Blog. Follow me on @pacodelacruz.

Implementing the Correlation Identifier Pattern on Stateful Logic Apps using the Webhook Action

Introduction

In many business scenarios, there is a need to implement long-running processes that first send a message to a second process and then pause and wait for an asynchronous response before they continue. Because this communication is asynchronous, the challenge is to correlate the response with the original request. The Correlation Identifier enterprise integration pattern targets this scenario.

Azure Logic Apps provides a stateful workflow engine that allows us to implement robust integration workflows quite easily. One of the workflow actions in Logic Apps is the webhook action, which can be used to implement the Correlation Identifier pattern. One typical scenario for this pattern is when a workflow requires an approval step implemented with a custom API (with behaviour similar to the Send Approval Email connector).

In this post, I will show how to implement the Correlation Identifier enterprise integration pattern on Logic Apps leveraging the webhook action.

Some background information

The Correlation Identifier Pattern

Adapted from Enterprise Integration Patterns

The Correlation Identifier enterprise integration pattern proposes adding a unique id to the request message on the requestor’s end and returning it as the correlation identifier in the asynchronous response message. This way, when the requestor receives the asynchronous response, it knows which request that response corresponds to. Depending on the functional and non-functional requirements, this pattern can be implemented in a stateless or stateful manner.
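As a minimal conceptual illustration (the property names below are hypothetical and not tied to any specific product), the requestor generates a unique id for the request, and the asynchronous response echoes it back so the two messages can be matched:

{
    "request": {
        "messageId": "2c5c8390-b6c8-4274-b785-33121b01e219",
        "body": "Please confirm tomorrow's scheduled delivery."
    },
    "asynchronousResponse": {
        "correlationId": "2c5c8390-b6c8-4274-b785-33121b01e219",
        "body": "YES"
    }
}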

Understanding webhooks

A webhook is a service that is triggered by a particular event and results in an HTTP call to a RESTful subscriber. A much more comprehensive definition can be found here. You might be familiar with the configuration of webhooks with static subscribers. In a previous post, I showed how to trigger a Logic App from an SMS message with a Twilio webhook. That webhook sends all events to the same HTTP endpoint, i.e. a static subscriber.

The Correlation Identifier pattern on Logic Apps

If you have used the Send Approval Email Logic App connector, it implements the Correlation Identifier pattern out-of-the-box in a stateful manner. When this connector is used in a Logic App workflow, an email is sent and the workflow instance waits for a response. Once the email recipient clicks a button in the email, the particular workflow instance receives an asynchronous callback with a payload containing the user’s selection, and it continues to the next step. This approval email comes in very handy in many cases; however, a custom implementation of this pattern might be required in different business scenarios. The webhook action allows us to build a custom implementation of the Correlation Identifier pattern.

The Logic Apps Webhook Action

To implement the Correlation Identifier pattern, it’s important to have a basic understanding of the Logic Apps webhook action. Justin wrote some handy notes about it here. The webhook action of Logic Apps works with an instance-based, i.e. dynamic, webhook subscription. Once executed, the webhook action generates an instance-based callback URL for the dynamic subscription. This URL is used to send a correlated response that triggers the continuation of the corresponding workflow instance. This applies the Return Address integration pattern.
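To make the shape of this action more concrete, below is a minimal sketch of a webhook action in the Logic Apps workflow definition language. The action name and the two URIs are placeholders for your own subscribe and unsubscribe endpoints; the key part is that the subscribe call passes on the instance-based callback URL generated by the listCallbackUrl() function.

"Wait_for_correlated_response": {
    "type": "HttpWebhook",
    "inputs": {
        "subscribe": {
            "method": "POST",
            "uri": "https://<your-subscribe-api>",
            "body": {
                "callbackUrl": "@{listCallbackUrl()}"
            }
        },
        "unsubscribe": {
            "method": "POST",
            "uri": "https://<your-unsubscribe-api>",
            "body": {
                "id": "<your-correlation-id>"
            }
        }
    },
    "runAfter": {}
}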

We could implement the Correlation Identifier pattern by building a Custom API Connector for Logic Apps that follows the Logic Apps webhook subscribe and unsubscribe pattern. However, it’s also possible to implement this pattern without writing a Custom API Connector, as I’ll show below.

Scenario

To illustrate the pattern, I’ll be using a fictitious company called FarmToTable. FarmToTable provides delivery of fresh produce by drone. Consumers subscribe to the delivery service by creating their personalised list of produce to be delivered on a weekly basis. FarmToTable needs to implement an SMS confirmation service so that an SMS message is sent to each consumer the day before the scheduled delivery date. After receiving the text message, the customer must confirm within 12 hours whether they want the delivery or not, so that the delivery can be arranged.

The Solution Architecture

As mentioned above, the scenario requires sending an SMS text message and waiting for an SMS response. For sending and receiving the SMS messages, we will be using Twilio; more details on working with Logic Apps and Twilio are in one of my previous posts. Twilio provides webhooks that are triggered when SMS messages are received. The Twilio webhooks only allow static subscriptions, i.e. calling one single HTTP endpoint. However, the webhook action of Logic Apps requires the subscribe and unsubscribe pattern, which works with instance-based subscriptions. Thus, we need to implement a wrapper for the required subscribe/unsubscribe pattern.

The architecture of this pattern is shown in the figure below and explained afterwards.

Components of the solution:

  1. Long-running stateful workflow. This is the Logic App that controls the main workflow, sends a request, then pauses and waits for an asynchronous response. This is implemented using the webhook action.
  2. Subscribe/Unsubscribe Webhook Wrapper. In our scenario, we are working with a third-party service (Twilio) that only supports webhooks with static subscriptions; thus, we need to create this wrapper. The wrapper is composed of four parts:
  • Subscription store: a database to store the unique message id and the instance-based callback URL provided by the webhook action. In my implementation, I’m using Azure Cosmos DB for this; nevertheless, you can use any other suitable alternative. Because the only message id we can send to Twilio and get back is the phone number, I’m using it as my correlation identifier (a sample subscription document is sketched after this list). We can assume that, for this scenario, the phone number is unique during the day.
  • Subscribe and Start Request Processing API: a RESTful API that is in charge of starting the processing of the request and storing the subscription. I’m implementing this API with a Logic App, but you could use an Azure Function, an API App or a custom API connector for Logic Apps.
  • Unsubscribe and Cancel Request Processing API: another RESTful API that is only called if the webhook action on the main workflow times out. This API is in charge of cancelling the processing and deleting the subscription from the store. The unsubscribe step has a similar purpose to the CancellationToken structure used in C# async programming. In our scenario, there is nothing to cancel, though. Like the previous API, I’m implementing this with a Logic App, but you could use other technologies.
  • Instance-based webhook: this webhook is triggered by the third-party webhook with its static subscription. Once triggered, this Logic App is in charge of getting the instance-based callback URL from the store and invoking it. After making the call back to the main workflow instance, the subscription is deleted.
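As a reference, a subscription document stored in Cosmos DB can be as simple as the illustrative shape below, using the phone number as the document id together with the callback URL provided by the webhook action:

{
    "id": "+61000000000",
    "callbackUrl": "https://prod-00.australiasoutheast.logic.azure.com/workflows/guid/runs/guid/actions/action/run?params",
    "createdDateTime": "2017-07-19T09:10:03.209"
}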

The actual solution

To implement this solution, I’m going to follow the steps described below:

1. Configure my Twilio account to be able to send and receive SMS messages. More details here.

2. Create a Service Bus Namespace and 2 queues. For my scenario, I’m using one inbound queue (ScheduledDeliveriesToConfirm) and one outbound queue (ConfirmedScheduledDeliveries). For your own scenarios, you can use other triggers and outbound protocols.

3. Create a Cosmos DB collection to store the instance-based webhook subscriptions. More details on how to work with Cosmos DB here.

  • Create a Cosmos DB account (with the DocumentDB API).
  • Create a database.
  • Create a collection.

4. Create the “Subscribe and Start Request Processing API”. I’m using a Logic App workflow to implement this API, as shown below. I hope the steps and their comments are self-explanatory.

  • The workflow is HTTP triggered. It expects, as the request body, the scheduled delivery details and the instance-based callback URL of the calling webhook action.
  • The provided HTTP trigger URL is to be configured later as the webhook action’s subscribe URI in the main Logic App.
  • It stores the correlation entry in Cosmos DB. More information on the Cosmos DB connector here.
  • It starts the request processing by calling the Twilio connector to send the SMS message.

The expected payload for this API is like the one below. This payload is sent by the webhook action’s subscribe call in the main Logic App:

{
    "callbackUrl": "https://prod-00.australiasoutheast.logic.azure.com/workflows/guid/runs/guid/actions/action/run?params",
    "scheduledDelivery": {
        "deliveryId": "2c5c8390-b6c8-4274-b785-33121b01e219",
        "customer": "Paco de la Cruz",
        "customerPreferredName": "Paco",
        "phone": "+61000000000",
        "orderName": "Seasonal leafy greens and fruits",
        "deliveryAddressName": "Home",
        "deliveryDate": "2017-07-20",
        "deliveryTime": "07:30",
        "createdDateTime": "2017-07-19T09:10:03.209"
    }
} 

You can have a look at the code behind here. Please use it just as a reference, as it hasn’t been refactored for deployment.

5. Create the “Unsubscribe and Cancel Request Processing API”. I used another Logic App workflow to implement this API. It is only called if the webhook action on the main workflow times out. The workflow is shown below.

  • The workflow is HTTP triggered. It expects the message id as the request body, so that the corresponding subscription can be deleted.
  • The provided HTTP trigger URL is to be configured later as the webhook action’s unsubscribe URI in the main Logic App.
  • It deletes the subscription from Cosmos DB. More information on the Cosmos DB connector here.

The expected payload for this API is quite simple, as shown below. This payload is sent by the webhook action’s unsubscribe call in the main Logic App:

{
    "id": "+61000000000"
}

The code behind is published here. Please use it just as a reference, as it hasn’t been refactored to be deployed.

6. Create the Instance-based Webhook. I’m using another Logic App to implement the instance-based webhook, as shown below.

  • The workflow is HTTP triggered. It is to be triggered by the Twilio webhook.
  • The provided HTTP trigger URL is to be configured later in the Twilio webhook.
  • It gets the message id (phone number) from the Twilio message.
  • It then gets the instance-based subscription (callback URL) from Cosmos DB.
  • Then, it posts the received message to the corresponding instance of the main Logic App workflow using the correlated callback URL (a sketch of this callback action follows the list).
  • After making the callback, it deletes the subscription from Cosmos DB.
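The callback itself can be a plain HTTP action in the workflow definition, along the lines of the sketch below. The action names are hypothetical (“Get_subscription” stands for the Cosmos DB lookup), and the expression reading the SMS text assumes the Twilio webhook posts its form-encoded “Body” field to the HTTP trigger; adjust both to your actual workflow.

"Callback_main_workflow_instance": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "@{body('Get_subscription')?['callbackUrl']}",
        "body": {
            "smsResponse": "@{triggerFormDataValue('Body')}"
        }
    },
    "runAfter": {
        "Get_subscription": [ "Succeeded" ]
    }
}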

The code behind for this workflow is here. Please use it just as a reference, as it is not ready to be deployed.

7. Configure the Twilio static webhook. Now we have to configure the Twilio webhook to call the Logic App created above whenever an SMS message is received. Detailed instructions are in my previous post.

8. Create the long-running stateful workflow. Once we have implemented the subscribe/unsubscribe webhook wrapper required by the Logic Apps webhook action, we can create the long-running stateful workflow. This is shown below.

In order to trigger the unsubscribe API, the timeout property of the webhook action must be configured. This can be specified under the settings of the action, with the duration expressed in the ISO 8601 duration format (e.g. PT12H for 12 hours). If you don’t want to resend the request after a timeout, you should turn off the retry policy. A sketch of the webhook action with this timeout configured follows the list below.

  • The workflow is triggered by messages on the ScheduledDeliveriesToConfirm Service Bus queue.
  • Then the webhook action:
    • Sends the scheduled delivery message and the corresponding instance-based callback URL to the Subscribe and Start Request Processing Logic App.
    • Waits for the callback from the instance-based webhook, which receives the response sent by the customer as an HTTP POST. If a response is received before the timeout limit, the action succeeds and the workflow continues to the next action.
    • If the webhook action times out, it calls the Unsubscribe and Cancel Request Processing Logic App, sending the message id (phone number); the action then fails, so the workflow does not continue. However, if required, you could continue the workflow by configuring the runAfter property of the subsequent action.
  • If a response is received, the workflow continues by assessing the response. If the response is ‘YES’, it sends the original message to the ConfirmedScheduledDeliveries queue.
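Putting it together, the webhook action of the main workflow could look like the sketch below. The URIs are placeholders for the Subscribe and Unsubscribe Logic Apps created earlier, the 12-hour timeout is expressed as PT12H, and the expressions that read the scheduled delivery from the Service Bus message assume the usual base64-encoded ContentData property; your actual definition may differ slightly.

"Send_confirmation_and_wait_for_reply": {
    "type": "HttpWebhook",
    "inputs": {
        "subscribe": {
            "method": "POST",
            "uri": "https://<subscribe-and-start-request-processing-api>",
            "body": {
                "callbackUrl": "@{listCallbackUrl()}",
                "scheduledDelivery": "@json(decodeBase64(triggerBody()?['ContentData']))"
            }
        },
        "unsubscribe": {
            "method": "POST",
            "uri": "https://<unsubscribe-and-cancel-request-processing-api>",
            "body": {
                "id": "@{json(decodeBase64(triggerBody()?['ContentData']))?['phone']}"
            }
        }
    },
    "limit": {
        "timeout": "PT12H"
    },
    "runAfter": {}
}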

The code behind of this workflow is available here. Please use it just as a reference, as it hasn’t been refactored for deployment.

Now, we have finished implementing the whole solution! 🙂 You can have a look at all the Logic Apps JSON definitions in this repository.

Conclusion

In this post, I’ve shown how to implement the Correlation Identifier pattern using a stateful Logic App. To illustrate the pattern, I implemented an approval step in a Logic App workflow with a custom API. For this, I used Twilio, a third-party service that offers a webhook with a static subscription, and created a wrapper that implements the subscribe/unsubscribe pattern, including an instance-based webhook, to meet the requirements of the Logic Apps webhook action.

I hope you find this post useful whenever you have to add a custom approval step or correlate asynchronous messages using Logic Apps, or that it has given you an overview of how to correlate asynchronous messages in your own workflow or integration scenarios.

Feel free to add your comments or questions below, and happy clouding!

Cross-posted on Mexia Blog.
Follow me on @pacodelacruz

Azure Functions or WebJobs? Where to run my background processes on Azure?

Originally posted on Kloud’s blog.



Introduction

Azure WebJobs have been a quite popular way of running background processes on Azure. They have been around since early 2014. When they were released, they were a true PaaS alternative to Cloud Services Worker Roles bringing many benefits like the WebJobs SDK, easy configuration of scalability and availability, a dashboard, and more recently all the advantages of Azure Resource Manager and a very flexible continuous delivery model. My colleague Namit previously compared WebJobs to Worker Roles.

Meanwhile, Azure Functions were announced earlier this year (March 2016). Azure Functions, or “Functions Apps” as they appear on the Azure Portal, are Microsoft’s Function as a Service (FaaS) offering. With them, you can create microservices or small pieces of code which can run synchronously or asynchronously as part of composite and distributed cloud solutions. Even though they are still in the making (at the time of this writing they…


Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Originally posted on Kloud’s blog.


Introduction

Azure Web Apps or App Services are quite flexible regarding deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source controls like VSTS, GitHub, or BitBucket, your on-premises Git, multiple IDEs including Visual Studio, Eclipse and Xcode, and using MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be some scenarios where you just need to update some reference files and don’t need to build or update the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario where we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication. However, we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy…


Monitoring Azure WebJobs Health with Application Insights

Originally posted on Kloud’s blog.


Introduction

Azure WebJobs have been available for quite some time and have become very popular for running background tasks with programs or scripts. WebJobs are deployed as part of Azure App Services (Web Apps), which include their companion site Kudu. Kudu provides a lot of features, including a REST API, which provides operations for source code management (SCM), virtual file system, deployments, accessing logs, and for WebJob management as well. The Kudu WebJobs API provides different operations including listing WebJobs, uploading a WebJob, or triggering it. One of the operations of this API allows you to get the status of a specific WebJob by name.

Another quite popular Azure service is Application Insights. This provides functionality to monitor and diagnose application issues and to analyse usage and performance as well. One of these features is web tests, which provide a way to monitor the availability and health…


When to use an Azure App Service Environment?

Originally posted on Kloud’s blog.


Introduction

An Azure App Service Environment (ASE) is a premium Azure App Service hosting environment which is dedicated, fully isolated, and highly scalable. It clearly brings advanced features for hosting Azure App Services which might be required in different enterprise scenarios. But being a premium service, it comes with a premium price tag. Due to its cost, a proper business case and justification are to be prepared before architecting a solution based on this interesting PaaS offering on Azure.

When planning to deploy Azure App Services, an organisation has the option of creating an Azure App Service Plan and hosting them there. This might be good enough for most cases. However, when higher demands of scalability and security are present, a dedicated and fully isolated App Service Environment might be necessary.

Below, I will summarise the information required to make a decision regarding the need of using an App…
