Azure App Service

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite programming language or framework. Applications run and scale with ease on both Windows and Linux-based environments.

Key features

  • Built-in auto scale support The ability to scale up/down or scale out/in is baked into Azure App Service.
    • Scaling out/in is the ability to increase or decrease the number of machine instances that are running your web app.
    • Scaling up/down is scaling the resources of the underlying machine that is hosting your web app.
  • Container Support With Azure App Service, you can deploy and run containerized web apps on Windows and Linux.
  • Continuous integration/deployment support The Azure portal provides out-of-the-box continuous integration and deployment with Azure DevOps Services, GitHub, Bitbucket, FTP, or a local Git repository on your development machine.
  • Deployment Slots When deploying a web app, you can use a separate deployment slot instead of the default production slot when you’re running in the Standard App Service Plan tier or better.
  • App Service on Linux App Service can also host web apps natively on Linux for supported application stacks: .NET Core, Java (Tomcat, JBoss EAP, or Java SE with an embedded web server), Node.js, Python, and PHP.
    • az webapp list-runtimes --os-type linux
    • App Service on Linux has some limitations:
      • The Shared pricing tier isn’t supported.
      • The portal shows only features that currently work for Linux apps.
      • When deployed to built-in images, your code and content are allocated as a storage volume for web content, backed by Azure Storage.

App Service Plans

In App Service, an app always runs in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).

Each App Service plan defines:

  • Operating System ( Windows, Linux )
  • Region ( West US, East US, etc. )
  • Number of VM instances
  • Size of VM instances ( Small, Medium, Large )
  • Pricing Tier ( Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated, IsolatedV2 )

Pricing Tiers

  • Shared compute: Free and Shared, the two base tiers, run an app on the same Azure VM as other App Service apps, including apps of other customers. These tiers are intended to be used only for development and testing purposes.
  • Dedicated compute: The Basic, Standard, Premium, PremiumV2, and PremiumV3 tiers run apps on dedicated Azure VMs.
  • Isolated: The Isolated and IsolatedV2 tiers run dedicated Azure VMs on dedicated Azure Virtual Networks.

Deploy to App Service

App Service supports both automated and manual deployment.

Automated deployment

Azure App Service supports automated deployment from several source control systems as part of a continuous integration and deployment (CI/CD) pipeline.

  • Azure DevOps Services
  • GitHub
  • Bitbucket

Manual deployment

  • Git
  • CLI: az webapp up
  • Zip Deploy
  • FTP/S

Sidecar containers enable deploying extra services and features without making them tightly coupled to your main application container.

Multitenant App Service networking features

  • Inbound features: App-assigned address, Access restrictions, Service endpoints, Private endpoints
  • Outbound features: Hybrid connections, Gateway-required virtual network integration

Find outbound IPs

az webapp show --resource-group "<group_name>" --name "<app_name>" --query outboundIpAddresses --output tsv
az webapp show --resource-group "<group_name>" --name "<app_name>" --query possibleOutboundIpAddresses --output tsv

Web app settings

In App Service, app settings are variables passed as environment variables to the application code.

Configure application settings

App settings are injected into the app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart.

App settings in App Service override the corresponding settings in Web.config or appsettings.json. App settings are always encrypted when stored (encrypted at rest).

! In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name, like ApplicationInsights:InstrumentationKey, needs to be configured in App Service as ApplicationInsights__InstrumentationKey for the key name (replace any : with __ double underscore).

At runtime, connection strings are available as environment variables, prefixed with:

  • SQLServer: SQLCONNSTR_
  • MySQL: MYSQLCONNSTR_
  • SQLAzure: SQLAZURECONNSTR_
  • Custom: CUSTOMCONNSTR_
  • PostgreSQL: POSTGRESQLCONNSTR_
  • Notification Hub: NOTIFICATIONHUBCONNSTR_
  • Service Bus: SERVICEBUSCONNSTR_
  • Event Hub: EVENTHUBCONNSTR_
  • Document DB: DOCDBCONNSTR_
  • Redis Cache: REDISCACHECONNSTR_

Used like MYSQLCONNSTR_connectionstring1.
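
In a .NET app, such a connection string is read like any other environment variable. A minimal sketch, assuming a connection string named connectionstring1 of type MySQL:

	// App Service exposes the MySQL connection string "connectionstring1"
	// as an environment variable with the MYSQLCONNSTR_ prefix.
	string connectionString =
		Environment.GetEnvironmentVariable("MYSQLCONNSTR_connectionstring1");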

Configure environment variables for custom containers

az webapp config appsettings set --resource-group "<group-name>" --name "<app-name>" --settings key1=value1 key2=value2

In the Configuration > Path mappings section you can configure handler mappings and virtual application directory mappings. Each app has the default root path (/) mapped to D:\home\site\wwwroot. You can edit or add virtual applications and directories.

Diagnostic logging

There are built-in diagnostics to assist with debugging an App Service app.

| Type | Platform | Location | Description |
| --- | --- | --- | --- |
| Application logging | Windows, Linux | App Service file system and/or Azure Storage blobs | Logs messages generated by your application code. |
| Web server logging | Windows | App Service file system or Azure Storage blobs | Raw HTTP request data in the W3C extended log file format. |
| Detailed error messages | Windows | App Service file system | Copies of the .html error pages. |
| Failed request tracing | Windows | App Service file system | Detailed tracing information on failed requests. |
| Deployment logging | Windows, Linux | App Service file system | Deployment logging happens automatically. |

  • ASP.NET applications can use the System.Diagnostics.Trace class to log information to the application diagnostics log.
	System.Diagnostics.Trace.TraceError("some error");
  • Python applications can use the OpenCensus package to send logs to the application diagnostics log.

! Some types of logging buffer writes to the log file, which can result in out-of-order events in the stream.

Stream logs:

	az webapp log tail --name "appname" --resource-group "myResourceGroup"

For logs stored in the App Service file system, the easiest way is to download the ZIP file in the browser at:

  • Linux/container apps: https://<app-name>.scm.azurewebsites.net/api/logs/docker/zip
  • Windows apps: https://<app-name>.scm.azurewebsites.net/api/dump

Add and manage TLS/SSL certificates in Azure App Service

Azure App Service has tools that let you create, upload, or import a private certificate or a public certificate into App Service.

Options to add certificates in App Service

  • Create a free App Service managed certificate
  • Import/Purchase an App Service certificate
  • Import a certificate from Azure Key Vault
  • Upload a private certificate
  • Upload a public certificate

Create a free App Service managed certificate

A private certificate that’s free of charge and easy to use if you just need to secure your custom domain in App Service.

Free certificate limitations

  • Doesn’t support wildcard certificates
  • Doesn’t support use as a client certificate by certificate thumbprint, which is planned for deprecation and removal
  • Doesn’t support private DNS
  • Isn’t exportable
  • Isn’t supported in an App Service Environment
  • Supports only alphanumeric characters, dashes (-), and periods (.)
  • Supports custom domains of up to 64 characters

! Azure fully manages the certificates on your behalf, so avoid taking a hard dependency on or “pinning” the managed certificate or any part of the certificate hierarchy.

Import an App Service Certificate

App Service Certificate combines the simplicity of automated certificate management and the flexibility of renewal options.

If you purchase an App Service Certificate from Azure, Azure manages the following tasks:

  • Takes care of the purchase process from certificate provider
  • Performs domain verification of the certificate
  • Maintains the certificate in Azure Key Vault
  • Manages certificate renewal
  • Synchronizes the certificate automatically with the imported copies in App Service apps

Certificates from Key Vault

If you use Key Vault to manage your certificates, you can import a PKCS12 certificate into App Service from Key Vault if you meet the requirements.

By default, the App Service resource provider doesn’t have access to your key vault. To use a key vault for a certificate deployment, you must authorize read access for the resource provider (App Service) to the key vault. You can grant access with an access policy or with role-based access control (RBAC).

az role assignment create --role "Key Vault Certificate User" --assignee "abfa0a7c-a6b6-4736-8310-5855508787cd" --scope "/subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}"

# Assign by service principal ApplicationId (PowerShell)
New-AzRoleAssignment -RoleDefinitionName "Key Vault Certificate User" -ApplicationId "abfa0a7c-a6b6-4736-8310-5855508787cd" -Scope "/subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name}"

Private certificate requirements

  • Exported as a password-protected PFX file, encrypted using triple DES.
  • Contains a private key at least 2048 bits long
  • Contains all intermediate certificates and the root certificate in the certificate chain.

To secure a custom domain in a TLS Binding, the certificate has other requirements:

  • Contains an Extended Key Usage for server authentication (OID = 1.3.6.1.5.5.7.3.1)
  • Signed by a trusted certificate authority

Autoscaling

Autoscaling is a cloud system or process that adjusts available resources based on the current demand. Autoscaling performs scaling in and out, as opposed to scaling up and down.

Autoscaling enables a system to adjust the resources required to meet the varying demand from users, while controlling the costs associated with these resources.

A scale-out action increases the number of instances, and a scale-in action reduces the instance count.

Scale-out options are:

  • Manual
  • Azure Autoscale. Makes decisions based on rules that you define.
  • Azure App Service automatic scaling. Makes decisions based on the parameters that you select.

! Autoscaling responds to changes in the environment by adding or removing web servers and balancing the load between them. Autoscaling doesn’t have any effect on the CPU power, memory, or storage capacity of the web servers powering the app; it only changes the number of web servers.

! If your web apps perform resource-intensive processing as part of each request, then autoscaling might not be an effective approach.

Autoscaling works by analyzing trends in metric values over time across all instances.

Metrics for autoscale rules

  • CPU Percentage
  • Memory Percentage
  • Disk Queue length
  • Http Queue length
  • Data in
  • Data out

A single autoscale condition can contain several autoscale rules; the autoscale rules in an autoscale condition don’t have to be directly related.

! Not all pricing tiers support autoscaling. The development pricing tiers are either limited to a single instance (the F1 and D1 tiers), or they only provide manual scaling (the B1 tier). If you selected one of these tiers, you must first scale up to the S1 or any of the P level production tiers.

Best practices

If you’re not following good practices when creating autoscale settings, you can create conditions that lead to undesirable results, like flapping, where scale-in and scale-out actions continually go back and forth.

  • Ensure the maximum and minimum values are different and have an adequate margin between them.
  • Choose the appropriate statistic for your diagnostic metric
  • Choose the thresholds carefully for all metric types
  • Always select a safe default instance count
  • Configure autoscale notifications

Deployment Slots

The deployment slot functionality in App Service is a powerful tool that enables you to preview, manage, test and deploy different development environments. Deployment slots are live apps with their own host names.

Deployment slots are available in the Standard, Premium, and Isolated App Service tiers.

Benefits

  • Staging slot for testing
  • Warm-up before slot swapping
  • Swap immediately back to get your “last known good site”

There is no extra charge for using deployment slots.

Each plan tier supports a different number of slots:

  • Flex Consumption: n/a
  • Consumption: 2
  • Premium: 3
  • Dedicated: 1-20
  • Container Apps: not supported

The slot’s URL has the format http://sitename-slotname.azurewebsites.net

Slot swapping

When you swap two slots, App Service completes the following processes:

1- Apply the following settings from the target slot:

  • Slot-specific app settings and connection strings, if applicable
  • Continuous deployment settings, if enabled
  • App Service authentication settings, if enabled

During swap with preview, this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot’s settings.

2- Wait for every instance in the source slot to complete its restart.

3- Trigger local cache initialization.

4- If auto swap is enabled with custom warm-up

  • If applicationInitialization isn’t specified, trigger an HTTP request to the application root of the source slot on each instance
  • An instance is considered warmed up if it returns any HTTP response.

5- Swap the two slots by switching the routing rules for the two slots.

6- Perform the same operation to the target slot.

! Make sure that the production slot is always the target slot. This way, the swap operation doesn’t affect your production app.

| Settings that are swapped | Settings that aren’t swapped |
| --- | --- |
| General settings | Publishing endpoints |
| App settings | Custom domain names |
| Connection strings | Nonpublic certificates and TLS/SSL settings |
| Handler mappings | Scale settings |
| Public certificates | WebJobs schedulers |
| WebJobs content | IP restrictions |
| Hybrid connections | Always On |
| Azure CDN | Diagnostic log settings |
| Service endpoints | CORS |
| Path mappings | Virtual network integration |
|  | Managed identities |
|  | Settings that end with the suffix _EXTENSION_VERSION |

To make settings swappable, add the app setting WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS in every slot of the app and set its value to 0 or false. These settings are either all swappable or not at all. You can’t make just some settings swappable and not the others. Managed identities are never swapped and aren’t affected by this override app setting.

Marking a setting as a deployment slot setting tells App Service that the setting isn’t swappable.

Auto swap

Auto swap streamlines Azure DevOps Services scenarios where you want to deploy your app continuously with zero cold starts and zero downtime for customers of the app.

! Auto swap isn’t currently supported in web apps on Linux and Web App for Containers

Custom warm-up

The applicationInitialization configuration element in web.config lets you specify custom initialization actions.

<system.webServer>
    <applicationInitialization>
        <add initializationPage="/" hostName="[app hostname]" />
        <add initializationPage="/Home/About" hostName="[app hostname]" />
    </applicationInitialization>
</system.webServer>

You can also customize the warm-up with app settings:

  • WEBSITE_SWAP_WARMUP_PING_PATH The path to warm up your site.
  • WEBSITE_SWAP_WARMUP_PING_STATUSES Valid HTTP response codes for the warm-up operation.
  • WEBSITE_WARMUP_PATH A relative path on the site that should be pinged whenever the site restarts.

Route traffic in App Service

You can route a portion of the traffic to another slot by using the Traffic % (0-100) setting. The production slot receives 100% by default.

To route traffic to a specific slot, you can use the x-ms-routing-name cookie, for example x-ms-routing-name=staging. A request routed to the production slot has the cookie x-ms-routing-name=self.

<a href="<webappname>.azurewebsites.net/?x-ms-routing-name=self">Go back to production app</a>

Azure Functions

Azure Functions is a serverless solution that allows you to write less code, maintain less infrastructure, and save on costs. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it.

Azure Functions supports triggers, which are ways to start execution of your code, and bindings, which are ways to simplify coding for input and output data.

Differences between Azure Functions and Logic Apps

Both Functions and Logic Apps are Azure Services that enable serverless workloads. Azure Functions is a serverless compute service, whereas Azure Logic Apps is a serverless workflow integration platform.

| Topic | Azure Functions | Logic Apps |
| --- | --- | --- |
| Development | Code-first (imperative) | Designer-first (declarative) |
| Connectivity | Built-in binding types, write code for custom bindings | Connectors, Enterprise Integration Pack for B2B scenarios, build custom connectors |
| Actions | Each activity is an Azure function | Large collection of ready-made actions |
| Monitoring | Azure Application Insights | Azure portal, Azure Monitor Logs |
| Management | REST API, Visual Studio | Azure portal, REST API, PowerShell, Visual Studio |
| Execution context | Runs in Azure, or locally | Runs in Azure, locally, or on premises |

Differences between Azure Functions and WebJobs

Like Azure Functions, Azure App Service WebJobs with the WebJobs SDK is a code-first integration service that is designed for developers.

Azure Functions is built on the WebJobs SDK.

| Factor | Functions | WebJobs with WebJobs SDK |
| --- | --- | --- |
| Serverless app model with automatic scaling | Yes | No |
| Develop and test in browser | Yes | No |
| Pay-per-use pricing | Yes | No |
| Integration with Logic Apps | Yes | No |
| Trigger events | Timer, Azure Storage queues and blobs, Azure Service Bus queues and topics, Azure Cosmos DB, Azure Event Hubs, HTTP/WebHook, Azure Event Grid | Timer, Azure Storage queues and blobs, Azure Service Bus queues and topics, Azure Cosmos DB, Azure Event Hubs, File system |

Azure Functions hosting options

The hosting option you choose dictates:

  • How your function app is scaled
  • The resources available to each function app instance
  • Support for advanced functionality, such as Virtual Network Connectivity
  • Support for Linux containers

| Hosting option | Service | Availability | Container support |
| --- | --- | --- | --- |
| Consumption plan | Azure Functions | GA | None |
| Flex Consumption plan | Azure Functions | GA | None |
| Premium plan | Azure Functions | GA | Linux |
| Dedicated plan | Azure Functions | GA | Linux |
| Container Apps | Azure Container Apps | GA | Linux |

Overview of plans

Consumption plan

The Consumption plan is the default hosting plan. Pay for compute resources only when your functions are running (pay-as-you-go) with automatic scale. Instances are dynamically added and removed based on the number of incoming events.

Flex Consumption plan

Get high scalability with compute choices, virtual networking, and pay-as-you-go billing. Instances are dynamically added and removed based on the configured per instance concurrency and the number of incoming events.

Premium plan

Automatically scales based on demand using prewarmed workers, which run applications with no delay after being idle; runs on more powerful instances; and connects to virtual networks.

Consider this plan in the following situations:

  • Your function apps run continuously, or nearly continuously.
  • You want more control of your instances and want to deploy multiple function apps on the same plan with event-driven scaling.
  • You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.
  • You need more CPU or memory options than are provided by consumption plans.
  • Your code needs to run longer than the maximum execution time allowed on the Consumption plan.
  • You require virtual network connectivity.
  • You want to provide a custom Linux image in which to run your functions.

Dedicated plan

Run your functions within an App Service plan at regular App Service plan rates.

Consider this plan in the following situations:

  • You must have fully predictable billing, or you need to manually scale instances.
  • You want to run multiple web apps and function apps on the same plan
  • You need access to larger compute size choices.
  • Full compute isolation and secure network access provided by an App Service Environment (ASE).
  • High memory usage and high scale (ASE).

Container Apps

Create and deploy containerized function apps in a fully managed environment hosted by Azure Container Apps.

Consider this plan in the following situations:

  • You want to package custom libraries with your function code to support line-of-business apps.
  • You need to migrate code execution from on-premises or legacy apps to cloud native microservices running in containers.
  • You want to avoid the overhead and complexity of managing Kubernetes clusters and dedicated compute.
  • You need the high-end processing power provided by dedicated CPU compute resources for your functions.

Developing Azure Functions

A function app provides an execution context in Azure in which your function runs. A function app is composed of one or more individual functions that are managed, deployed, and scaled together.

All of the functions in a function app share the same pricing plan, deployment method, and runtime version.

! In Functions 2.x, all functions in a function app must be authored in the same language. In previous versions of the Azure Functions runtime, this wasn’t required.

Local project files

  • host.json metadata file contains configuration options that affect all functions in a function app instance. Configurations in host.json related to bindings are applied equally to each function in the app.

  • local.settings.json file stores app settings, and settings used by local development tools. Because local.settings.json might contain secrets, such as connection strings, you should never store it in a remote repository.

Triggers and bindings

Triggers and bindings let you avoid hardcoding access to other services.

A trigger defines how a function is invoked; a function must have exactly one trigger. Triggers have associated data, which is often provided as the payload of the function.

Binding to a function is a way of declaratively connecting another resource to the function; bindings might be connected as input bindings, output bindings, or both. Bindings are optional, and a function might have one or multiple input and/or output bindings.

Triggers and bindings are defined differently depending on the development language.

| Language | Configuration |
| --- | --- |
| C# class library | decorating methods and parameters with C# attributes |
| Java | decorating methods and parameters with Java annotations |
| JavaScript/PowerShell/Python/TypeScript | updating the function.json schema |

Since .NET class library and Java functions don’t rely on function.json for binding definitions, they can’t be created and edited in the portal. C# portal editing is based on C# script, which uses function.json instead of attributes.

For languages that are dynamically typed, such as JavaScript, use the dataType property in the function.json file. For example, to read the content of an HTTP request in binary format, set dataType to binary:

	{
		"dataType": "binary",
		"type": "httpTrigger",
		"name": "req",
		"direction": "in"
	}

Other options for dataType are stream, binary and string.

Binding direction

All triggers and bindings have a direction property in the function.json file:

  • For triggers, the direction is always in
  • Input and output bindings use in and out
  • Some bindings support a special direction inout

Trigger and binding example function.json

{
	"disabled": false,
	"bindings":	[
		{
			"type": "queueTrigger",
			"direction": "in",
			"name": "myQueueItem",
			"queueName": "myqueue-items",
			"connection": "MyStorageConnectionAppSetting"
		},
		{
			"tableName": "Person",
			"connection": "MyStorageConnectionAppSetting",
			"name": "tableBinding",
			"type": "table",
			"direction": "out"
		}
	]
}

C# function example


using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

// Table entity returned by the function (the target of the output binding)
public class Person
{
	public string PartitionKey { get; set; }
	public string RowKey { get; set; }
	public string Name { get; set; }
	public string MobileNumber { get; set; }
}

public static class QueueTriggerTableOutput
{
	[FunctionName("QueueTriggerTableOutput")]
	[return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]
	public static Person Run(
		[QueueTrigger("myqueue-items", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")]JObject order,
		ILogger log)
	{
		// Map the incoming queue message to a new table entity
		return new Person
		{
			PartitionKey = "Orders",
			RowKey = Guid.NewGuid().ToString(),
			Name = order["Name"].ToString(),
			MobileNumber = order["MobileNumber"].ToString()
		};
	}
}

Connect functions to Azure services

As a security best practice, Azure Functions takes advantage of the application settings functionality. For triggers and bindings that require a connection property, you set the application setting name instead of the actual connection string.

Some connections in Azure Functions are configured to use an identity instead of a secret. In some cases, a connection string might still be required in Functions even though the service to which you’re connecting supports identity-based connections.

! An app running in a Consumption or Elastic Premium plan, uses the WEBSITE_AZUREFILESCONNECTIONSTRING and WEBSITE_CONTENTSHARE settings when connecting to Azure Files on the storage account used by your function app. Azure Files doesn’t support using managed identity when accessing the file share.

When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-assigned identity is used by default, although a user-assigned identity can be specified with the credential and clientID properties. Configuring a user-assigned identity with a resource ID is NOT supported.

Identities must have permissions to perform the intended actions. This is typically done by assigning a role in Azure role-based access control, or by specifying the identity in an access policy, depending on the service to which you’re connecting.

! The target service might expose some permissions that aren’t necessary for all contexts. Where possible, adhere to the principle of least privilege, granting the identity only required privileges.

Azure Blob Storage

Notable classes

BlobClient, BlobClientOptions, BlobContainerClient, BlobServiceClient, BlobUriBuilder

Nuget Packages

dotnet add package Azure.Storage.Blobs
dotnet add package Azure.Storage.Blobs.Specialized
dotnet add package Azure.Storage.Blobs.Models

Azure Blob storage is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data.

Blob storage is designed for:

  • Serving images or documents directly to a browser.
  • Storing files for distributed access.
  • Streaming video and audio.
  • Writing to log files.
  • Storing data for backup and restore, disaster recovery, and archiving.
  • Storing data for analysis by an on-premises or Azure-hosted service.

Users or client applications can access objects in Blob storage via HTTP/HTTPS, from anywhere in the world. Objects in Blob storage are accessible via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library.

Azure Storage offers two performance levels of storage accounts, standard and premium.

  • Standard: This is the standard general-purpose v2 account and is recommended for most scenarios using Azure Storage.
  • Premium: Premium accounts offer higher performance by using solid-state drives. If you create a premium account, you can choose between three account types: block blobs, page blobs, or file shares.

Blob storage offers three types of resources:

  • The storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has an address that includes your unique account name. http://<storageAccountName>.blob.core.windows.net

  • A container in the storage account organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. http://<storageAccountName>.blob.core.windows.net/<myContainerName>. A container name must be a valid DNS name, as it forms part of the unique URI (Uniform Resource Identifier) used to address the container or its blobs.

    • Container names can be between 3 and 63 characters long
    • Container names must start with a letter or number, and can contain only lowercase letters, numbers, and the dash (-) character.
    • Two or more consecutive dash characters aren’t permitted in container names.
  • Azure storage supports three types of blobs : http://<storageAccountName>.blob.core.windows.net/<myContainerName>/<myBlobName>

    • Block blobs store text and binary data. Block blobs are made up of blocks of data that can be managed individually. Block blobs can store up to about 190.7 TiB
    • Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append blobs are ideal for scenarios such as logging data from virtual machines.
    • Page blobs store random access files up to 8 TB in size. Page blobs store virtual hard drive (VHD) files and serve as disks for Azure Virtual machines.

Azure Storage security features

Your data is secured by default.

Azure Storage uses service-side encryption (SSE) to automatically encrypt your data when it’s persisted to the cloud.

Encryption protects your data and helps you meet your organizational security and compliance commitments. Data in Azure Storage is encrypted and decrypted transparently using 256-bit Advanced Encryption Standard (AES) encryption, one of the strongest block ciphers available, and is Federal Information Processing Standards (FIPS) 140-2 compliant.

There is no extra cost for Azure Storage encryption.

Data in a new storage account is encrypted with Microsoft-managed keys by default. If you choose to manage encryption with your own keys, you have two options.

  • You can specify a customer-managed key to use for encrypting and decrypting data in Blob Storage and in Azure Files. Customer-managed keys must be stored in Azure Key Vault or Azure Key Vault Managed Hardware Security Model (HSM)
  • You can specify a customer-provided key on Blob Storage operations. A client can include an encryption key on a read/write request for granular control over how blob data is encrypted and decrypted.

Client-side encryption

The Azure Blob Storage client libraries for .NET, Java, and Python support encrypting data within client applications before uploading to Azure Storage, and decrypting data while downloading to the client. The Queue Storage client libraries for .NET and Python also support client-side encryption.

The Blob Storage and Queue Storage client libraries use AES to encrypt user data. There are two versions of client-side encryption available in the client libraries:

  • Version 2 uses Galois/Counter Mode (GCM) mode with AES. The Blob Storage and Queue Storage SDKs support client-side encryption with v2.

  • Version 1 uses Cipher Block Chaining (CBC) mode with AES. The Blob Storage, Queue Storage, and Table Storage SDKs support client-side encryption with v1.

Delegate access with a shared access signature

A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage resources.

  • Account SAS provides account-level access. Blob Storage (including Data Lake Storage and DFS endpoints), Queue Storage, Table Storage, and Azure Files are supported. With an account SAS, you can:

    • Delegate access to service-level operations that aren’t currently available with a service-specific SAS, such as the Get/Set Service Properties and Get Service Stats operations
    • Delegate access to more than one service, for example Blob storage and Azure Files
    • Delegate access to write and delete operations for containers, queues, tables, and file shares, which aren’t available with an object-specific SAS
    • Specify an IP address or a range of IP addresses from which to accept requests
    • Specify the HTTP protocol
  • Service SAS delegates access to a resource in just one of the storage services.

  • User Delegation SAS is secured with Microsoft Entra credentials instead of an account key. Blob Storage and Data Lake Storage are supported.

  • Stored Access Policy provides an additional level of control over service-level shared access signatures (SASs) on the server side. You can use a stored access policy to change the start time, expiry time, or permissions for a signature. You can also use a stored access policy to revoke a signature after it has been issued. Blob Containers, File Shares, Queues and Tables are supported.

Azure Blob storage lifecycle

Data sets have unique lifecycles. Early in the lifecycle, people access some data often. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.

Access tiers

Data storage limits are set at the account level and not per access tier. You can choose to use all of your limit in one tier or across all three tiers.

  • Hot An online tier optimized for storing data that is accessed frequently.
  • Cool An online tier optimized for storing data that is infrequently accessed and stored for a minimum of 30 days.
  • Cold tier An online tier optimized for storing data that is infrequently accessed and stored for a minimum of 90 days. The cold tier has lower storage costs and higher access costs compared to the cool tier.
  • Archive An offline tier optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements, on the order of hours.
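
Setting a blob’s tier programmatically is a one-line call in the .NET SDK. A minimal sketch, assuming an authorized BlobClient from Azure.Storage.Blobs pointing at an existing block blob:

	// AccessTier lives in Azure.Storage.Blobs.Models;
	// move the blob to the cool tier (Hot, Cold, and Archive are other values).
	await blobClient.SetAccessTierAsync(AccessTier.Cool);
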
Manage the data lifecycle

Azure Blob Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.

With the lifecycle management policy, you can:

  • Transition blobs from cool to hot immediately when accessed, to optimize for performance.
  • Transition current versions of a blob, previous versions of a blob, or blob snapshots to a cooler storage tier if these objects aren’t accessed or modified for a period of time, to optimize for cost.
  • Delete current versions of a blob, previous versions of a blob, or blob snapshots at the end of their lifecycles.
  • Apply rules to an entire storage account, to select containers, or to a subset of blobs using name prefixes or blob index tags as filters.

Lifecycle policy

A lifecycle management policy is a collection of rules in a JSON document.

Rules

Each rule definition includes a filter set and an action set. At least one rule is required in a policy. You can define up to 100 rules in a policy.

| Parameter name | Parameter type | Notes | Required |
| --- | --- | --- | --- |
| name | String | A rule name can include up to 256 alphanumeric characters. Rule name is case-sensitive. It must be unique within a policy. | True |
| enabled | Boolean | An optional boolean to allow a rule to be temporarily disabled. Default value is true. | False |
| type | An enum value | The current valid type is Lifecycle. | True |
| definition | An object that defines the lifecycle rule | Each definition is made up of a filter set and an action set. | True |

{
  "rules": [
    {
      "enabled": true,
      "name": "sample-rule",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "version": {
            "delete": {
              "daysAfterCreationGreaterThan": 90
            }
          },
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90,
              "daysAfterLastTierChangeGreaterThan": 7
            },
            "delete": {
              "daysAfterModificationGreaterThan": 2555
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "sample-container/blob1"
          ]
        }
      }
    }
  ]
}

Rule filters

Filters limit rule actions to a subset of blobs within the storage account. If more than one filter is defined, a logical AND runs on all filters.

| Filter name | Type | Required |
| --- | --- | --- |
| blobTypes | An array of predefined enum values | Yes |
| prefixMatch | An array of strings for prefixes to be matched. Each rule can define up to 10 prefixes. A prefix must start with a container name. | No |
| blobIndexMatch | An array of dictionary values consisting of blob index tag key and value conditions to be matched. Each rule can define up to 10 blob index tag conditions. | No |

Rule actions

Actions are applied to the filtered blobs when the run condition is met.

| Action | Current version | Snapshot | Previous versions |
| --- | --- | --- | --- |
| tierToCool | Supported for blockBlob | Supported | Supported |
| tierToCold | Supported for blockBlob | Supported | Supported |
| enableAutoTierToHotFromCool | Supported for blockBlob | Not supported | Not supported |
| tierToArchive | Supported for blockBlob | Supported | Supported |
| delete | Supported for blockBlob and appendBlob | Supported | Supported |

! If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, action delete is cheaper than action tierToArchive, and action tierToArchive is cheaper than action tierToCool.

| Action run condition | Condition value | Description |
| --- | --- | --- |
| daysAfterModificationGreaterThan | Integer value indicating the age in days | The condition for base blob actions |
| daysAfterCreationGreaterThan | Integer value indicating the age in days | The condition for blob snapshot actions |
| daysAfterLastAccessTimeGreaterThan | Integer value indicating the age in days | The condition for a current version of a blob when access tracking is enabled |
| daysAfterLastTierChangeGreaterThan | Integer value indicating the age in days after last blob tier change time | The minimum duration in days that a rehydrated blob is kept in hot, cool, or cold tiers before being returned to the archive tier. This condition applies only to tierToArchive actions. |

Implement Blob storage lifecycle policies

You can add, edit, or remove a policy by using any of the following methods:

  • Azure portal
  • Azure PowerShell
  • Azure CLI
  • REST APIs

policy.json

{
  "rules": [
    {
      "enabled": true,
      "name": "move-to-cool",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "sample-container/log"
          ]
        }
      }
    }
  ]
}

az storage account management-policy create --account-name "<storageAccount>" --policy @policy.json --resource-group "<resourceGroupName>"

Rehydrate blob data from the archive tier

While a blob is in the archive access tier, it’s considered to be offline and can’t be read or modified. In order to read or modify data in an archived blob, you must first rehydrate the blob to an online tier, either the hot or cool tier.

  • Copy an archived blob to an online tier: You can rehydrate an archived blob by copying it to a new blob in the hot or cool tier with the Copy Blob or Copy Blob from URL operation. Microsoft recommends this option for most scenarios.

When you copy an archived blob to a new blob in an online tier, the source blob remains unmodified in the archive tier.

Rehydrating an archived blob by copying it to an online destination tier is supported within the same storage account only for service versions earlier than 2021-02-12. Beginning with service version 2021-02-12, you can rehydrate an archived blob by copying it to a different storage account, as long as the destination account is in the same region as the source account.

  • Change a blob’s access tier to an online tier: You can rehydrate an archived blob to hot or cool by changing its tier using the Set Blob Tier operation.

The second option for rehydrating a blob from the archive tier to an online tier is to change the blob’s tier by calling Set Blob Tier. Once a Set Blob Tier request is initiated, it can’t be canceled.

! Changing a blob’s tier doesn’t affect its last modified time. If there is a lifecycle management policy in effect for the storage account, then rehydrating a blob with Set Blob Tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier after rehydration because the last modified time is beyond the threshold set for the policy.

Rehydration priority

When you rehydrate a blob, you can set the priority for the rehydration operation via the optional x-ms-rehydrate-priority header on a Set Blob Tier or Copy Blob/Copy Blob From URL operation.

  • Standard priority: The rehydration request is processed in the order it was received and might take up to 15 hours.
  • High priority: The rehydration request is prioritized over standard priority requests and might complete in under one hour for objects under 10 GB in size.

To check the rehydration priority while the rehydration operation is underway, call Get Blob Properties to return the value of the x-ms-rehydrate-priority header. The rehydration priority property returns either Standard or High.
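
A minimal sketch of the Set Blob Tier approach with the .NET SDK, assuming an authorized BlobClient that points at an archived blob:

	// Rehydrate the archived blob to the hot tier with high priority.
	await blobClient.SetAccessTierAsync(
		AccessTier.Hot,
		rehydratePriority: RehydratePriority.High);

	// While rehydration is pending, ArchiveStatus reports the destination tier.
	BlobProperties properties = await blobClient.GetPropertiesAsync();
	Console.WriteLine(properties.ArchiveStatus);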

SDK

Create a BlobServiceClient object

An authorized BlobServiceClient object allows your app to interact with resources at the storage account level. BlobServiceClient provides methods to retrieve and configure account properties, as well as list, create and delete containers within the storage account.

The following example shows how to create a BlobServiceClient object:

using Azure.Identity;
using Azure.Storage.Blobs;

public BlobServiceClient GetBlobServiceClient(string accountName)
{

	BlobServiceClient client = new (
		new Uri($"https://{accountName}.blob.core.windows.net"),
		new DefaultAzureCredential());
		
	return client;
	
}
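
As a usage sketch, the returned client can then list the containers in the account (GetBlobContainersAsync pages through all containers):

	BlobServiceClient blobServiceClient = GetBlobServiceClient("<storageAccountName>");

	await foreach (BlobContainerItem containerItem in blobServiceClient.GetBlobContainersAsync())
	{
		Console.WriteLine(containerItem.Name);
	}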

Create a BlobContainerClient object

You can use a BlobServiceClient object to create a new BlobContainerClient object. A BlobContainerClient object allows you to interact with a specific container resource.


public BlobContainerClient GetBlobContainerClient(
		BlobServiceClient blobServiceClient,
		string containerName)
{

	BlobContainerClient client = blobServiceClient.GetBlobContainerClient(containerName);
	return client;
}

If your work is narrowly scoped to a single container, you might choose to create a BlobContainerClient object directly without using BlobServiceClient.


public BlobContainerClient GetBlobContainerClient(
		string accountName, 
		string containerName,
		BlobClientOptions clientOptions)
{

	BlobContainerClient client = new (
		new Uri($"https://{accountName}.blob.core.windows.net/{containerName}"),
		new DefaultAzureCredential(),
		clientOptions);
		
	return client;
}

Create a BlobClient object

To interact with a specific blob resource, create a BlobClient object from a service client or container client. A BlobClient object allows you to interact with a specific blob resource.


public BlobClient GetBlobClient(
	BlobServiceClient blobServiceClient, 
	string containerName,
	string blobName)
{

	BlobClient client =  
		blobServiceClient.GetBlobContainerClient(containerName).GetBlobClient(blobName);
	
	return client;
	
}
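
As a usage sketch (the container, blob, and file names are assumptions), the returned client can upload a local file and download it back:

	BlobClient blobClient = GetBlobClient(blobServiceClient, "sample-container", "sample-blob.txt");

	await blobClient.UploadAsync("local-file.txt", overwrite: true);
	await blobClient.DownloadToAsync("downloaded-copy.txt");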

Manage container properties and metadata

Blob containers support system properties and user-defined metadata, in addition to the data they contain.

  • System properties: System properties exist on each Blob storage resource. Some of them can be read or set, while others are read-only.
  • User-defined metadata: User-defined metadata consists of one or more name-value pairs that you specify for a Blob storage resource.

Retrieve and set container properties

The following code example fetches a container’s system properties and writes some property values to a console window.


private static async Task ReadContainerPropertiesAsync( BlobContainerClient container)
{
	
	try
	{
		var properties = await container.GetPropertiesAsync();
		Console.WriteLine($"Properties for container {container.Uri}");
        Console.WriteLine($"Public access level: {properties.Value.PublicAccess}");
        Console.WriteLine($"Last modified time in UTC: {properties.Value.LastModified}");	
	} 
	catch(RequestFailedException e)
	{
		// ...
	}
 
}

The following code example sets metadata on a container.


public static async Task  AddContainerMetadataAsync( BlobContainerClient container)
{
	
	try
	{
		IDictionary<string, string> metadata = new Dictionary<string, string>();
		
		metadata.Add("docType", "textDocuments");
		metadata.Add("category", "guidance");
		
		await container.SetMetadataAsync(metadata);
		
	}
	catch(RequestFailedException e)
	{
		// ...
	}

}

The following code example retrieves metadata from a container


public static async Task ReadContainerMetadataAsync(BlobContainerClient container)
{
	try
	{
		var properties = await container.GetPropertiesAsync();
		foreach(var metadataItem in properties.Value.Metadata)
		{
		   Console.WriteLine($"\tKey: {metadataItem.Key}");
           Console.WriteLine($"\tValue: {metadataItem.Value}");
		}
	}
	catch(RequestFailedException e)
	{
		// ...
	}
}

Metadata header format

Metadata headers are name/value pairs. The format for the header is:

x-ms-meta-name:string-value

Retrieving properties and metadata

The GET/HEAD operation retrieves metadata headers for the specified container or blob. These operations return headers only; they don’t return a response body. The URI syntax for retrieving metadata headers on a container is as follows:

GET/HEAD https://myaccount.blob.core.windows.net/mycontainer?restype=container  
GET/HEAD https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=metadata

Setting Metadata Headers

The PUT operation sets metadata headers on the specified container or blob, overwriting any existing metadata on the resource. Calling PUT without any headers on the request clears all existing metadata on the resource.

PUT https://myaccount.blob.core.windows.net/mycontainer?comp=metadata&restype=container

The URI syntax for setting metadata headers on a blob is as follows:

PUT https://myaccount.blob.core.windows.net/mycontainer/myblob?comp=metadata

Standard HTTP properties for containers and blobs

Containers and blobs also support certain standard HTTP properties. Properties and metadata are both represented as standard HTTP headers; the difference between them is in the naming of the headers. Metadata headers are named with the header prefix x-ms-meta- and a custom name. Property headers use standard HTTP header names, as specified in the Header Field Definitions section 14 of the HTTP/1.1 protocol specification.

The standard HTTP headers supported on containers include:

  • ETag
  • Last-Modified

The standard HTTP headers supported on blobs include:

  • ETag
  • Last-Modified
  • Content-Length
  • Content-Type
  • Content-MD5
  • Content-Encoding
  • Content-Language
  • Cache-Control
  • Origin
  • Range

Change Feed support in Azure Blob Storage

The purpose of the change feed is to provide transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account.

The change feed is stored under the $blobchangefeed container.

NuGet Packages

dotnet add package Azure.Storage.Blobs
dotnet add package Azure.Storage.Blobs.ChangeFeed

Notable Classes

BlobServiceClient, BlobChangeFeedClient, BlobChangeEvent

Components

  • Lease Container component serves as a storage mechanism to manage state across multiple change feed consumers.
  • Delegate component is the code within the client application that implements business logic for each batch of changes.
  • Compute Instance is a client application instance that listens for changes from the change feed.
  • Monitored Container component is monitored for any insert or update operations.
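
A minimal sketch of a consumer that reads every event currently in the change feed, assuming an authorized BlobServiceClient and the Azure.Storage.Blobs.ChangeFeed package:

	// GetChangeFeedClient is an extension method from Azure.Storage.Blobs.ChangeFeed.
	BlobChangeFeedClient changeFeedClient = blobServiceClient.GetChangeFeedClient();

	await foreach (BlobChangeFeedEvent changeFeedEvent in changeFeedClient.GetChangesAsync())
	{
		Console.WriteLine($"{changeFeedEvent.EventType} on {changeFeedEvent.Subject} at {changeFeedEvent.EventTime}");
	}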

Azure Cosmos DB

Notable Classes

CosmosClient, Database, DatabaseResponse, ContainerResponse, Container, ContainerProperties, ItemResponse, QueryDefinition, FeedIterator

Nuget Packages

	dotnet add package Microsoft.Azure.Cosmos

Azure Cosmos DB is a fully managed NoSQL database designed to provide low latency, elastic scalability of throughput, well-defined semantics for data consistency, and high availability.

With Azure Cosmos DB, you can add or remove the regions associated with your account at any time. Your application doesn’t need to be paused or redeployed to add or remove a region.

Key benefits of global distribution

With its novel multi-master replication protocol, every region supports both writes and reads. The multi-master capability also enables:

  • Unlimited elastic write and read scalability.
  • 99.999% read and write availability all around the world.
  • Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile.

Your application can perform near real-time reads and writes against all the regions you chose for your database. Azure Cosmos DB internally handles the data replication between regions with consistency level guarantees of the level you selected.

Azure Cosmos DB offers 99.999% read and write availability for multi-region databases.

Explore the resource hierarchy

The Azure Cosmos DB account is the fundamental unit of global distribution and high availability. Your Azure Cosmos DB account contains a unique Domain Name System (DNS) name, and you can manage an account by using the Azure portal, the Azure CLI, or different language-specific SDKs. For globally distributing your data and throughput across multiple regions, you can add and remove Azure regions to your account at any time.

Elements in an Azure Cosmos DB account

An Azure Cosmos DB container is the fundamental unit of scalability. Currently, you can create a maximum of 50 Azure Cosmos DB accounts under an Azure subscription (this limit can be increased via a support request).

Azure Cosmos DB databases

You can create one or multiple Azure Cosmos DB databases under your account. A database is analogous to a namespace. A database is the unit of management for a set of Azure Cosmos DB containers.

Azure Cosmos DB containers

An Azure Cosmos DB container is where data is stored. Unlike most relational databases, which scale up with larger sizes of virtual machines, Azure Cosmos DB scales out.

Data is stored on one or more servers called partitions. Partitions are added as you increase throughput or as storage grows. This relationship provides a virtually unlimited amount of throughput and storage for a container.

When you create a container, you need to supply a partition key. The partition key is a property that you select from your items to help Azure Cosmos DB distribute the data efficiently across partitions. You can also use the partition key in the WHERE clause in queries for efficient data retrieval.

The underlying storage mechanism for data in Azure Cosmos DB is called a physical partition. Physical partitions can have a throughput amount up to 10,000 Request Units per second, and they can store up to 50 GB of data. Azure Cosmos DB abstracts this partitioning concept with a logical partition, which can store up to 20 GB of data.

When you create a container, you configure throughput in one of the following modes:

  • Dedicated throughput: The throughput on a container is exclusively reserved for that container. There are two types of dedicated throughput: standard and autoscale.

  • Shared throughput: Throughput is specified at the database level and then shared with up to 25 containers within the database. Sharing of throughput excludes containers that are configured with their own dedicated throughput.
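
A minimal sketch of creating a container with dedicated autoscale throughput using the .NET SDK (the database, container, and partition key names are assumptions):

	CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
	Database database = await client.CreateDatabaseIfNotExistsAsync("appdb");

	// Dedicated autoscale throughput that scales up to 4,000 RU/s.
	Container container = await database.CreateContainerIfNotExistsAsync(
		new ContainerProperties(id: "orders", partitionKeyPath: "/customerId"),
		ThroughputProperties.CreateAutoscaleThroughput(autoscaleMaxThroughput: 4000));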

Azure Cosmos DB items

  • Azure Cosmos DB entity : Azure Cosmos DB item
  • API for NoSQL : Item
  • API for Cassandra : Row
  • API for MongoDB: Document
  • API for Gremlin: Node or edge
  • API for table: Item

Consistency levels

Azure Cosmos DB offers five well-defined levels. From strongest to weakest, the levels are:

  • Strong reads are guaranteed to return the most recent committed version of an item.
  • Bounded staleness reads might lag behind the writes by at most “K” versions of an item or “T” time interval. This consistency level is used to manage the lag of data between any two regions based on an updated version of an item or the time intervals between read and write.
  • Session within a single client session, reads are guaranteed to honor the read-your-writes, and write-follows-reads guarantees.
  • Consistent Prefix different regions might see different versions or updates of the item, but they never see an out-of-order write. This level ensures that updates made as a batch within a transaction are returned consistent with the transaction in which they were committed.
  • Eventual clients in other regions might see older versions of an item; eventually the item reflects the most recent update. This consistency level is used when no ordering guarantee is required.

Each of the consistency models can be used for specific real-world scenarios. Each provides precise availability and performance tradeoffs backed by comprehensive SLAs.

Configure the default consistency level

You can configure the default consistency level on your Azure Cosmos DB account at any time. The default consistency level configured on your account applies to all Azure Cosmos DB databases and containers under that account. All reads and queries issued against a container or a database use the specified consistency level by default.

Read consistency applies to a single read operation scoped within a logical partition. The read operation can be issued by a remote client or a stored procedure.
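
A client can also request a consistency level that is the same as or weaker than the account default. A minimal sketch with the .NET SDK:

	// All operations issued by this client use Session consistency.
	CosmosClient client = new CosmosClient(
		"<account-endpoint>",
		"<account-key>",
		new CosmosClientOptions { ConsistencyLevel = ConsistencyLevel.Session });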

Guarantees associated with consistency levels

Azure Cosmos DB guarantees that 100 percent of read requests meet the consistency guarantee for the consistency level chosen.

Consistency levels in Detail

Strong consistency

Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests concurrently. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.

Bounded staleness consistency

In bounded staleness consistency, the lag of data between any two regions is always less than a specified amount. The amount can be K versions ( that is, updates ) of an item or by T time intervals, whichever is reached first. In other words when you choose bounded staleness, the maximum “staleness” of the data in any region can be configured in two ways:

  • The number of versions (K) of the item
  • The time interval (T) reads might lag behind the writes

Bounded staleness is beneficial primarily to single-region write accounts with two or more regions. If the data lag in a region (determined per physical partition) exceeds the configured staleness value, writes for that partition are throttled until staleness is back within the configured upper bound.

For a single-region account, Bounded Staleness provides the same write consistency guarantees as Session and Eventual consistency. With Bounded Staleness, data is replicated to a local majority (three replicas in a four-replica set) in the single region.

Session Consistency

In session consistency, within a single client session, reads are guaranteed to honor the read-your-writes and write-follows-reads guarantees. This guarantee assumes a single “writer” session, or sharing the session token for multiple writers.

Like all consistency levels weaker than Strong, writes are replicated to a minimum of three replicas (in a four-replica set) in the local region, with asynchronous replication to all other regions.

Consistent Prefix

In consistent prefix, updates made as single document writes see eventual consistency. Updates made as a batch within a transaction are returned consistent with the transaction in which they were committed. Write operations within a transaction of multiple documents are always visible together.

Assume two write operations are performed on documents Doc 1 and Doc 2, within transactions T1 and T2. When a client does a read in any replica, the user sees either “Doc 1 v1 and Doc 2 v1” or “Doc 1 v2 and Doc 2 v2,” but never “Doc 1 v1 and Doc 2 v2” or “Doc 1 v2 and Doc 2 v1” for the same read or query operation.

Eventual consistency

In eventual consistency, there’s no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge.

Eventual consistency is the weakest form of consistency because a client might read values that are older than the ones it read before. Eventual consistency is ideal where the application doesn't require any ordering guarantees. Examples include counts of retweets, likes, or nonthreaded comments.

Supported APIs

Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, PostgreSQL, Cassandra, Gremlin, and Table.

Considerations when choosing an API

API for NoSQL is native to Azure Cosmos DB.

API for MongoDB, PostgreSQL, Cassandra, Gremlin, and Table implement the wire protocol of open-source database engines. These APIs are best suited if the following conditions are true:

  • If you have existing MongoDB, PostgreSQL, Cassandra, or Gremlin applications
  • If you don’t want to rewrite your entire data access layer
  • If you want to use the open-source developer ecosystem, client-drivers, expertise, and resources for your database
API for NoSQL

The Azure Cosmos DB API for NoSQL stores data in document format. It offers the best end-to-end experience as we have full control over the interface, service, and the SDK client libraries. Any new feature that is rolled out to Azure Cosmos DB is first available on API for NoSQL accounts. NoSQL accounts provide support for querying items using the Structured Query Language (SQL) syntax.

API for MongoDB

The Azure Cosmos DB API for MongoDB stores data in a document structure, via BSON format. It’s compatible with MongoDB wire protocol; however, it doesn’t use any native MongoDB related code. The API for MongoDB is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DB features.

API for PostgreSQL

Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the Citus open source superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.

API for Apache Cassandra

The Azure Cosmos DB API for Cassandra stores data in column-oriented schema. Apache Cassandra offers a highly distributed, horizontally scaling approach to storing large volumes of data while offering a flexible approach to a column-oriented schema. API for Cassandra in Azure Cosmos DB aligns with this philosophy to approaching distributed NoSQL databases. This API for Cassandra is wire protocol compatible with native Apache Cassandra.

API for Apache Gremlin

The Azure Cosmos DB API for Gremlin allows users to make graph queries and stores data as edges and vertices.

Use the API for Gremlin for the following scenarios:

  • Involving dynamic data
  • Involving data with complex relations
  • Involving data that is too complex to be modeled with relational databases
  • If you want to use the existing Gremlin ecosystem and skills
API for Table

The Azure Cosmos DB API for Table stores data in key/value format. If you're currently using Azure Table storage, you might see some limitations in latency, scaling, throughput, global distribution, and index management, as well as low query performance. API for Table overcomes these limitations, and the recommendation is to migrate your app if you want to use the benefits of Azure Cosmos DB. API for Table only supports OLTP scenarios.

Request units

With Azure Cosmos DB, you pay for the throughput you provision and the storage you consume on an hourly basis. Throughput must be provisioned to ensure that sufficient system resources are always available for your Azure Cosmos database.

The cost of all database operations is normalized in Azure Cosmos DB and expressed by request units (or RUs, for short). A request unit represents the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.

Whether the database operation is a write, point read, or query, costs are measured in RUs.

The type of Azure Cosmos DB account you’re using determines the way consumed RUs get charged. There are two modes for account creation:

  • Provisioned throughput mode: In this mode, you provision the number of RUs for your application on a per-second basis in increments of 100 RUs per second. To scale the provisioned throughput for your application, you can increase or decrease the number of RUs at any time in increments or decrements of 100 RUs. You can make your changes either programmatically or by using the Azure portal. You can provision throughput at container and database granularity level.

  • Serverless mode: In this mode, you don’t have to provision any throughput when creating resources in your Azure Cosmos DB account. At the end of your billing period, you get billed for the number of request units consumed by your database operations.
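
In either mode, every response reports the RUs the operation consumed, which helps when sizing provisioned throughput. A minimal sketch with the .NET SDK v3 (assuming a Container instance and the SalesOrder type used in the SDK examples below):

ItemResponse<SalesOrder> response = await container.ReadItemAsync<SalesOrder>(
    "[id]", new PartitionKey("[partition-key]"));

// RequestCharge reports the request units consumed by this operation.
Console.WriteLine($"Point read cost: {response.RequestCharge} RUs");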

SDK

CosmosClient is thread-safe. The recommended practice is to maintain a single instance of CosmosClient per application lifetime.

	CosmosClient client = new CosmosClient(endpoint, key);

Creating a database throws an exception if a database with the same name already exists:

	Database database1 = await client.CreateDatabaseAsync(
		id: "adventureworks-1"
		);

Create a database if not exists

	Database database2 = await client.CreateDatabaseIfNotExistsAsync(
		id: "adventureworks-2"
		);

Read a database by ID

	Database database = this.cosmosClient.GetDatabase(database_id);
	DatabaseResponse response = await database.ReadAsync();

Delete a database

	await database.DeleteAsync();

Create a container

The Database.CreateContainerIfNotExistsAsync method checks if a container exists, and if it doesn’t, it creates it. Only the container id is used to verify if there’s an existing container.


	ContainerResponse simpleContainer = await database.CreateContainerIfNotExistsAsync(
		id: containerId,
		partitionKeyPath: partitionKey,
		throughput: 400
		);

Get a container by ID


	Container container = database.GetContainer(containerId);
	ContainerProperties containerProperties = await container.ReadContainerAsync();

Delete a container


	await database.GetContainer(containerId).DeleteContainerAsync();

Create an item

Use the Container.CreateItemAsync method to create an item. The method requires a JSON serializable object that must contain an id property, and a partitionKey.


	ItemResponse<SalesOrder> response = await container.CreateItemAsync(salesOrder, new PartitionKey(salesOrder.AccountNumber));

Read an item

Use the Container.ReadItemAsync method to read an item. The method requires the type to deserialize the item to, along with the item's id and its partition key.

string id = "[id]";
string accountNumber = "[partition-key]";
ItemResponse<SalesOrder> response = await container.ReadItemAsync<SalesOrder>(id, new PartitionKey(accountNumber));

Query an item

The Container.GetItemQueryIterator method creates a query for items under a container in an Azure Cosmos database using a SQL statement with parameterized values. It returns a FeedIterator.


QueryDefinition query = new QueryDefinition(
    "select * from sales s where s.AccountNumber = @AccountInput ")
    .WithParameter("@AccountInput", "Account1");

FeedIterator<SalesOrder> resultSet = container.GetItemQueryIterator<SalesOrder>(
    query,
    requestOptions: new QueryRequestOptions()
    {
        PartitionKey = new PartitionKey("Account1"),
        MaxItemCount = 1
    });
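
The iterator doesn't fetch anything until it's drained; a typical loop reads the results page by page:

while (resultSet.HasMoreResults)
{
    // Each ReadNextAsync call fetches one page of results.
    FeedResponse<SalesOrder> page = await resultSet.ReadNextAsync();
    foreach (SalesOrder order in page)
    {
        Console.WriteLine($"Account: {order.AccountNumber}");
    }
}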

Stored Procedures

Azure Cosmos DB provides language-integrated, transactional execution of JavaScript that lets you write stored procedures, triggers, and user-defined functions (UDFs).

Stored procedures can create, update, read, query, and delete items inside an Azure Cosmos container. Stored procedures are registered per collection, and can operate on any document or attachment present in that collection.


	var helloWorldStoredProc = {
		id: "helloWorld",
		serverScript: function () {

			var context = getContext();
			var response = context.getResponse();

			response.setBody("Hello, World");
		}
	}

The context object provides access to all operations that can be performed in Azure Cosmos DB, and access to the request and response objects.

Create an item using stored procedure

	var createDocumentStoredProc = {

		id: "createMyDocument",
		body: function createMyDocument(documentToCreate) {

			var context = getContext();
			var collection = context.getCollection();
			var accepted = collection.createDocument(collection.getSelfLink(),
				documentToCreate,
				function (err, documentCreated) {

					if (err) throw new Error('Error ' + err.message);
					context.getResponse().setBody(documentCreated.id);
				});
			if (!accepted) return;
		}
	}
Arrays as input parameters for stored procedures

When you define a stored procedure in the Azure portal, input parameters are always sent as strings to the stored procedure. Even if you pass an array of strings as an input, the array is converted to a string and sent to the stored procedure. To work around this, you can define a function within your stored procedure to parse the string as an array.


	function sample(arr){
	
		if(typeof arr === "string") arr = JSON.parse(arr);
		
		arr.forEach(function(a){
			console.log(a);
		});
	}
Bounded execution

All Azure Cosmos DB operations must complete within a limited amount of time. Stored procedures have a limited amount of time to run on the server. All collection functions return a Boolean value that represents whether that operation completes or not.

Transactions within stored procedures

You can implement transactions on items within a container by using a stored procedure. JavaScript functions can implement a continuation-based model to batch or resume execution. The continuation value can be any value of your choice and your applications can then use this value to resume a transaction from a new starting point.

How to run stored procedures

! For partitioned containers, when you run a stored procedure, you must provide a partition key value in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value aren’t visible to the stored procedure. This principle also applies to triggers.

The following example shows how to register a stored procedure by using the .NET SDK v3:


	string storedProcedureId = "spCreateToDoItems";
	StoredProcedureResponse storedProcedureResponse = 
		await client.GetContainer("database", "container")
			.Scripts
			.CreateStoredProcedureAsync(
				new StoredProcedureProperties 
					{
						Id = storedProcedureId,
						Body = File.ReadAllText($@"..\js\{storedProcedureId}.js")
					}
				);

The following code shows how to call a stored procedure by using the .NET SDK v3:


	dynamic[] newItems = new dynamic []
	{
		new
		{
			category = "Personal", 
			name = "Groceries", 
			description = "Pick up strawberries", 
			isComplete = false
		},
		new 
		{
			category = "Personal",
			name = "Doctor",
			description = "Make appointment for check up",
			isComplete = false
		}
	};
	
	var result = await client
		.GetContainer("database", "container")
		.Scripts
		.ExecuteStoredProcedureAsync<string>("spCreateToDoItems", new PartitionKey("Personal"), new[] { newItems });
	

Triggers and user-defined functions

Azure Cosmos DB supports pretriggers and post-triggers. Triggers aren't automatically executed; they must be specified for each database operation where you want them to execute. After you define a trigger, you register it by using the Azure Cosmos DB SDKs.

Pretriggers

The following example shows how a pretrigger is used to validate the properties of an Azure Cosmos item that is being created. It adds a timestamp property to a newly added item if it doesn’t contain one.


	function validateToDoItemTimestamp(){
	
		var context = getContext();
		var request = context.getRequest();
		
		var itemToCreate = request.getBody();
		
		if(!("timestamp" in itemToCreate)){
			
			var ts = new Date();
			itemToCreate["timestamp"] = ts.getTime();
		
		}
		
		request.setBody(itemToCreate);
	
	}

When you register a trigger, you can specify the operations that it can run with. This trigger should be created with a TriggerOperation value of TriggerOperation.Create, which means using the trigger in a replace operation isn't permitted.

How to run pre-triggers

The following code shows how to register a pre-trigger using the .NET SDK v3:


	await client.GetContainer("database", "container")
		.Scripts
		.CreateTriggerAsync( 
			new TriggerProperties 
				{
					Id = "trgPreValidateToDoItemTimestamp",
					Body = File.ReadAllText(@"..\js\trgPreValidateToDoItemTimestamp.js"),
					TriggerOperation = TriggerOperation.Create,
					TriggerType = TriggerType.Pre
				}
			);
	

The following code shows how to call a pre-trigger using the .NET SDK v3:


	dynamic newItem = new
	{
		category = "Personal",
		name = "Groceries",
		description = "Pick up strawberries",
		isComplete = false
	};

	await client.GetContainer("database", "container")
		.CreateItemAsync(newItem, null, 
			new ItemRequestOptions 
				{ 
					PreTriggers = new List<string> 
						{ "trgPreValidateToDoItemTimestamp" } 
				}
			);
Post-triggers

The following example shows a post-trigger.



	function updateMetadata() {
		var context = getContext();
		var container = context.getCollection();
		var response = context.getResponse();

		// item that was created
		var createdItem = response.getBody();

		// query for metadata document
		var filterQuery = 'SELECT * FROM root r WHERE r.id = "_metadata"';
		var accept = container.queryDocuments(container.getSelfLink(), filterQuery,
			updateMetadataCallback);
		if (!accept) throw "Unable to update metadata, abort";

		function updateMetadataCallback(err, items, responseOptions) {
			if (err) throw new Error("Error" + err.message);
			if (items.length != 1) throw 'Unable to find metadata document';

			var metadataItem = items[0];

			// update metadata
			metadataItem.createdItems += 1;
			metadataItem.createdNames += " " + createdItem.id;
			var accept = container.replaceDocument(metadataItem._self,
				metadataItem, function (err, itemReplaced) {
					if (err) throw "Unable to update metadata, abort";
				});
			if (!accept) throw "Unable to update metadata, abort";
			return;
		}
	}

One thing that is important to note is the transactional execution of triggers in Azure Cosmos DB. The post-trigger runs as part of the same transaction as the underlying item itself. An exception during the post-trigger execution fails the whole transaction: anything committed is rolled back and an exception is returned.

How to run post-triggers

The following code shows how to register a post-trigger using the .NET SDK v3:



	await client.GetContainer("database", "container")
		.Scripts
		.CreateTriggerAsync(new TriggerProperties
				{
					Id = "trgPostUpdateMetadata",
					Body = File.ReadAllText(@"..\js\trgPostUpdateMetadata.js"),
					TriggerOperation = TriggerOperation.Create,
					TriggerType = TriggerType.Post
				}
			);

The following code shows how to call a post-trigger using the .NET SDK v3:


	var newItem = { 
		name: "artist_profile_1023",
		artist: "The Band",
		albums: ["Hellujah", "Rotators", "Spinning Top"]
	};

	await client.GetContainer("database", "container")
		.CreateItemAsync(newItem, null, 
			new ItemRequestOptions 
				{ 
					PostTriggers = new List<string> { "trgPostUpdateMetadata" } 
				}
			);
User-defined functions

The following sample creates a UDF to calculate income tax for various income brackets. The user-defined function would then be used inside a query. For the purposes of this example, assume there's a container called "Incomes" whose documents contain an income property:


function tax(income) {

    if (income == undefined)
        throw 'no input';

    if (income < 1000)
        return income * 0.1;
    else if (income < 10000)
        return income * 0.2;
    else
        return income * 0.4;
}
	
How to work with user-defined functions

The following code shows how to register a user-defined function using the .NET SDK v3:


	await client.GetContainer("database", "container")
		.Scripts
		.CreateUserDefinedFunctionAsync(
			new UserDefinedFunctionProperties
				{
					Id = "Tax",
					Body = File.ReadAllText(@"..\js\Tax.js")
				}
			);

The following code shows how to call a user-defined function using the .NET SDK v3:



	var iterator = client.GetContainer("database", "container")
		.GetItemQueryIterator<dynamic>("SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000");

	while (iterator.HasMoreResults)
	{
		var results = await iterator.ReadNextAsync();
		foreach (var result in results)
		{
			// iterate over results
		}
	}

Change Feed

Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur. Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos DB container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified.

You can’t filter the change feed for a specific type of operation. Currently change feed doesn’t log delete operations. As a workaround, you can add a soft marker on the items that are being deleted.

Reading Azure Cosmos DB change feed

You can work with the Azure Cosmos DB change feed using either a push model or a pull model. With a push model, the change feed processor pushes work to a client that has business logic for processing this work. However, the complexity in checking for work and storing state for the last processed work is handled within the change feed processor.

With a pull model, the client has to pull the work from the server. In this case, the client has business logic for processing work and also stores state for the last processed work. The client handles load balancing across multiple clients processing work in parallel, and handling errors.

! It’s recommended to use the push model because you won’t need to worry about polling the change feed for future changes or storing state for the last processed change, among other benefits.

However, there are some scenarios where you might want the extra low-level control of the pull model; a minimal sketch follows the list below. The extra low-level control includes:

  • Reading changes from a particular partition key
  • Controlling the pace at which your client receives changes for processing
  • Doing a one-time read of the existing data in the change feed (for example, to do a data migration)
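
A minimal sketch of the pull model, assuming a recent Microsoft.Azure.Cosmos version where the change feed iterator APIs (GetChangeFeedIterator, ChangeFeedMode.LatestVersion) are available, and a ToDoItem type like the one used later in this section:

FeedIterator<ToDoItem> iterator = container.GetChangeFeedIterator<ToDoItem>(
    ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion);

while (iterator.HasMoreResults)
{
    FeedResponse<ToDoItem> response = await iterator.ReadNextAsync();

    // HttpStatusCode lives in System.Net.
    if (response.StatusCode == HttpStatusCode.NotModified)
    {
        // No new changes; wait before polling again.
        await Task.Delay(TimeSpan.FromSeconds(5));
    }
    else
    {
        foreach (ToDoItem item in response)
        {
            Console.WriteLine($"Change detected for item {item.id}");
        }
    }
}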
Reading change feed with a push model

There are two ways you can read from the change feed with a push model:

  • Azure Functions with Azure Cosmos DB triggers
  • The change feed processor library

Azure Functions uses the change feed processor behind the scenes, so they’re both similar ways to read the change feed.

Change feed processor

The change feed processor is part of the Azure Cosmos DB .NET V3 and Java V4 SDKs. It simplifies the process of reading the change feed and distributes the event processing across multiple consumers effectively.

There are four main components of implementing the change feed processor:

1- The monitored container: The monitored container has the data from which the change feed is generated. Any inserts and updates to the monitored container are reflected in the change feed of the container.

2- The lease container : The lease container acts as a state storage and coordinates processing the change feed across multiple workers. The lease container can be stored in the same account as the monitored container or in a separate account.

3- The compute instance: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it might be represented by a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine.

4- The delegate: The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.

When implementing the change feed processor, the point of entry is always the monitored container; from a Container instance, you call GetChangeFeedProcessorBuilder:

	
	private static async Task<ChangeFeedProcessor> StartChangeFeedProcessorAsync(
		CosmosClient cosmosClient,
		IConfiguration configuration)
	{
		string databaseName = configuration["SourceDatabaseName"];
		string sourceContainerName = configuration["SourceContainerName"];
		string leaseContainerName = configuration["LeasesContainerName"];

		Container leaseContainer = cosmosClient.GetContainer(databaseName, leaseContainerName);
		ChangeFeedProcessor changeFeedProcessor = cosmosClient.GetContainer(databaseName, sourceContainerName)
			.GetChangeFeedProcessorBuilder<ToDoItem>(processorName: "changeFeedSample", onChangesDelegate: HandleChangesAsync)
			.WithInstanceName("consoleHost")
			.WithLeaseContainer(leaseContainer)
			.Build();

		Console.WriteLine("Starting Change Feed Processor...");
		await changeFeedProcessor.StartAsync();
		Console.WriteLine("Change Feed Processor started.");
		return changeFeedProcessor;
	}

Following is an example of a delegate:


static async Task HandleChangesAsync(
	ChangeFeedProcessorContext context,
	IReadOnlyCollection<ToDoItem> changes, 
	CancellationToken cancellationToken)
{
	
	Console.WriteLine($"Started handling changes for lease {context.LeaseToken}...");
    Console.WriteLine($"Change Feed request consumed {context.Headers.RequestCharge} RU.");
    // SessionToken if needed to enforce Session consistency on another client instance
    Console.WriteLine($"SessionToken ${context.Headers.Session}");
	
	
    // We may want to track any operation's Diagnostics that took longer than some threshold
    if (context.Diagnostics.GetClientElapsedTime() > TimeSpan.FromSeconds(1))
    {
        Console.WriteLine($"Change Feed request took longer than expected. Diagnostics:" + context.Diagnostics.ToString());
    }

    foreach (ToDoItem item in changes)
    {
        Console.WriteLine($"Detected operation for item with id {item.id}, created at {item.creationTime}.");
        // Simulate some asynchronous operation
        await Task.Delay(10);
    }

    Console.WriteLine("Finished handling changes.");
	
}

The normal life cycle of a host instance is:

1- Read the change feed.

2- If there are no changes, sleep for a predefined amount of time (customizable with WithPollInterval in the Builder, shown in the sketch after this list) and go to #1.

3- If there are changes, send them to the delegate.

4- When the delegate finishes processing the changes successfully, update the lease store with the latest processed point in time and go to #1.
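
For example, the sleep in step 2 can be tuned on the builder. A minimal sketch reusing the cosmosClient, container names, lease container, and delegate from the earlier example:

ChangeFeedProcessor processor = cosmosClient.GetContainer(databaseName, sourceContainerName)
    .GetChangeFeedProcessorBuilder<ToDoItem>("changeFeedSample", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    .WithPollInterval(TimeSpan.FromSeconds(5)) // the sleep time in step 2
    .Build();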

Azure Container Registry (ACR)

Azure Container Registry (ACR) is a managed registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your container images and related artifacts.

Use cases

Pull images from an Azure container registry to various deployment targets:

  • Scalable orchestration systems that manage containerized applications across clusters of hosts, including Kubernetes, DC/OS, and Docker Swarm.
  • Azure services that support building and running applications at scale, including Azure Kubernetes Service (AKS), App Service, Batch, and Service Fabric.

Developers can also push to a container registry as part of a container development workflow. For example, target a container registry from continuous integration and delivery tools such as Azure Pipelines or Jenkins.

Azure Container Registry service tiers

  • Basic A cost-optimized entry point for developers learning about Azure Container Registry. Basic registries have the same programmatic capabilities as Standard and Premium; however, the included storage and image throughput are most appropriate for lower usage scenarios.
  • Standard Standard registries offer the same capabilities as Basic, with increased included storage and image throughput. Standard registries should satisfy the needs of most production scenarios.
  • Premium Premium registries provide the highest amount of included storage and concurrent operations, enabling high-volume scenarios. In addition to higher image throughput, Premium adds features such as: geo-replication for managing a single registry across multiple regions, content trust for image tag signing, and private link with private endpoints to restrict access to the registry.

Supported images and artifacts

When images are grouped in a repository, each image is a read-only snapshot of a Docker-compatible container. ACR can include both Windows and Linux images. In addition to Docker container images, Azure Container Registry stores related content formats such as Helm charts and images built to the Open Container Initiative (OCI) Image Format Specification.

Automated image builds

Use Azure Container Registry Tasks (ACR Tasks) to streamline building, testing, pushing, and deploying images in Azure.

Storage capabilities

  • Encryption-at-rest All container images and other artifacts in your registry are encrypted at rest.
  • Regional storage Azure Container Registry stores data in the region where the registry is created, to help customers meet data residency and compliance requirements.
  • Geo-replication For scenarios requiring high-availability assurance, consider using the geo-replication feature of Premium registries.
  • Zone redundancy A feature of the Premium service tier, zone redundancy uses Azure availability zones to replicate your registry to a minimum of three separate zones in each enabled region.
  • Scalable storage Azure Container Registry allows you to create as many repositories, images, layers, or tags as you need, up to the registry storage limit.

Build and manage containers with tasks

Azure Container Registry (ACR) tasks are a suite of features that:

  • Provide cloud-based container image building for platforms like Linux, Windows, and Advanced RISC Machines (Arm).
  • Extend the early parts of an application development cycle to the cloud with on-demand container image builds.
  • Enable automated builds triggered by source code updates, updates to a container’s base image, or timers.

Dockerfile

A Dockerfile is a script that contains a series of instructions that are used to build a Docker image. Dockerfiles typically include the following information:

  • The base or parent image we use to create the new image
  • Commands to update the base OS and install other software
  • Build artifacts to include, such as a developed application
  • Services to expose, such as storage and network configuration
  • Commands to run when the container is launched

# Use the .NET 6 runtime as a base image
FROM mcr.microsoft.com/dotnet/runtime:6.0

# Set the working directory to /app
WORKDIR /app

# Copy the contents of the published app to the container's /app directory 
COPY bin/Release/net6.0/publish/ . 

# Expose port 80 to the outside world 
EXPOSE 80

# Set the command to run when the container starts 
CMD ["dotnet", "MyApp.dll"]

Azure Container Instances

Azure Container Instances (ACI) is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. Here are some of the benefits:

  • Fast startup ACI can start containers in Azure in seconds, without the need to create and manage a virtual machine (VM).
  • Container access ACI enables exposing your container groups directly to the internet with an IP address and a fully qualified domain name (FQDN).
  • Hypervisor-level security Your application is isolated as completely as it would be in a VM.
  • Customer data The ACI service stores the minimum customer data required to ensure your container groups run as expected.
  • Persistent storage Mount Azure Files shares directly to a container to retrieve and persist state.
  • Linux and Windows Schedule both Windows and Linux containers using the same API.

For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, Azure Kubernetes Service (AKS) is recommended.

Container groups

The top-level resource in Azure Container Instances is the container group. A container group is a collection of containers that get scheduled on the same host machine. Containers in a container group share a lifecycle, resources, local network, and storage volumes. It's a concept similar to a pod in Kubernetes.

! Multi-container groups currently support only Linux containers. For Windows containers, Azure Container Instances only supports deployment of a single instance.

Deployment

There are two common ways to deploy a multi-container group: use a Resource Manager template or a YAML file. A Resource Manager template is recommended when you need to deploy more Azure service resources when you deploy the container instances. Due to the YAML format’s more concise nature, a YAML file is recommended when your deployment includes only container instances.

Resource Allocation

Azure Container Instances allocates resources such as CPUs, memory, and optionally GPUs (preview) to a container group by adding the resource requests of the instances in the group. If you create a container group with two instances, each requesting one CPU, then the container group is allocated two CPUs.

Networking

Container groups share an IP address and a port namespace on that IP address. To enable external clients to reach a container within the group, you must expose the port on the IP address and from the container. Because containers within the group share a port namespace, port mapping isn't supported. Containers within a group can reach each other via localhost on the ports that they expose, even if those ports aren't exposed externally on the group's IP address.

Storage

You can specify external volumes to mount within a container group. You can map those volumes into specific paths within the individual containers in a group. Supported volumes include:

  • Azure file share
  • Secret
  • Empty directory
  • Cloned git repo

Common scenarios

Multi-container groups are useful in cases where you want to divide a single functional task into a few container images. An image might be delivered by different teams and have separate resource requirements.

Example usage could include:

  • A container serving a web application and a container pulling the latest content from source control
  • An application container and a logging container
  • An application container and a monitoring container
  • A front-end container and a back-end container.

Container restart policy

When you create a container group in Azure Container Instances, you can specify one of three restart policy settings.

1- Always Containers in the container group are always restarted. This is the default setting applied when no restart policy is specified at container creation.

2- Never Containers in the container group are never restarted. The containers run at most once.

3- OnFailure Containers in the container group are restarted only when the process executed in the container fails. The containers are run at least once.


	az container create --resource-group "MyResourceGroup" --name "myContainerName" --image "myContainerImage" --restart-policy OnFailure

Azure Container Instances starts the container, and then stops it when its application, or script, exits. When Azure Container Instances stops a container whose restart policy is Never or OnFailure, the container's status is set to Terminated.

Environment variables

Setting environment variables in your container instances allows you to provide dynamic configuration of the application or script run by the container.

If you need to pass secrets as environment variables, Azure Container Instances supports secure values for both Windows and Linux containers.

az container create \
    --resource-group myResourceGroup \
    --name mycontainer2 \
    --image mcr.microsoft.com/azuredocs/aci-wordcount:latest \
    --restart-policy OnFailure \
    --environment-variables 'NumWords'='5' 'MinLength'='8'

Secure values

Objects with secure values are intended to hold sensitive information like passwords or keys for your application. Using secure values for environment variables is both safer and more flexible than including them in your container's image.

Environment variables with secure values aren’t visible in your container’s properties. Their values can be accessed only from within the container. For example, container properties viewed in the Azure portal or Azure CLI display only a secure variable’s name, not its value.

Set a secure environment variable by specifying the secureValue property instead of the regular value for the variable’s type. The two variables defined in the following YAML demonstrate the two variable types.

apiVersion: 2018-10-01
location: eastus
name: securetest
properties:
  containers:
  - name: mycontainer
    properties:
      environmentVariables:
        - name: 'NOTSECRET'
          value: 'my-exposed-value'
        - name: 'SECRET'
          secureValue: 'my-secret-value'
      image: nginx
      ports: []
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  osType: Linux
  restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups

You would run the following command to deploy the container group with YAML:

az container create --resource-group myResourceGroup \
    --file secure-env.yaml

Mount an Azure file share in Azure Container Instances

By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from external storage.

Azure Container Instances can mount an Azure file share created with Azure Files. Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol. Using an Azure file share with Azure Container Instances provides file-sharing features similar to using an Azure file share with Azure virtual machines.

Limitations

  • You can only mount Azure Files shares to Linux containers
  • Azure file share volume mounts require the Linux container to run as root
  • Azure file share volume mounts are limited to CIFS support

Deploy container and mount volume

To mount an Azure file share as a volume in a container by using the Azure CLI, specify the share and volume mount point when you create the container with az container create.

az container create \
    --resource-group $ACI_PERS_RESOURCE_GROUP \
    --name hellofiles \
    --image mcr.microsoft.com/azuredocs/aci-hellofiles \
    --dns-name-label aci-demo \
    --ports 80 \
    --azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
    --azure-file-volume-account-key $STORAGE_KEY \
    --azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
    --azure-file-volume-mount-path /aci/logs/

The --dns-name-label value must be unique within the Azure region where you create the container instance. Update the value in the preceding command if you receive a DNS name label error message when you execute the command.

Deploy container and mount volume - YAML

You can also deploy a container group and mount a volume in a container with the Azure CLI and a YAML template. Deploying by YAML template is the preferred method when deploying container groups consisting of multiple containers.

The following YAML template defines a container group with one container created with the aci-hellofiles image. The container mounts the Azure file share acishare created previously as a volume. Following is an example YAML file.

apiVersion: '2019-12-01'
location: eastus
name: file-share-demo
properties:
  containers:
  - name: hellofiles
    properties:
      environmentVariables: []
      image: mcr.microsoft.com/azuredocs/aci-hellofiles
      ports:
      - port: 80
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - mountPath: /aci/logs/
        name: filesharevolume
  osType: Linux
  restartPolicy: Always
  ipAddress:
    type: Public
    ports:
      - port: 80
    dnsNameLabel: aci-demo
  volumes:
  - name: filesharevolume
    azureFile:
      shareName: acishare
      storageAccountName: <Storage account name>
      storageAccountKey: <Storage account key>
tags: {}
type: Microsoft.ContainerInstance/containerGroups

Mount multiple volumes

To mount multiple volumes in a container instance, you must deploy using an Azure Resource Manager template or a YAML file. To use a template or YAML file, provide the share details and define the volumes by populating the volumes array in the properties section of the template.

"volumes": [{
  "name": "myvolume1",
  "azureFile": {
    "shareName": "share1",
    "storageAccountName": "myStorageAccount",
    "storageAccountKey": "<storage-account-key>"
  }
},
{
  "name": "myvolume2",
  "azureFile": {
    "shareName": "share2",
    "storageAccountName": "myStorageAccount",
    "storageAccountKey": "<storage-account-key>"
  }
}]

Next, for each container in the container group in which you’d like to mount the volumes, populate the volumeMounts array in the properties section of the container definition. For example, this mounts the two volumes, myvolume1 and myvolume2, previously defined:

"volumeMounts": [{
  "name": "myvolume1",
  "mountPath": "/mnt/share1/"
},
{
  "name": "myvolume2",
  "mountPath": "/mnt/share2/"
}]

Azure Container Apps

Azure Container Apps enables you to run microservices and containerized applications on a serverless platform that runs on top of Azure Kubernetes Service. Common uses of Azure Container Apps include:

  • Deploying API endpoints
  • Hosting background processing applications
  • Handling event-driven processing
  • Running microservices

Applications built on Azure Container Apps can dynamically scale based on: HTTP traffic, event-driven processing, CPU or memory load, any KEDA-supported scaler.

  • KEDA: Kubernetes Event-Driven Autoscaling *

With Azure Container Apps, you can:

  • Run multiple container revisions and manage the container app's application lifecycle.
  • Autoscale your apps based on any KEDA-supported scale trigger. Most applications can scale to zero.
  • Enable HTTPS ingress without having to manage other Azure infrastructure.
  • Split traffic across multiple versions of an application for Blue/Green deployments and A/B testing scenarios.
  • Use internal ingress and service discovery for secure internal-only endpoints with built-in DNS-based service discovery.
  • Build microservices with Dapr and access its rich set of APIs.
  • Run containers from any registry, public or private, including Docker Hub and Azure Container Registry (ACR).
  • Use the Azure CLI extension, Azure portal, or ARM templates to manage your applications.
  • Provide an existing virtual network when creating an environment for your container apps.
  • Securely manage secrets directly in your application.
  • Monitor logs using Azure Log Analytics.

Azure Container Apps environments

Individual container apps are deployed to a single Container Apps environment, which acts as a secure boundary around groups of container apps. Container Apps in the same environment are deployed in the same virtual network and write logs to the same Log Analytics workspace. You can provide an existing virtual network when you create an environment.

Reasons to deploy container apps to the same environment include situations when you need to:

  • Manage related services
  • Deploy different applications to the same virtual network
  • Instrument Dapr applications that communicate via the Dapr service invocation API
  • Have applications share the same Dapr configuration
  • Have applications share the same log analytics workspace

Reasons to deploy container apps to different environments include situations when you want to ensure:

  • Two applications never share the same compute resources
  • Two Dapr applications can’t communicate via the Dapr service invocation API
  • Dapr: Distributed Application Runtime *

Microservices with Azure Container Apps

Microservice architectures allow you to independently develop, upgrade, version, and scale core areas of functionality in an overall system. Azure Container Apps provides the foundation for deploying microservices featuring:

  • Independent scaling, versioning, and upgrades
  • Service discovery
  • Native Dapr integration

Dapr integration

When you implement a system composed of microservices, function calls are spread across the network. To support the distributed nature of microservices, you need to account for failures, retries, and timeouts. While Container Apps features the building blocks for running microservices, use of Dapr provides an even richer microservices programming model. Dapr includes features like observability, pub/sub, and service-to-service invocation with mutual TLS, retries, and more.

Containers in Azure Container Apps

Azure Container Apps manages the details of Kubernetes and container orchestration for you. Containers in Azure Container Apps can use any runtime, programming language, or development stack of your choice.

Azure Container Apps supports any Linux-based x86-64 (linux/amd64) container image. There’s no required base container image, and if a container crashes it automatically restarts.

Configuration

The following code is an example of the containers array in the properties.template section of a container app resource template. The excerpt shows some of the available configuration options when setting up a container using Azure Resource Manager (ARM) templates. Changes to the template's ARM configuration section trigger a new container app revision.

"containers": [
  {
       "name": "main",
       "image": "[parameters('container_image')]",
    "env": [
      {
        "name": "HTTP_PORT",
        "value": "80"
      },
      {
        "name": "SECRET_VAL",
        "secretRef": "mysecret"
      }
    ],
    "resources": {
      "cpu": 0.5,
      "memory": "1Gi"
    },
    "volumeMounts": [
      {
        "mountPath": "/myfiles",
        "volumeName": "azure-files-volume"
      }
    ]
    "probes":[
        {
            "type":"liveness",
            "httpGet":{
            "path":"/health",
            "port":8080,
            "httpHeaders":[
                {
                    "name":"Custom-Header",
                    "value":"liveness probe"
                }]
            },
            "initialDelaySeconds":7,
            "periodSeconds":3
// file is truncated for brevity

Multiple containers

You can define multiple containers in a single container app to implement the sidecar pattern. The containers in a container app share hard disk and network resources and experience the same application lifecycle.

Examples of sidecar containers include:

  • An agent that reads logs from the primary app container on a shared volume and forwards them to a logging service.
  • A background process that refreshes a cache used by the primary app container in a shared volume.

! Running multiple containers in a single container app is an advanced use case. In most situations where you want to run multiple containers, such as when implementing a microservice architecture, deploy each service as a separate container app.

To run multiple containers in a container app, add more than one container in the containers array of the container app template.

Limitations

Azure Container Apps has the following limitations:

  • Privileged containers: Azure Container Apps can’t run privileged containers. If your program attempts to run a process that requires root access, the application inside the container experiences a runtime error.
  • Operating system: Linux-based (linux/amd64) container images are required.

Authentication and Authorization in Azure Container Apps

Azure Container Apps provides built-in authentication and authorization features to secure your external ingress-enabled container app with minimal or no code. The built-in authentication feature for Container Apps can save you time and effort by providing out-of-the-box authentication with federated identity providers, allowing you to focus on the rest of your application.

  • Azure Container Apps provides access to various built-in authentication providers.
  • The built-in auth features don’t require any particular language, SDK, security expertise, or even any code that you have to write.

This feature should only be used with HTTPS. Ensure allowInsecure is disabled on your container app’s ingress configuration. You can configure your container app for authentication with or without restricting access to your site content and APIs.

  • To restrict app access only to authenticated users, set its Restrict access setting to Require authentication.
  • To authenticate but not restrict access, set its Restrict access setting to Allow unauthenticated access.

Identity providers

Container Apps uses federated identity, in which a third-party identity provider manages the user identities and authentication flow for you. The following identity providers are available by default:

  • Microsoft Identity Platform /.auth/login/aad
  • Facebook /.auth/login/facebook
  • GitHub /.auth/login/github
  • Google /.auth/login/google
  • X /.auth/login/twitter
  • Any OpenID Connect provider /.auth/login/

When you use one of these providers, the sign-in endpoint is available for user authentication and authentication token validation from the provider. You can provide your users with any number of these provider options.

Feature architecture

The authentication and authorization middleware component is a feature of the platform that runs as a sidecar container on each replica in your application. When enabled, every incoming HTTP request passes through the security layer before being handled by your application.

The platform middleware handles several things for your app:

  • Authenticates users and clients with the specified identity providers
  • Manages the authenticated session
  • Injects identity information into HTTP request headers

The authentication and authorization module runs in a separate container, isolated from your application code. As the security container doesn’t run in-process, no direct integration with specific language frameworks is possible. However, relevant information your app needs is provided in request headers.
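
For example, an ASP.NET Core minimal API behind the middleware can read the injected headers. The header names below follow the convention used by App Service built-in authentication, which Container Apps shares, and should be treated as an assumption:

app.MapGet("/whoami", (HttpRequest request) =>
{
    // Injected by the auth middleware for authenticated requests.
    string? name = request.Headers["X-MS-CLIENT-PRINCIPAL-NAME"];
    string? provider = request.Headers["X-MS-CLIENT-PRINCIPAL-IDP"];
    return $"Hello {name ?? "anonymous"} (signed in via {provider ?? "none"})";
});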

Authentication flow

The authentication flow is the same for all providers, but differs depending on whether you want to sign in with the provider’s SDK:

  • Without provider SDK (server-directed flow or server flow): The application delegates federated sign-in to Container Apps. Delegation is typically the case with browser apps, which present the provider's sign-in page to the user.
  • With provider SDK (client-directed flow or client flow): The application signs users in to the provider manually and then submits the authentication token to Container Apps for validation. This approach is typical for browser-less apps that don’t present the provider’s sign-in page to the user. An example is a native mobile app that signs users in using the provider’s SDK.

Manage revisions and secrets in Azure Container Apps

Azure Container Apps implements container app versioning by creating revisions. A revision is an immutable snapshot of a container app version. You can use revisions to release a new version of your app, or quickly revert to an earlier version of your app. New revisions are created when you update your application with revision-scope changes. You can also update your container app based on a specific revision.

You can control which revisions are active, and the external traffic that is routed to each active revision. Revision names are used to identify a revision, and in the revision’s URL. You can customize the revision name by setting the revision suffix.

By default, Container Apps creates a unique revision name with a suffix consisting of a semi-random string of alphanumeric characters. For example, for a container app named album-api, setting the revision suffix name to 1st-revision would create a revision with the name album-api--1st-revision. You can set the revision suffix in the ARM template, through the Azure CLI az containerapp create and az containerapp update commands, or when creating a revision via the Azure portal.

Updating your container app

With the az containerapp update command you can modify environment variables, compute resources, scale parameters, and deploy a different image. If your container app update includes revision-scope changes, a new revision is generated.

	az containerapp update --name <APPLICATION_NAME> --resource-group <RESOURCE_GROUP_NAME> --image <IMAGE_NAME>

You can list all revisions associated with your container app with the az containerapp revision list command.

az containerapp revision list --name <APPLICATION_NAME> --resource-group <RESOURCE_GROUP_NAME> 
  -o table

Manage secrets in Azure Container Apps

Azure Container Apps allows your application to securely store sensitive configuration values. Once secrets are defined at the application level, secured values are available to container apps. Specifically, you can reference secured values inside scale rules.

  • Secrets are scoped to an application, outside of any specific revision of an application.
  • Adding, removing, or changing secrets doesn’t generate new revisions.
  • Each application revision can reference one or more secrets.
  • Multiple revisions can reference the same secrets.

An updated or deleted secret doesn’t automatically affect existing revisions in your app. When a secret is updated or deleted, you can respond to changes in one of two ways:

1- Deploy a new revision.

2- Restart an existing revision.

Before you delete a secret, deploy a new revision that no longer references the old secret. Then deactivate all revisions that reference the secret.

! Container Apps doesn’t support Azure Key Vault integration. Instead, enable managed identity in the container app and use the Key Vault SDK in your app to access secrets.

Defining secrets

When you create a container app, secrets are defined using the --secrets parameter.

  • The parameter accepts a space-delimited set of name/value pairs.
  • Each pair is delimited by an equals sign (=).

In the following example, a connection string to a queue storage account is declared in the --secrets parameter. The value for queue-connection-string comes from an environment variable named $CONNECTION_STRING.

az containerapp create \
  --resource-group "my-resource-group" \
  --name queuereader \
  --environment "my-environment-name" \
  --image demos/queuereader:v1 \
  --secrets "queue-connection-string=$CONNECTION_STRING"

After declaring secrets at the application level, you can reference them in environment variables when you create a new revision in your container app. When an environment variable references a secret, its value is populated with the value defined in the secret. To reference a secret in an environment variable in the Azure CLI, set its value to secretref:, followed by the name of the secret.

The following example shows an application that declares a connection string at the application level. This connection is referenced in a container environment variable.

az containerapp create \
  --resource-group "my-resource-group" \
  --name myQueueApp \
  --environment "my-environment-name" \
  --image demos/myQueueApp:v1 \
  --secrets "queue-connection-string=$CONNECTIONSTRING" \
  --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"

Dapr integration with Azure Container Apps

The Distributed Application Runtime (Dapr) is a set of incrementally adoptable features that simplify the authoring of distributed, microservice-based applications. Dapr provides capabilities for enabling application intercommunication through messaging via pub/sub or reliable and secure service-to-service calls.

Dapr is an open source, Cloud Native Computing Foundation (CNCF) project. The CNCF is part of the Linux Foundation and provides support, oversight, and direction for fast-growing, cloud native projects. As an alternative to deploying and managing the Dapr OSS project yourself, the Container Apps platform:

  • Provides a managed and supported Dapr integration
  • Handles Dapr version upgrades seamlessly
  • Exposes a simplified Dapr interaction model to increase developer productivity

Dapr APIs

  • Service-to-service invocation: Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption.
  • State management: Provides state management capabilities for transactions and CRUD operations.
  • Pub/sub: Allows publisher and subscriber container apps to intercommunicate via an intermediary message broker.
  • Bindings: Trigger your applications based on events
  • Actors: Dapr actors are message-driven, single-threaded, units of work designed to quickly scale. For example, in burst-heavy workload situations.
  • Observability: Send tracing information to an Application Insights backend.
  • Secrets: Access secrets from your application code or reference secure values in your Dapr components.
  • Configuration: Retrieve and subscribe to application configuration items for supported configuration stores.

Dapr enablement

You can configure Dapr using various arguments and annotations based on the runtime context. Azure Container Apps provides three channels through which you can configure Dapr:

  • Container Apps CLI
  • Infrastructure as Code (IaC) templates, as in Bicep or Azure Resource Manager (ARM) templates
  • The Azure portal

Dapr components and scopes

Dapr uses a modular design where functionality is delivered as a component. The use of Dapr components is optional and dictated exclusively by the needs of your application.

Dapr components in container apps are environment-level resources that:

  • Can provide a pluggable abstraction model for connecting to supporting external services.
  • Can be shared across container apps or scoped to specific container apps.
  • Can use Dapr secrets to securely retrieve configuration metadata.

By default, all Dapr-enabled container apps within the same environment load the full set of deployed components. To ensure components are loaded at runtime by only the appropriate container apps, application scopes should be used.

Authentication and authorization

The Microsoft identity platform helps you build applications your users and customers can sign in to using their Microsoft identities or social accounts, and provides authorized access to your own APIs or Microsoft APIs like Microsoft Graph.

There are several components that make up the Microsoft identity platform:

  • OAuth 2.0 and OpenID Connect standard-compliant authentication service enabling developers to authenticate several identity types, including:
    • Work or school accounts, provisioned through Microsoft Entra ID
    • Personal Microsoft accounts, like Skype, Xbox, and Outlook.com
    • Social or local accounts, by using Azure Active Directory B2C
    • Social or local customer accounts, by using Microsoft Entra External ID
  • Open-source libraries Microsoft Authentication Libraries (MSAL) and support for other standards-compliant libraries.
  • Microsoft identity platform endpoint Works with the Microsoft Authentication Libraries (MSAL) or any other standards-compliant library. It implements human-readable scopes, in accordance with industry standards.
  • Application management portal A registration and configuration experience in the Azure portal, along with the other Azure management capabilities.
  • Application configuration API and PowerShell Programmatic configuration of your applications through the Microsoft Graph API and PowerShell so you can automate your DevOps tasks.

For developers, the Microsoft identity platform offers integration of modern innovations in the identity and security space like passwordless authentication, step-up authentication, and Conditional Access. You don’t need to implement such functionality yourself: applications integrated with the Microsoft identity platform natively take advantage of such innovations.

Service Principals

To delegate identity and access management to Microsoft Entra ID, an application must be registered with a Microsoft Entra tenant. Registering your application with Microsoft Entra ID is what allows it to integrate with the platform. When you register an app in the Azure portal, you choose whether it is:

  • Single tenant: only accessible in your tenant
  • Multi-tenant: accessible in other tenants

If you register an application in the portal, an application object (the globally unique instance of the app) and a service principal object are automatically created in your home tenant. You also have a globally unique ID for your app (the app or client ID). In the portal, you can then add secrets or certificates and scopes to make your app work, customize the branding of your app in the sign-in dialog, and more.

! You can also create service principal objects in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, and other tools.

Application Object

A Microsoft Entra application is scoped to its one and only application object. The application object resides in the Microsoft Entra tenant where the application was registered (known as the application’s “home” tenant). An application object is used as a template or blueprint to create one or more service principal objects. A service principal is created in every tenant where the application is used. Similar to a class in object-oriented programming, the application object has some static properties that are applied to all the created service principals (or application instances).

The application object describes three aspects of an application:

  • How the service can issue tokens in order to access the application.
  • Resources that the application might need to access.
  • The actions that the application can take.

The Microsoft Graph Application entity defines the schema for an application object’s properties.

Service principal object

To access resources secured by a Microsoft Entra tenant, the entity that is requesting access must be represented by a security principal. This is true for both users (user principal) and applications (service principal).

The security principal defines the access policy and permissions for the user/application in the Microsoft Entra tenant. This enables core features such as authentication of the user/application during sign-in, and authorization during resource access. There are three types of service principal:

  • Application: This type of service principal is the local representation, or application instance, of a global application object in a single tenant or directory. A service principal is created in each tenant where the application is used, and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.

  • Managed Identity: This type of service principal is used to represent a managed identity. Managed identities provide an identity for applications to use when connecting to resources that support Microsoft Entra authentication. When managed identity is enabled, a service principal representing that managed identity is created in your tenant. Service principals representing managed identities can be granted access and permissions, but can’t be updated or modified directly.

  • Legacy: This type of service principal represents a legacy app, which is an app created before app registrations were introduced or an app created through legacy experiences. A legacy service principal can have:

    • credentials
    • service principal names
    • reply URLs
    • and other properties that an authorized user can edit, but it doesn’t have an associated app registration.

Relationship between application objects and service principals

The application object is the global representation of your application for use across all tenants, and the service principal is the local representation for use in a specific tenant. The application object serves as the template from which common and default properties are derived for use in creating corresponding service principal objects.

An application object has:

  • A one-to-one relationship with the software application, and
  • A one-to-many relationship with its corresponding service principal objects.

A service principal must be created in each tenant where the application is used to establish an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant application has only one service principal (in its home tenant), created and consented for use during application registration. A multitenant application also has a service principal created in each tenant where a user from that tenant consented to its use.

Applications that integrate with the Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed.

The Microsoft identity platform implements the OAuth 2.0 authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or application ID URI.

Azure App Service provides built-in authentication and authorization support. It uses federated identity, in which a third-party identity provider manages the user identities and authentication flow.

Supported identity providers: Microsoft Entra, Facebook, Google, X, any OpenID Connect provider, GitHub, Apple.

  • The OAuth 2.0 On-Behalf-Of flow (OBO) is used when an application invokes a service or web API, which in turn needs to call another service or web API. The idea is to propagate the delegated user identity and permissions through the request chain.
  • The OAuth 2.0 authorization code grant can be used in apps that are installed on a device to gain access to protected resources, such as web APIs.
  • The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service (see the sketch after this list).

Delegated permissions

Delegated permissions are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests, and the app can function as the signed-in user when making calls to Microsoft Graph.

Application Insights

  • COHORT - See engaged users based on their usage of the application
  • IMPACT - See how load times of the application affect conversion rates for various parts of the application
  • FUNNELS - See how users progress through multiple stages of the application
  • RETENTION - Compare the number of users who return to your application

Preaggregated standard metrics are not affected by telemetry sampling and provide accurate real-time data, which makes them suitable for dashboarding and alerting.

Azure Event Hubs

Azure Event Hubs is a native data-streaming service in the cloud that can stream millions of events per second, with low latency, from any source to any destination. Event Hubs is compatible with Apache Kafka. It enables you to run existing Kafka workloads without any code change.

Key Components

  • Event Hub namespace - This is a container for multiple Event Hubs
  • Event Publisher - This sends data to the Event Hub (producer)
  • Event Retention - Basic: 1 day; Standard: 7 days; Premium and Dedicated: up to 90 days
  • Event Consumer - These consume the events from Event Hub (consumer)
  • Consumer Group - This is a logical grouping of consumers that read data from the event hub

Throughput

Throughput capacity of the event hub is controlled by the number of throughput units you assign. Throughput units are prepurchased and billed per hour.

  • Ingress (input) - 1 MB per second or 1,000 events per second per throughput unit
  • Egress (output) - 2 MB per second or 4,096 events per second per throughput unit

NuGet Packages

dotnet add package Azure.Messaging.EventHubs
dotnet add package Azure.Messaging.EventHubs.Processor

Notable Classes

EventHubProducerClient, EventDataBatch, EventData, EventHubConsumerClient, PartitionEvent, EventProcessorClient

Important Notes

  • Events are not deleted after reading; the consumer needs to keep track of which events have been read. Events are removed after the retention period. Event Hubs is not persistent storage

  • Multiple partitions increase throughput

  • The number of partitions is specified when you create an event hub. It must be between one and the maximum partition count allowed for each pricing tier

  • EventProcessorClient uses a storage account (blob container) to checkpoint the position of reads; see the sketch after this list

  • Capture feature - Not supported in the Basic pricing tier; requires at least the Standard tier. Avro, Parquet, and Delta Lake are the supported data formats. Captures to a storage account, with time-based or size-based limits available

  • Azure Service Bus comparison - Azure Service Bus is a fully managed enterprise message broker used for messages, with queues and topics. Azure Event Hubs is used for receiving events: a data-streaming service that can stream millions of events per second, from any source to any destination

Azure Event Grid (MQTT)

Azure Event Grid is a highly scalable, fully managed publish-subscribe (pub/sub) message distribution service that offers flexible message consumption patterns using the MQTT and HTTP protocols.

It can be used with Azure services as publishers and, for example, Azure Functions as subscribers.

Key Components

  • System Topic - Events from Azure Services
  • Custom Topic - Here you can publish your own Application-based events
  • Subscribers - Multiple subscribers can subscribe to a topic

Event Grid Schema

[
    {
        "topic": "string",
        "subject": "string",
        "id": "string",
        "eventType": "string",
        "eventTime": "string",
        "data": { "object-unique-to-each-publisher" },
        "dataVersion": "string",
        "metadataVersion": "string"
    }
]

Event sources send events to Azure Event Grid in an array, which can have several event objects. When posting events to an Event Grid topic, the array can have a total size of up to 1 MB.

NuGet Package

dotnet add package Azure.Messaging.EventGrid

Notable Classes

EventGridEvent, SubscriptionValidationEventData, EventGridPublisherClient

Important Notes

  • Endpoint validation - For HTTP-based subscribers, on first subscription Event Grid sends a validation code, and you respond with that validation code (handshake). Once Event Grid receives the code, the subscription starts; see the sketch after this list
  • To send events to a custom topic, you need to create the topic first. The topic endpoint and an access key are used to send events to the topic

API Management Service

With Azure API Management, you get an API Gateway. All requests to the APIs can flow through the API Gateway.

https://azure.microsoft.com/en-us/pricing/details/api-management/

  • Built-in security: validates API keys or JWT tokens
  • Caches responses
  • Enforces usage quotas and rate limits

To enable access only from the API Management to a web app instance:

Networking > Public Network Access > Enabled from select virtual networks and IP Addresses >

  • Add Rule > Action: Allow > Enter the API Management Public IP Address
  • Add Rule > Action: Deny > 0.0.0.0/0

Policies in Azure API Management

API publishers can change API behavior through configuration using policies. Policies are a collection of statements that run sequentially on the request or response of an API.

  • inbound - Modify the request as it arrives at the API Management service
  • backend - Modify the request before it is forwarded to the backend web application
  • outbound - Modify the response before it is returned to the caller
  • on-error - Modify how errors are handled

Policies can be defined for All Operations or per operation.

<policies>
    <inbound>
    <!-- statements to be applied to the request go here -->
    </inbound>
    <backend>
    <!-- statements to be applied before the request is forwarded to 
        the backend service go here -->
    </backend>
    <outbound>
    <!-- statements to be applied to the response go here -->
    </outbound>
    <on-error>
    <!-- statements to be applied if there's an error condition go here -->
    </on-error>
</policies>

Example Policies

IP Filter

<inbound>
    <base />
    <ip-filter action="forbid">
        <address>127.0.0.1</address>
    </ip-filter>
</inbound>

Rewrite URL

<inbound>
    <base />
    <set-variable name="id" value="@(context.Request.Url.Query.GetValueOrDefault("id"))" />
    <rewrite-uri template="@{return "/api/Course/"+ context.Variables.GetValueOrDefault<string>("id");}" />
</inbound>

Return Response

<outbound>
    <base />
    <choose>
        <when condition="@(context.Response.StatusCode == 200)">
            <return-response>
                <set-status code="200" reason="OK" />
                <set-header name="Response-reason" exists-action="override">
                    <value>"Returned course list"</value>
                </set-header>
                <set-body>@{ 
                string text = context.Response.Body.As<string>(preserveContent: true);                      
                return text; 
                }</set-body>
            </return-response>
        </when>
    </choose>
</outbound>

Cache (internal cache example)

Note: the cache duration is in seconds.

<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none" must-revalidate="true" caching-type="internal" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <cache-store duration="60" />
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Monitoring

  • Application Insights allows you to create web tests that simulate user interactions with your application and then set up alerts based on the results of these tests.
  • Azure Monitor Resource Health alerts are used for infrastructure monitoring
  • Azure Service Health provides information about Azure service issues and planned maintenance
  • Azure Advisor provides best practice recommendations
  • Live Metrics provides real-time observation of your application’s activity, allowing for immediate detection of and response to performance issues.

Azure CLI Commands

  • Log out: az logout
  • Log in: az login --tenant "<tenantId>" --scope "https://management.core.windows.net//.default"
  • Show current subscription: az account show
  • List all subscriptions: az account list
  • Switch to another subscription: az account set --subscription "<name>" (use the name field from the list above)

Create an App Configuration store

  • az group create --name <resourceGroupName> --location <location>

  • az appconfig create --location <location> --name <name> --resource-group <resourceGroupName>

    (MissingSubscriptionRegistration) The subscription is not registered to use namespace ‘Microsoft.AppConfiguration’.

    When you get an error like this about any service:

    1. Go to the portal
    2. Find the subscription
    3. Go to details
    4. Find Resource Providers and search for the missing provider
    5. Select it, click Register, and retry

  • az appconfig kv set --name <name> --key TestApp:Settings:TextAlign --value center

Remove a resource group

az group delete --name MyResourceGroup --yes --no-wait
az group show --name MyResourceGroup