Deploying a Key Vault-based TDE protector for Azure SQL

Azure SQL now supports setting up Transparent Data Encryption (TDE) while bringing your own encryption keys. This is easy to set up in the Azure Portal, but I wanted to try setting it up in an automated manner, preferably leveraging ARM templates as much as possible.

This turned out to be a bit more complex than I expected, but totally doable. Here are the necessary steps:

The ARM templates for this post can be found on GitHub.

Creating the Key

When using the BYOK (bring your own key) feature, you need to store your encryption key in an Azure Key Vault. So let’s first create the key:

  • In the Azure Portal, locate your Key Vault
  • Click on the Keys icon
  • Click the + Add button
  • Select the Generate option, the name, and other required properties
  • Click the Create button to complete the process

Creating encryption key

You could do the same in a script with an Azure CLI command like the following:

az keyvault key create --vault-name $vaultName --name $keyName --size 2048 --protection software

Once you’ve created the key, make sure to grab the identifier of its current version. You can do this in the portal, or by using the command line:

az keyvault key show --vault-name $vaultName --name $keyName --query key.kid

The URI returned will be something similar to https://<vault_name>.vault.azure.net/keys/<key_name>/<key_version>. We’re interested in the last segment.
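If you’re scripting this, the version is just the last path segment of the returned key identifier. Here’s a small sketch of that extraction (the vault, key, and version values below are placeholders):

```python
def key_version_from_kid(kid: str) -> str:
    """Return the last path segment (the key version) of a Key Vault key identifier."""
    return kid.rstrip("/").split("/")[-1]

# Placeholder identifier in the https://<vault>.vault.azure.net/keys/<key>/<version> shape:
kid = "https://myvault.vault.azure.net/keys/mykey/0123456789abcdef"
print(key_version_from_kid(kid))  # -> 0123456789abcdef
```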

Deploying the Server

The next step is to use an ARM template to deploy the Azure SQL Server resource. At this point, we are not actually creating the TDE protector, as we need to do something else first.

Here’s the core of the template to create the server:

{
    "name": "[parameters('sqlServerName')]",
    "type": "Microsoft.Sql/servers",
    "location": "[resourceGroup().location]",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [],
    "tags": {
        "displayName": "SQL Logical Server"
    },
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {
        "administratorLogin": "[parameters('sqlServerAdminLogin')]",
        "administratorLoginPassword": "[parameters('sqlServerAdminLoginPassword')]"
    }
}

The interesting bit here is the identity property. When we set it to SystemAssigned, Azure will go to the Azure Active Directory Tenant associated with the subscription and create a new Service Principal named RN_<serverName>.

This principal will be set up with X509 certificate credentials for authentication. I’m unsure at this point whether it’s possible to create this Service Principal manually, which would simplify things somewhat.

Once the server is created, we’ll see this identity reflected in the resource properties:

SQL Service Principal

While we’re at it, let’s create a new database on the server, and enable TDE. The latter is done by creating a nested resource of type transparentDataEncryption:

{
    "name": "[parameters('sqlDbName')]",
    "type": "databases",
    "location": "[resourceGroup().location]",
    "apiVersion": "2014-04-01-preview",
    "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
    ],
    "tags": {
        "displayName": "SQL DB"
    },
    "properties": {
        "collation": "[parameters('sqlDbCollation')]",
        "edition": "[parameters('sqlDbEdition')]",
        "maxSizeBytes": "1073741824",
        "requestedServiceObjectiveName": "[parameters('sqlDbRequestedServiceObjectiveName')]"
    },
    "resources": [
        {
            "comments": "Transparent Data Encryption",
            "name": "current",
            "type": "transparentDataEncryption",
            "apiVersion": "2014-04-01-preview",
            "properties": {
                "status": "Enabled"
            },
            "dependsOn": [
                "[parameters('sqlDbName')]"
            ]
        }
    ]
}

At this point, TDE will be enabled on the new Database using the default Server Managed encryption key.

Granting Key Permissions

Once the new Azure SQL Server has an identity in Azure Active Directory, we need to grant permissions to that entity to fetch our encryption key from Key Vault. We can use a command like the following to grant it permissions to get, wrap, and unwrap our encryption key:

objectId=$(az sql server show --resource-group $rg --name $sqlserver --query identity.principalId --output tsv)
az keyvault set-policy --name $vaultName --object-id $objectId --key-permissions get wrapKey unwrapKey

Creating the Protector

Now we can go back to ARM and create the TDE protector. For this, we need to create two separate resources:

Server Key

First, let’s create a new server key from the Key Vault. All we need for this is the name of the server, the key vault, and the key name and version. From the last three, we need to compute a couple of things:

{
    "variables": {
        "serverKeyName": "[concat(parameters('keyVaultName'), '_', parameters('keyName'), '_', parameters('keyVersion'))]",
        "keyUri": "[concat('https://', parameters('keyVaultName'), '.vault.azure.net/keys/', parameters('keyName'), '/', parameters('keyVersion'))]"
    }
}

The serverKeyName variable contains the name of the server key we’ll create; this needs to be in the specific format vaultName_keyName_keyVersion. The keyUri variable is just to avoid having to add an extra parameter to the template with duplicated information.
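To make the naming convention concrete, here is a hypothetical sketch of the two computed values (the vault, key, and version names below are made up):

```python
def server_key_name(vault_name: str, key_name: str, key_version: str) -> str:
    # Azure SQL expects the server key name in the vaultName_keyName_keyVersion format.
    return f"{vault_name}_{key_name}_{key_version}"

def key_uri(vault_name: str, key_name: str, key_version: str) -> str:
    # The full Key Vault URI for this version of the key.
    return f"https://{vault_name}.vault.azure.net/keys/{key_name}/{key_version}"

print(server_key_name("myvault", "mykey", "abc123"))  # -> myvault_mykey_abc123
print(key_uri("myvault", "mykey", "abc123"))          # -> https://myvault.vault.azure.net/keys/mykey/abc123
```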

Now we can create a keys resource, like this:

{
    "name": "[concat(parameters('sqlServerName'), '/', variables('serverKeyName'))]",
    "type": "Microsoft.Sql/servers/keys",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
        "[parameters('sqlServerName')]"
    ],
    "properties": {
        "serverKeyType": "AzureKeyVault",
        "uri": "[variables('keyUri')]"
    }
}

All we really need to specify here is the key type (AzureKeyVault) and the URI of the key in Key Vault.

Rotating the Protector

Now we can rotate the protector by setting the “current” encryptionProtector resource on the server to the new server key.

{
    "name": "[concat(parameters('sqlServerName'), '/current')]",
    "type": "Microsoft.Sql/servers/encryptionProtector",
    "kind": "azurekeyvault",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
        "[resourceId('Microsoft.Sql/servers/keys', parameters('sqlServerName'), variables('serverKeyName'))]"
    ],
    "properties": {
        "serverKeyName": "[variables('serverKeyName')]",
        "serverKeyType": "AzureKeyVault"
    }
}

If we go into the portal and check the TDE settings for the SQL Server, we should see it is indeed configured to use Key Vault at this point:

TDE with Key Vault

Authenticating to SQL Azure with delegated tokens

In a previous post, I discussed how to authenticate to an Azure SQL database from a Web Application (running in Azure App Service) using an Azure Active Directory Service Principal. For this I used a certificate stored in Key Vault to authenticate the principal and obtain a token I could present to SQL.

You can find the updated code for this post on GitHub.

In this post, let’s expand this a bit further: I will add authentication support to the Web Application by federating through OpenId Connect to Azure Active Directory, and then delegate the user credentials all the way to the database.

Configure new users in SQL

Since I want to allow other Azure AD users to connect to the database, I need to grant them permissions to it.

To do this, I just follow the steps from my previous post, and use the T-SQL commands:

  • Create the user: CREATE USER [user@tenant] FROM EXTERNAL PROVIDER
  • Grant permissions: ALTER ROLE db_owner ADD MEMBER [user@tenant] (or a less privileged role, as appropriate)

Setting up the AD Application

So far, I’ve been using an Azure Active Directory Service Principal to authenticate to SQL. In order to be able to setup OpenId Connect on our Web App, I also need an Azure AD application to go with it.

So what I want is to configure my existing Service Principal so that I can use it for this purpose as well.

Note: using a separate AD application registration won’t work, because the on-behalf-of delegation will fail.

To do this, I’m going to open the Azure AD portal and find the Service Principal in the App Registrations section of the portal. Here, I open the Reply URLs page and add the application host (i.e. https://<app>.azurewebsites.net/):

Set Reply URL

Granting Sign-In permission

For users to be able to sign in, I need to give the application the right permissions. I do this by opening the Required permissions page, and clicking on the + Add button.

Here, I select Windows Azure Active Directory (Microsoft.Azure.ActiveDirectory) as the API, and check the “Sign in and read user profile” option under “Delegated Permissions”:

Grant Sign in permissions

Then click save and confirm all changes.

Granting Access to SQL

There is one other permission that I need to grant to the application, so that we can delegate credentials to SQL Server. Again, click on the + Add button, and select Azure SQL Database as the API. Then, check the “Access Azure SQL DB and Data Warehouse” option under the “Delegated Permissions” section:

Grant SQL Permissions

Then save all the changes.

As a final step, I locate the Application Id property in the application properties; we’ll need this value in a moment.

Enabling Authentication

Now I can enable federated authentication. I could do this by leveraging the Easy Auth feature in App Service, but since I’ll need to change the application code later on anyway, doing it in code is just as easy.

First, let’s add references to the following NuGet packages into the project:

  • Microsoft.Owin.Host.SystemWeb
  • Microsoft.Owin.Security.OpenIdConnect
  • Microsoft.Owin.Security.Cookies

Now I can write the startup code to enable authentication:

using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;
using System;
using System.Configuration;

namespace UsingTokenAuthApp
{
    public class AuthConfig
    {
        public static void RegisterAuth(IAppBuilder app)
        {
            var clientId = ConfigurationManager.AppSettings["APP_CLIENT_ID"];
            var tenant = ConfigurationManager.AppSettings["AAD_TENANT_ID"];

            var authority = $"https://login.microsoftonline.com/{tenant}";

            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
            app.UseCookieAuthentication(new CookieAuthenticationOptions());

            var options = new OpenIdConnectAuthenticationOptions
            {
                Authority = authority,
                ClientId = clientId,
            };
            app.UseOpenIdConnectAuthentication(options);
        }
    }
}

This is obviously a very minimal implementation, but enough for my needs.

Note: Configure the APP_CLIENT_ID value in appSettings with the Application Id value copied from the Service Principal properties in the previous step.

I also want to force every request to be authenticated, so I’ll add a global AuthorizeAttribute filter:

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new AuthorizeAttribute());
    }
}

Obtaining delegated Credentials

Now that the user can be authenticated by the application, I want to change how the access token is obtained.

I’m going to create a separate method for this:

public static async Task<String> GetDelegatedTokenAsync(ClaimsPrincipal user)
{
    var context = new AuthenticationContext(Authority);
    var certificate = FindCertificate(CertSubjectName);
    if (certificate == null)
        throw new InvalidOperationException("Could not load certificate");

    var id = (ClaimsIdentity)user.Identity;
    var bootstrap = id.BootstrapContext as BootstrapContext;
    var credential = new ClientAssertionCertificate($"http://{ServicePrincipalName}", certificate);

    var userAssertion = new UserAssertion(bootstrap.Token);
    var result = await context.AcquireTokenAsync(SqlResource, credential, userAssertion);
    return result.AccessToken;
}

The code is very similar to the original, except I obtain the JWT token of the current user and use that to create a UserAssertion object that I provide to AcquireTokenAsync.

Note: This is not production-ready code. Besides needing additional error handling, we also need to cache the access tokens by providing a proper TokenCache implementation. Some discussion on this can be found here.

By default, ClaimsIdentity.BootstrapContext will be null. To fix this, I need to modify the Web.config file to tell the identity framework to save the JWT token of the user during login:

<system.identityModel>
    <identityConfiguration saveBootstrapContext="true" />
</system.identityModel>

Now let’s put this code to use:

public async Task<ActionResult> UsingDelegation()
{
    var user = this.User as ClaimsPrincipal;
    var token = await ADAuthentication.GetDelegatedTokenAsync(user);

    String cs = ConfigurationManager.ConnectionStrings["SqlDb"].ConnectionString;
    var sqlUser = await GetSqlUserName(cs, token);

    ViewBag.SqlUserName = sqlUser;
    return View("DelegatedContext");
}

Testing the application

After deploying all the changes, I am ready to test this. I navigate to the application’s home page (https://<app>.azurewebsites.net/) and I should be redirected to the Azure AD login page. Here I provide credentials for a valid user.

Once logged in, I navigate to the Credential Delegation page (/home/UsingDelegation).

If you’re trying this yourself, you might get an error similar to:

{
    "error":"invalid_grant",
    "error_description":"AADSTS65001: The user or administrator has not consented to use the application with ID 'http://TestSpWithCert'. Send an interactive authorization request for this user and resource.\r\nTrace ID: 48285e18-5e24-48a3-98b3-93027d392700\r\nCorrelation ID: 08cd347f-117e-412d-9a63-b2fc533fef22\r\nTimestamp: 2017-08-31 18:32:48Z",
    "error_codes":[65001],
    "timestamp":"2017-08-31 18:32:48Z",
    "trace_id":"48285e18-5e24-48a3-98b3-93027d392700","correlation_id":"08cd347f-117e-412d-9a63-b2fc533fef22"
}

This means you forgot to grant permissions to your application to access SQL Server in Azure AD.

If everything was set up correctly, however, I should see my user credentials delegated to SQL Server without problems:

User delegated to SQL

Conclusion

Doing credential delegation to Azure SQL DB through Azure Active Directory opens up interesting options, such as taking advantage of row-level security or making better use of Data Masking in SQL.

Hope you found this series of articles useful!

Token authentication to SQL Azure with a Key Vault Certificate

In a previous post, I presented a PowerShell script to create a new Service Principal in Azure Active Directory, using a self-signed certificate generated directly in Azure Key Vault for authentication.

Now, let’s try using it for something useful. All the code and samples for this article can be found on GitHub.

We can use the Key Vault certificate in a Web Application deployed to Azure App Service to authenticate to Azure Active Directory using our Service Principal, and then obtain a token to connect to SQL Azure. This saves us from having to store passwords anywhere in our configuration, since Key Vault and App Service will provide us with easy, secure access to our authentication certificate.

In order to do this, we need a few things:

  • Our Service Principal identity in AAD, with the Key Vault certificate
  • A SQL Azure Database
  • A SQL Server (Azure) login based on our AAD Service Principal, with permissions on the database in question.
  • A Web App deployed with our Key Vault certificate
  • An ASP.NET App that will use the certificate to authenticate to AAD, then use the token to connect to SQL.

Let’s get started!

Creating the Database

For this sample, I’m going to create a new Azure SQL Server logical server, then deploy a new, blank database on it. We’ll also set up the server firewall to allow connections from other Azure resources.

Since we want to use Azure Active Directory authentication, we also need to set up our new server with an Azure AD admin user. For this we need both the username (user@domain) and the object id of the account in the directory.

As usual, let’s use Azure Resource Manager (ARM) templates for this, by creating a resource of type Microsoft.Sql/servers/administrators:

{
    "name": "[parameters('sqlServerName')]",
    "type": "Microsoft.Sql/servers",
    "location": "[resourceGroup().location]",
    "apiVersion": "2014-04-01-preview",
    "dependsOn": [],
    "tags": {
        "displayName": "SQL Logical Server"
    },
    "properties": {
        "administratorLogin": "[parameters('sqlServerAdminLogin')]",
        "administratorLoginPassword": "[parameters('sqlServerAdminLoginPassword')]"
    },
    "resources": [
        {
            "name": "activedirectory",
            "type": "administrators",
            "location": "[resourceGroup().location]",
            "apiVersion": "2014-04-01-preview",
            "dependsOn": [
                "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
            ],
            "properties": {
                "administratorType": "ActiveDirectory",
                "login": "[parameters('sqlServerAdAdmin')]",
                "sid": "[parameters('sqlServerAdAdminObjectId')]",
                "tenantId": "[subscription().tenantId]"
            }
        },
        {
            "name": "AllowAllWindowsAzureIps",
            "type": "firewallrules",
            "location": "[resourceGroup().location]",
            "apiVersion": "2014-04-01-preview",
            "dependsOn": [
                "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
            ],
            "properties": {
                "startIpAddress": "0.0.0.0",
                "endIpAddress": "0.0.0.0"
            }
        },
        {
            "name": "[parameters('sqlDbName')]",
            "type": "databases",
            "location": "[resourceGroup().location]",
            "apiVersion": "2014-04-01-preview",
            "dependsOn": [
                "[resourceId('Microsoft.Sql/servers', parameters('sqlServerName'))]"
            ],
            "tags": {
                "displayName": "SQL DB"
            },
            "properties": {
                "collation": "[parameters('sqlDbCollation')]",
                "edition": "[parameters('sqlDbEdition')]",
                "maxSizeBytes": "1073741824",
                "requestedServiceObjectiveName": "[parameters('sqlDbRequestedServiceObjectiveName')]"
            }
        }
    ]
}

Creating the SQL Login

To define our new SQL login for our service principal, we must first connect to our new database using the Azure AD admin account we created in the previous step. Then, we want to grant the necessary permissions (we’ll make it db_owner for now). You can do this from SQL Server Management Studio, or use a version of SqlCmd.exe that supports Azure AD Authentication:

[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [String]$sqlServer,
    [Parameter(Mandatory=$true)]
    [String]$database,
    [Parameter(Mandatory=$true)]
    [PSCredential]$sqlAdminCredentials,
    [Parameter(Mandatory=$true)]
    [String]$servicePrincipalName
)

# For this script to work we need SqlCmd v14.1 or later
# which can be downloaded from
# https://www.microsoft.com/en-us/download/details.aspx?id=53591
# and the latest ODBC driver from
# https://www.microsoft.com/en-us/download/details.aspx?id=53339

$query = @"
CREATE USER [$servicePrincipalName] FROM EXTERNAL PROVIDER
GO
ALTER ROLE db_owner ADD MEMBER [$servicePrincipalName]
"@

sqlcmd.exe -S "tcp:$sqlServer,1433" `
           -N -C -d $database -G -U $sqlAdminCredentials.UserName `
           -P $sqlAdminCredentials.GetNetworkCredential().Password `
           -Q $query

Creating the Web Application

Now we can create a new Web App resource that references our Key Vault certificate. We will add this to our ARM template, and most of it is normal stuff, so I won’t repeat it here. The interesting bit is importing the certificate from Key Vault using a resource of type Microsoft.Web/certificates.

To do this, we will need the Id of our Key Vault (which we can get from the resource group and key vault names), the certificate secret name, and the Id of our App Service Plan:

{
    "name": "[parameters('servicePrincipalName')]",
    "type": "Microsoft.Web/certificates",
    "location": "[resourceGroup().location]",
    "apiVersion": "2015-08-01",
    "properties": {
        "keyVaultId": "[resourceId(parameters('keyVaultRGName'), 'Microsoft.KeyVault/vaults', parameters('keyVaultName'))]",
        "keyVaultSecretName": "[concat('SPCert-', parameters('servicePrincipalName'))]",
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('webAppPlanName'))]"
    }
}

Note: In order to use certificates, our Web Application must be deployed on an App Service Plan in the Basic/Standard/Premium tiers.

The first time you try this, you will likely get an error similar to this:

The service does not have access to '<keyvault_id_here>' Key Vault.
Please make sure that you have granted necessary permissions to the service to perform the request operation

This happens because internally, App Service will use a “Well-Known Service Principal” to connect to your Key Vault and read the certificate, but it won’t have permissions by default.

You can find such principals by enumerating the Service Principals in your Azure Active Directory Tenant, using a command such as Get-AzureRmADServicePrincipal in PowerShell, or az ad sp list in the Azure CLI:

ServicePrincipalNames : {abfa0a7c-a6b6-4736-8310-5855508787cd, Microsoft.Azure.WebSites}
ApplicationId         : abfa0a7c-a6b6-4736-8310-5855508787cd
DisplayName           : Microsoft.Azure.WebSites
Id                    : a365affc-333f-4deb-9f8b-5f802fbb2615
Type                  : ServicePrincipal

We can then grant this principal the necessary permissions by using PowerShell:

Set-AzureRmKeyVaultAccessPolicy -VaultName trkv -ServicePrincipalName abfa0a7c-a6b6-4736-8310-5855508787cd -PermissionsToSecrets get

We can do the same using the Azure CLI command:

az keyvault set-policy --name trkv --spn abfa0a7c-a6b6-4736-8310-5855508787cd --secret-permissions get

Now, our ARM template should be able to deploy the certificate without problems.

Using Token Authentication to SQL

We can now write a simple ASP.NET application that will connect to our SQL Database and show us the authenticated user, which should match our Service Principal.

Let’s first write the code to load the certificate, and then authenticate to Azure AD using the Active Directory Authentication Library (ADAL):

public static class ADAuthentication
{
    static String TenantId = ConfigurationManager.AppSettings["AAD_TENANT_ID"];
    static String Authority = "https://login.windows.net/" + TenantId;
    static String ServicePrincipalName = ConfigurationManager.AppSettings["SERVICE_PRINCIPAL_NAME"];
    static String CertSubjectName = ConfigurationManager.AppSettings["CERT_SUBJECT_NAME"];
    const String SqlResource = "https://database.windows.net/";

    public static X509Certificate2 FindCertificate(String subjectName)
    {
        using (var store = new X509Store(StoreName.My, StoreLocation.CurrentUser))
        {
            store.Open(OpenFlags.ReadOnly);
            var results = store.Certificates.Find(X509FindType.FindBySubjectName, subjectName, false);
            if ( results.Count > 0 )
            {
                return results[0];
            }
        }
        return null;
    }

    public static async Task<String> GetSqlTokenAsync()
    {
        var context = new AuthenticationContext(Authority);
        var certificate = FindCertificate(CertSubjectName);
        if (certificate == null)
            throw new InvalidOperationException("Could not load certificate");

        var credential = new ClientAssertionCertificate($"http://{ServicePrincipalName}", certificate);

        var result = await context.AcquireTokenAsync(SqlResource, credential);
        return result.AccessToken;
    }
}

A couple of notes about the code:

  • To use ClientAssertionCertificate, we need to specify the URL identifier for our Service Principal. The code above assumes the identifier is http:// followed by the principal name.
  • To acquire a token, we need to specify the target resource. In this case, we will always use the https://database.windows.net/ resource identifier to tell AzureAD we want a token to an Azure SQL Server database.

Once we have a token, we can go ahead and connect to SQL Azure, by assigning the token to the SqlConnection.AccessToken property:

public async Task<ActionResult> UsingSP()
{
    String cs = ConfigurationManager.ConnectionStrings["SqlDb"].ConnectionString;

    var token = await ADAuthentication.GetSqlTokenAsync();

    var user = await GetSqlUserName(cs, token);
    ViewBag.SqlUserName = user;
    return View("UserContext");
}

private async Task<String> GetSqlUserName(String connectionString, String token)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.AccessToken = token;
        await conn.OpenAsync();
        String text = "SELECT SUSER_SNAME()"; 
        using (var cmd = new SqlCommand(text, conn))
        {
            var result = await cmd.ExecuteScalarAsync();
            return result as String;
        }

    }
}

If the connection succeeds, we should see something like this in our application:

Authenticated user message

Here the username will be composed of two parts separated by @:

  • The first one is the ApplicationId of our service principal in Azure AD.
  • The second is the TenantId for the directory.
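As a quick illustration, splitting the reported name on the @ separator recovers both identifiers (the GUIDs below are made up):

```python
def parse_sql_user_name(name: str):
    """Split the SUSER_SNAME() result for a service principal into (app_id, tenant_id)."""
    app_id, tenant_id = name.split("@", 1)
    return app_id, tenant_id

name = "11111111-2222-3333-4444-555555555555@99999999-8888-7777-6666-555555555555"
print(parse_sql_user_name(name))
```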

Conclusion

In this post I introduced how to take advantage of Azure Active Directory Service Principals, together with Key Vault-based certificates to authenticate to Azure SQL using an access token. Along the way, we figured out how to automate some deployment processes and configure things using ARM templates.

Hope you found it useful. In a follow up post, I’ll discuss how to extend this further.

Azure AD Service Principal with a Key Vault Certificate

It is often useful to create Azure Active Directory Service Principal objects for authenticating applications and automating tasks in Azure.

While you can authenticate a Service Principal using a password (client secret), it might be better to use an X509 certificate as an alternative. You still need to find a way to keep the certificate secure, though.

That’s where Azure Key Vault comes in, allowing you to store the authentication certificate in a secure manner. An application could then obtain the certificate from Key Vault as needed, or if it’s running in Azure, there might be ways to provision the certificate automatically so that we don’t need to copy stuff around.

You could obtain a certificate from any valid certification authority and store it safely in Key Vault. However, Key Vault can also generate self-signed certificates, which might be good enough for many scenarios.

Here is a useful PowerShell script that will create a new self-signed certificate directly in Key Vault. Then it will create a new service principal in the subscription tenant, with the new certificate for authentication.

[CmdletBinding()]
param(
  [Parameter(Mandatory = $true)]
  [String]$keyVaultName,
  [Parameter(Mandatory = $true)]
  [String]$principalName,
  [Parameter()]
  [int]$validityInMonths = 12
)

function New-KeyVaultSelfSignedCert {
  param($keyVault, $certificateName, $subjectName, $validityInMonths, $renewDaysBefore)

  $policy = New-AzureKeyVaultCertificatePolicy `
              -SubjectName $subjectName `
              -ReuseKeyOnRenewal `
              -IssuerName 'Self' `
              -ValidityInMonths $validityInMonths `
              -RenewAtNumberOfDaysBeforeExpiry $renewDaysBefore

  $op = Add-AzureKeyVaultCertificate `
              -VaultName $keyVault `
              -CertificatePolicy $policy `
              -Name $certificateName

  while ( $op.Status -ne 'completed' ) {
    Start-Sleep -Seconds 1
    $op = Get-AzureKeyVaultCertificateOperation -VaultName $keyVault -Name $certificateName
  }
  (Get-AzureKeyVaultCertificate -VaultName $keyVault -Name $certificateName).Certificate
}


$certName = "SPCert-$principalName"
$cert = New-KeyVaultSelfSignedCert -keyVault $keyVaultName `
                                   -certificateName $certName `
                                   -subjectName "CN=$principalName" `
                                   -validityInMonths $validityInMonths `
                                   -renewDaysBefore 1

Write-Verbose "Certificate generated $($cert.Thumbprint)"

$certString = [Convert]::ToBase64String($cert.GetRawCertData())

New-AzureRmADServicePrincipal -DisplayName $principalName `
                              -CertValue $certString `
                              -EndDate $cert.NotAfter.AddDays(-1)

The script assumes you’ve already signed in to your Azure Subscription using Login-AzureRmAccount. Let’s try executing the script:

Executing the script

If we go into the Key Vault in the Azure Portal, we can see the new certificate generated:

Key Vault Certificate Secret

We can also query the new Service Principal and verify that it is indeed setup with certificate-based authentication:

Get-AzureRmADSpCredential -ObjectId 37800b1f-5d17-461b-80a3-c4a8df10b319

StartDate            EndDate              KeyId                                Type
---------            -------              -----                                ----
8/28/2017 2:29:54 AM 8/27/2018 2:29:49 AM 8a3250a4-4383-4a5f-acdc-4deafd930e6d AsymmetricX509Cert

I’ll show some useful scenarios for this in a follow-up post.

Creating Event Grid Subscriptions

A few days ago, I wrote about using Azure Resource Manager (ARM) templates to deploy Azure Event Grid. That sample showed how to create a new Event Grid Topic resource, which basically gives you a URL you can publish custom events to and have them routed to one or more event subscribers.

However, one of the very powerful features in Event Grid is not custom topics, but subscribing to events published by the Azure fabric itself; that is, events published by Resource Manager providers. As of this writing, only a few providers support Event Grid, but this number is sure to grow in the coming months.

Supported Event Publishers

Which Azure Resource Manager providers support Event Grid? An easy way to find out is to ask Azure itself. To do this, we can leverage the excellent ArmClient tool. Resource providers that support publishing events through Event Grid are called Topic Types, and we can query them:

armclient get /providers/Microsoft.EventGrid/topicTypes?api-version=2017-06-15-preview

If the command succeeds, we should see something like this:

{
  "value": [
    {
      "properties": {
        "provider": "Microsoft.Eventhub",
        "displayName": "EventHubs Namespace",
        "description": "Microsoft EventHubs service events.",
        "resourceRegionType": "RegionalResource",
        "provisioningState": "Succeeded"
      },
      "id": "providers/Microsoft.EventGrid/topicTypes/Microsoft.Eventhub.Namespaces",
      "name": "Microsoft.Eventhub.Namespaces",
      "type": "Microsoft.EventGrid/topicTypes"
    },
    ...
  ]
}

You can also use the Azure CLI command az eventgrid topic-type list on version 2.0.14 or later.

Knowing which event publishers exist is only half the story, though. We also want to know what types of events a publisher supports. These are called Event Types in Event Grid, and we can query those as well.

For example, let’s say we want to find out events supported by the Microsoft.Resources.ResourceGroups topic type:

armclient get /providers/Microsoft.EventGrid/topicTypes/Microsoft.Resources.ResourceGroups/eventTypes?api-version=2017-06-15-preview

If the command succeeds, we should see an output similar to the following:

{
  "value": [
    {
      "properties": {
        "displayName": "Resource Write Success",
        "description": "Raised when a resource create or update operation succeeds.",
        "schemaUrl": "TBD"
      },
      "id": "providers/Microsoft.EventGrid/topicTypes/Microsoft.Resources.ResourceGroups/eventTypes/Microsoft.Resources.ResourceWriteSuccess",
      "name": "Microsoft.Resources.ResourceWriteSuccess",
      "type": "Microsoft.EventGrid/topicTypes/eventTypes"
    },
    ...
  ]
}

The equivalent Azure CLI command would be az eventgrid topic-type list-event-types --name Microsoft.Resources.ResourceGroups.

Now let’s see how we can subscribe to events published by the Azure fabric.
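The topic-type names returned by the first query are exactly what the eventTypes query needs. A rough sketch of scripting this discovery, extracting the names from the JSON response with plain grep/sed (it uses a canned sample of the output above, so no Azure connection is needed):

```shell
# Canned sample of the topicTypes response, trimmed to the field we need
cat > topictypes.json <<'EOF'
{
  "value": [
    { "name": "Microsoft.Eventhub.Namespaces" },
    { "name": "Microsoft.Resources.ResourceGroups" }
  ]
}
EOF

# Pull out just the topic type names, one per line
names=$(grep -o '"name": "[^"]*"' topictypes.json | sed 's/"name": "\(.*\)"/\1/')
echo "$names"

# Each name could then be fed back into:
#   armclient get /providers/Microsoft.EventGrid/topicTypes/<name>/eventTypes?api-version=2017-06-15-preview
```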

Event Hub Namespaces

Currently, you can only subscribe to events published at the Event Hub Namespace level, not for an individual Event Hub. For this, we’d use the Microsoft.EventHub.Namespaces topic type to create a nested resource of type Microsoft.EventGrid/eventSubscriptions:

{
    "apiVersion": "2017-06-17-preview",
    "name": "[concat(parameters('eventHubNamespaceName'), '/Microsoft.EventGrid/', parameters('subscriptionName'))]",
    "type": "Microsoft.EventHub/namespaces/providers/eventSubscriptions",
    "tags": {
        "displayName": "Webhook Subscription"
    },
    "dependsOn": [
        "[concat('Microsoft.EventHub/Namespaces/', parameters('eventHubNamespaceName'))]"
    ],
    "properties": {
        "destination": {
            "endpointType": "WebHook",
            "properties": {
                "endpointUrl": "[parameters('webhookUrl')]"
            }
        },
        "filter": {
            "includedEventTypes": [ "All" ],
            "subjectBeginsWith": "",
            "subjectEndsWith": "",
            "subjectIsCaseSensitive": false
        }
    }
}

Notice that the type property has the format <parent_resource_type>/providers/<child_resource_type>, as is standard practice in ARM templates. The name property then starts with the name of the parent resource (the Event Hub Namespace, in this case), followed by `/Microsoft.EventGrid/` and the name of the new resource representing the Event Grid subscription.
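To make the expansion concrete, here is how that concat() expression resolves for some hypothetical parameter values (the names below are placeholders, not from the original template):

```shell
# Hypothetical parameter values
eventHubNamespaceName="my-ehns"
subscriptionName="my-grid-sub"

# type: <parent_resource_type>/providers/<child_resource_type>
resourceType="Microsoft.EventHub/namespaces/providers/eventSubscriptions"

# name: what concat(parameters('eventHubNamespaceName'), '/Microsoft.EventGrid/', parameters('subscriptionName')) expands to
resourceName="${eventHubNamespaceName}/Microsoft.EventGrid/${subscriptionName}"
echo "$resourceName"
```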

Resource Groups

The Microsoft.Resources.ResourceGroups topic type publishes events for management operations on resources in a specific resource group. Using ARM, we’d create this as a top-level resource on the template resource group:

{
    "apiVersion": "2017-06-15-preview",
    "name": "[parameters('subscriptionName')]",
    "type": "Microsoft.EventGrid/eventSubscriptions",
    "tags": {
        "displayName": "Webhook Subscription"
    },
    "properties": {
        "destination": {
            "endpointType": "WebHook",
            "properties": {
                "endpointUrl": "[parameters('webhookUrl')]"
            }
        },
        "filter": {
            "includedEventTypes": [ "All" ],
            "subjectBeginsWith": "",
            "subjectEndsWith": "",
            "subjectIsCaseSensitive": false
        }
    }
}

Subscriptions

The Microsoft.Resources.Subscriptions topic type publishes events on management operations on an entire Azure subscription. This is a more interesting scenario, because an event subscription at the Azure-subscription level (pardon the redundancy) is a global resource itself that doesn’t belong to a resource group.

Unfortunately, I have been unable to find a magical incantation that can create such a resource from an ARM template, since an ARM deployment is always scoped to a resource group. I do hope there is a way to do this, as otherwise automating a solution will be a bit harder.

Anyway, creating one using ArmClient is quite possible. To do this, first create a text file with the right JSON content. For example:

{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "https://requestb.in/1i0mp2v1"
      }
    },
    "filter": {
      "includedEventTypes": [ "All" ],
      "subjectBeginsWith": "",
      "subjectEndsWith": "",
      "subjectIsCaseSensitive": false
    }
  }
}

Then use ArmClient to create the global resource. To do this, you will need the Azure subscription id, the name you want to give your Event Grid event subscription, and the path of the JSON file created in the previous step:

armclient put /subscriptions/<subscription_id>/providers/Microsoft.EventGrid/eventSubscriptions/<name>?api-version=2017-06-15-preview @<file>
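The resource path in that call is assembled from the pieces just mentioned; a sketch with placeholder values (the GUID below is fake):

```shell
subscriptionId="00000000-0000-0000-0000-000000000000"  # placeholder Azure subscription id
name="my-global-sub"                                   # name for the Event Grid event subscription

path="/subscriptions/${subscriptionId}/providers/Microsoft.EventGrid/eventSubscriptions/${name}"
echo "$path"
# armclient put "${path}?api-version=2017-06-15-preview" @subscription.json
```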

Listing Event Subscriptions

Another useful thing to know is how to manage existing Event Grid event subscriptions.

We can easily list all event subscriptions created on an Azure subscription using either of the following two commands:

armclient get /subscriptions/<subscription_id>/providers/Microsoft.EventGrid/eventSubscriptions?api-version=2017-06-15-preview

az eventgrid event-subscription list

Conclusion

Hopefully, this article gives you a few ideas on how to work with Event Grid and how to automate creating event grid event subscriptions for different Azure Resources.

Deploying an Event Grid + WebHook with ARM

Azure Event Grid was announced a couple of days ago as a service for building event-driven architectures. While the initial preview is a bit limited in the number of supported event publishers, it has tons of promise and I was immediately intrigued by the possibilities!

Being a fan of Azure Resource Manager templates, I soon tried to figure out how you could automate the creation of Event Grid resources. After some trial and error and some research, I was able to come up with an initial template to create:

  • An Event Grid topic resource
  • A WebHook-based event subscription.

You can find the complete sample template here.

The sample template first declares some simple parameters and variables:

{
    "parameters": {
        "eventGridName": {
            "type": "string",
            "minLength": 1
        },
        "webhookName": {
            "type": "string",
            "minLength": 1
        },
        "webhookUrl": {
            "type": "string",
            "minLength": 1
        },
        "webhookEventTypes": {
            "type": "array",
            "defaultValue": [ "All" ]
        },
        "webhookPrefixFilter": {
            "type": "string",
            "defaultValue": ""
        },
        "webhookSuffixFilter": {
            "type": "string",
            "defaultValue": ""
        },
        "webhookCaseSensitive": {
            "type": "bool",
            "defaultValue": false
        },
        "webhookLabels": {
            "type": "array",
            "defaultValue": [ "" ]
        }
    },
    "variables": {
        "apiVersion": "2017-06-15-preview"
    }
}

Creating the Event Grid topic is relatively simple; it’s a resource of type Microsoft.EventGrid/topics. We just need to supply the right API version (2017-06-15-preview in this release), the grid name, and one of the supported Azure regions:

{
    "apiVersion": "[variables('apiVersion')]",
    "name": "[parameters('eventGridName')]",
    "type": "Microsoft.EventGrid/topics",
    "location": "[resourceGroup().location]",
    "tags": {
        "displayName": "EventGrid"
    },
    "properties": {
        "name": "[parameters('eventGridName')]"
    }
}

Creating the WebHook subscription took a bit of work to figure out. I eventually realized this needed to be a nested resource of type Microsoft.EventGrid/topics/providers/eventSubscriptions, with a special name following the pattern <EventGridName>/Microsoft.EventGrid/<WebHookName>. You can then specify the WebHook URL, and a filter for which events you want delivered to it:

{
    "apiVersion": "[variables('apiVersion')]",
    "name": "[concat(parameters('eventGridName'), '/Microsoft.EventGrid/', parameters('webhookName'))]",
    "type": "Microsoft.EventGrid/topics/providers/eventSubscriptions",
    "tags": {
        "displayName": "Webhook Subscription"
    },
    "dependsOn": [
        "[concat('Microsoft.EventGrid/topics/', parameters('eventGridName'))]"
    ],
    "properties": {
        "destination": {
            "endpointType": "WebHook",
            "properties": {
                "endpointUrl": "[parameters('webhookUrl')]"
            }
        },
        "filter": {
            "includedEventTypes": "[parameters('webhookEventTypes')]",
            "subjectBeginsWith": "[parameters('webhookPrefixFilter')]",
            "subjectEndsWith": "[parameters('webhookSuffixFilter')]",
            "subjectIsCaseSensitive": "[parameters('webhookCaseSensitive')]"
        },
        "labels": "[parameters('webhookLabels')]"
    }
}

The end result of applying the template, as seen in the portal, shows the webhook created correctly:

WebHook in Azure Portal

I also added some outputs to the ARM template so that it returns the topic endpoint URL of the created Event Grid, as well as the first access key:

"outputs": {
    "eventGridUrl": {
        "type": "string",
        "value": "[reference(parameters('eventGridName')).endpoint]"
    },
    "eventGridKey": {
        "type": "string",
        "value": "[listKeys(resourceId('Microsoft.EventGrid/topics', parameters('eventGridName')), variables('apiVersion')).key1]"
    }
}

Something I noticed while testing this is that, in the current implementation, some properties of the event subscription apparently cannot be updated after it has been created. Hopefully this restriction is eventually lifted, as it makes continuous integration with ARM a bit harder; if not, you’d need to first drop the webhook and then create it again.

VSTS Build hanging with XUnit tests

I was setting up a simple demo on Visual Studio Team Services (VSTS) today and ran into an oddity. The sample project I was using had a mixture of MSTest and XUnit-based tests. This would run just fine in Visual Studio, but after setting up a hosted build in VSTS, I noticed that the build would seem to hang after apparently running all tests, forcing me to cancel the builds.

Looking at the logs, I eventually found this line, which was not expected:

2017-08-15T16:48:44.4074176Z Information: [xUnit.net 00:00:00.8781599]   Starting:    Microsoft.VisualStudio.QualityTools.UnitTestFramework

Suspecting this could be related, I edited the build definition. In the Test Assemblies task, I changed the Test assemblies property to include the following line:

!**\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll

This tells the test runner not to attempt to discover or run tests in this DLL.
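For context, after the change the Test assemblies property ends up looking something like the following; the first two patterns are the task's usual defaults (they may differ in your build definition), and only the last line is the addition:

```
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
```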

Surprisingly, this worked and allowed builds to complete normally after running all tests.

Trying out PowerShell 6.0 in containers

I’ve been meaning to try out the PowerShell Core 6.0 builds on Linux recently. The PowerShell team offers some nice pre-built Docker images you can use to test it, so that’s great.

I thought this would be a simple but cool scenario to try out the new Azure Container Instances service! Of course, you cannot use the pre-built image directly for this, as you wouldn’t have a direct way to connect to the hosted container, so I decided to create a new image that not only had PowerShell Core 6.0 in it, but also had SSH built in. This would be useful for quickly spinning up a PowerShell-on-Linux environment for demos and simple tests.

So the first thing to do is to create a new empty directory for our new container files:

mkdir powershell
cd powershell

Then, using a text editor (Vim is my poison of choice), I created a new dockerfile in this directory, with the following contents:

FROM microsoft/powershell

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd

ENV newuser tomasr

# add new user
RUN useradd -m ${newuser}
RUN usermod -aG sudo ${newuser}
# copy SSH keys
RUN mkdir -p /home/${newuser}/.ssh
ADD authorized_keys /home/${newuser}/.ssh/authorized_keys
# set right permissions
RUN chown ${newuser}:${newuser} /home/${newuser}/.ssh
RUN chown ${newuser}:${newuser} /home/${newuser}/.ssh/authorized_keys
RUN chmod 600 /home/${newuser}/.ssh/authorized_keys

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

This dockerfile does a few simple things:

  • Inherits from the official PowerShell docker image
  • Installs SSHD
  • Creates a new user called tomasr
  • Copies my authorized_keys file to the new user dir and sets the right permissions
  • Starts SSHD

Now, let’s build ourselves a new docker image:

docker build -t powershell .

If everything works, you should see a message similar to Successfully built 34f996c46a23. Now, before I can use this image with Azure Container Instances, I need to push this to some image repository. I’ll use Azure Container Registry, which I have already provisioned on my Azure account by this point.

So let’s provide docker with the credentials to my registry, tag the image, and then push it to the registry:

sudo docker login blogtests.azurecr.io -u <user> -p <password>

sudo docker tag powershell blogtests.azurecr.io/tomasr/powershell

sudo docker push blogtests.azurecr.io/tomasr/powershell

Once this is complete, we can use the Azure CLI to create a new container instance called p4 using our new image. We’ll give it a public IP address listening on port 22:

az container create --name p4 --image blogtests.azurecr.io/tomasr/powershell --resource-group blogtopics --ip-address public --port 22 --registry-username <username> --registry-password <password>

After a few minutes, we can check if our deployment is complete and our container is running, using the following command:

az container show --name p4 --resource-group blogtopics

If the container is ready, we should see something like:

  "ipAddress": {
    "ip": "52.168.164.152",
    "ports": [
      {
        "port": 22,
        "protocol": "TCP"
      }
    ]
  },
  "location": "eastus",
  "name": "p4",
  "osType": "Linux",
  "provisioningState": "Succeeded",

When the container is fully deployed, we just need to use an SSH agent with the right private key to connect and run PowerShell on it:

PowerShell on Linux

Once we’re done playing with our environment, we can just delete the container instance:

az container delete --name p4 --resource-group blogtopics

Conclusion

Using containers, we’ve built a simple way to quickly provision a new machine to do simple PowerShell demonstrations. It’s not terribly useful, but still a fun use of technology!

Using Azure Functions to create AppInsights release annotations

I recently had the opportunity to discuss with a customer one cool feature in Application Insights: Release Annotations.

As the article above shows, you can easily create Release Annotations using Visual Studio Team Services release pipelines. In this post, I’d like to provide an alternative way to implement release annotations for applications hosted on Azure App Service, by using the WebHooks support in Kudu, and Azure Functions.

Context

Let’s assume that you already have a Web application deployed to a Azure App Service, and have hooked up some sort of automated deployment story. For example, you may be using the local git repository to deploy a new version of your app through a git push command.

Let’s also assume you already have an Application Insights resource you’re using to monitor this application.

Step 1: Obtain Application Key

The first step in automating the creation of release annotations is to prepare our Application Insights resource by obtaining a new Application Key. We can do this from the Azure Portal, by finding our Application Insights resource, and selecting the API Access option on the left-hand menu:

  • Press the Create API key option at the top-left.
  • Provide a description for your new key (such as “Create Annotations”).
  • Select the “Write annotations” permission.
  • Press the “Generate key” button.

Create a new API key

Once the key has been generated, copy the value. If you don’t, you’ll have to create a brand new key! Also, note the Application ID value for your Application Insights resource, displayed alongside your new API key. We will need both to create a new annotation later on.

Step 2: Create the Function App

Create a brand new Function App in your Azure Subscription. For this demo, I’ll create a new one in a Consumption Plan:

Create a new Function App

Now, create a new Function. For simplicity, I’m going to use the “Webhook + API” template in JavaScript:

Add new function

The request sent by Kudu is going to contain some useful properties for our new function:

  • id: the deployment ID.
  • siteName: The name of our Web App. You could use this, for example, to have a single function support deployments for multiple applications.
  • message: the deployment message. For a Git-based deployment, this could be part of the commit message.
  • authorEmail: For Git-based deployment, this would be the author of the last commit.

You can find a complete list of the properties here.
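Putting those together, the body of a Kudu post-deployment webhook request looks roughly like this (an illustrative payload trimmed to the properties we use below; the values come from the deployment logged later in this post, and the trailing newline in message is why the code calls trim()):

```json
{
  "id": "e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c",
  "siteName": "hooksapp1",
  "message": "Updating title\n",
  "authorEmail": "tomas@winterdom.com"
}
```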

Creating the release annotation requires a few things besides the Application Insights application id and key:

  • The name you want to give to the release annotation. We’ll use a combination of the site name and the deployment ID.
  • The time the release happened. We’ll just use the current date/time.
  • A list of arbitrary properties: We’ll store the commit message, as well as the author.

Our function code would look something like this:

var request = require('request');

module.exports = function (context, req) {

    var deploymentId = req.body.id;
    var siteName = req.body.siteName;
    var releaseName = siteName + '_' + deploymentId;
    var appId = process.env.APPINSIGHTS_APPID;
    var apiKey = process.env.APPINSIGHTS_APIKEY;

    var releaseProperties = {
        ReleaseName: releaseName,
        Message: req.body.message.trim(),
        By: req.body.authorEmail
    };

    context.log('Creating a new release annotation: ' + releaseName);

    var body = {
        Id: deploymentId,
        AnnotationName: releaseName,
        EventTime: (new Date()).toISOString(),
        Category: 'Deployment',
        // the Properties part contains a JSON object as a string
        Properties: JSON.stringify(releaseProperties)
    };
    var options = {
        url: 'https://aigs1.aisvc.visualstudio.com/applications/' 
           + appId + '/Annotations?api-version=2015-11',
        method: 'PUT',
        headers: {
            'X-AIAPIKEY': apiKey
        },
        body: body,
        json: true
    };
    context.log('Sending request to: ' + options.url);
    context.log(JSON.stringify(body));

    request(options, function(error, response, body) {
        if ( error ) {
            context.log(error);
            context.res = {
                status: 500,
                body: error
            };
            context.done();
        } else {
            context.log("Response: " + response.statusCode + " - " + JSON.stringify(body));
            context.res = {
                status: response.statusCode,
                body: body
            };
            context.done();
        }
    });
};

We also need to configure our Application Insights application id and application key. To avoid hardcoding these in the code, let’s create them as Application Settings:

Application Settings

Finally, since we’re loading the request package via require, we also need to add a package.json file as described here. In our case, the contents are simple:

{
    "name": "hooksappcreator",
    "version": "0.0.1",
    "dependencies": {
        "request": "*"
    }
}

Step 3: Configure the Webhook

Now that we have our function app all set up, we need to register the webhook with Kudu. Before we can do that, though, we need to obtain the key used to control access to our function. To do this, in the portal, select the Manage option for the function, and then either use the value of the default key, or create a new one:

Function key

Take a note of the value of the key; we’ll use it in a bit (note that the </> Get function URL link on the function editor also gives you the full URL + key).

Now, we need to open up Kudu for our Web App, by navigating to https://<app_name>.scm.azurewebsites.net/. In Kudu, select the Tools -> Web hooks option from the top-level navigation menu. Then, create a new hook with the following settings:

  • Subscriber URL: We build this out of the Function App name, the Function name, and your Function key: https://<function_app>.azurewebsites.net/api/<function_name>?code=<function_key>
  • Select this as a PostDeployment hook.
  • Press the Add Url button to create the hook.
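Assembling the subscriber URL from its parts, with placeholder values (these names and the key are hypothetical, not from the actual deployment):

```shell
functionApp="myfunctions"   # placeholder Function App name
functionName="KuduHook"     # placeholder Function name
functionKey="abc123"        # placeholder function key

subscriberUrl="https://${functionApp}.azurewebsites.net/api/${functionName}?code=${functionKey}"
echo "$subscriberUrl"
```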

Webhook in Kudu

Step 4: Testing the Webhook

Once the webhook is registered, we should be ready to test it! Do a new deployment, and watch the log for the Azure Function. If the call is successful, you should see something like this in the log:

2017-08-08T19:49:29.036 Creating a new release annotation: hooksapp1_e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c
2017-08-08T19:49:29.052 Sending request to: https://aigs1.aisvc.visualstudio.com/applications/030fb56a-900f-4e24-b058-a8680fbb5f32/Annotations?api-version=2015-11
2017-08-08T19:49:29.052 {"Id":"e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c","AnnotationName":"hooksapp1_e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c","EventTime":"2017-08-08T19:49:29.052Z","Category":"Deployment","Properties":"{\"ReleaseName\":\"hooksapp1_e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c\",\"Message\":\"Updating title\",\"By\":\"tomas@winterdom.com\"}"}
2017-08-08T19:49:29.897 Response: 200 - [{"AnnotationName":"hooksapp1_e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c","Id":"e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c","Category":"Deployment","Properties":"{\"ReleaseName\":\"hooksapp1_e62d8c4eb609de8374fcfae20ebb6d9f09d3ce3c\",\"Message\":\"Updating title\",\"By\":\"tomas@winterdom.com\"}","EventTime":"2017-08-08T19:49:29.052+00:00","RelatedAnnotation":null}]
2017-08-08T19:49:29.913 Function completed (Success, Id=43021fb6-7166-4e09-be5d-7ab9b9d36e01, Duration=20906ms)

In Application Insights, we should also see our Release Annotation appear on charts:

Release annotations on charts

Viasfora v3.6 Released

Today I pushed to the Visual Studio Gallery version v3.6 of my Viasfora Visual Studio Extension.

This version includes a few minor bugfixes, and some new features, such as:

  • Export/Import settings
  • Export/Import color themes
  • Support for JavaScript template literals

If you run into any issues, or have any feature suggestion, please create a new issue on GitHub!