Using custom databases with the AuthP library – Part1: normal apps

Last Updated: May 31, 2023 | Created: May 31, 2023

The AuthPermissions.AspNetCore library, referred to as the AuthP library, provides enhanced Roles authentication and multi-tenant services to a .NET application. The AuthP library needs to store its information in a database; the previous version (4.2.0) supported SQL Server and PostgreSQL, but with the release of AuthP version 5.0.0 you can use any of the main database providers that EF Core supports. This feature is called “custom databases” and allows you to use database providers other than the built-in SqlServer or PostgreSQL database providers.

This article explains the steps you need to take to use a different database provider with a normal (i.e. not sharding / hybrid) multi-tenant application. A second article called “Use custom databases with the AuthP library – Part2: sharding / hybrid apps” will cover using the “custom databases” feature with sharding / hybrid multi-tenant applications.

This article is part of the series that covers AuthP’s multi-tenant applications in general. The other articles in “Building ASP.NET Core and EF Core multi-tenant apps” series are:

  1. The database: Using a DataKey to only show data for users in their tenant
  2. Administration: different ways to add and control tenants and users
  3. Versioning your app: Creating different versions to maximise your profits
  4. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  5. Advanced techniques around ASP.NET Core Users and their claims
  6. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core
  7. Three ways to securely add new users to an application using the AuthP library
  8. How to take an ASP.NET Core web site “Down for maintenance”
  9. Three ways to refresh the claims of a logged-in user
  10. Use custom databases with the AuthP library – Part1: normal apps (this article)
  11. Use custom databases with the AuthP library – Part2: sharding / hybrid apps (coming soon)

TL;DR; – Summary of this article

  • The AuthP library is designed to make building .NET multi-tenant applications easier by providing the backend design and admin features to get your application built more quickly.
  • The new AuthP version 5.0.0 contains the “custom databases” feature (plus other features – see next section), which lets you use any of the main EF Core database providers with AuthP. This article focuses on normal (i.e. not sharding / hybrid) multi-tenant applications.
  • To use the “custom databases” feature with a normal multi-tenant application you need to go through three stages:
    • Create an EF Core migration for your selected database provider
    • Create an extension method to register your custom database
    • Make various minor changes to your tenant data to work with your custom database
  • There is a working example of a normal multi-tenant application using Sqlite as the custom database. You can find this in the AuthP.CustomDatabaseExamples repo – look at the projects with names starting with “CustomDatabase1.”
  • This article compares a multi-tenant application using a built-in database with the same multi-tenant application using Sqlite as the custom database.

Summary of the new features in AuthP version 5.0.0

Before getting into the details of the new “custom databases” feature in AuthP version 5.0.0, here is an overall list of the new features in this release:

  • BREAKING CHANGE in AuthP’s sharding / hybrid multi-tenant feature: if you are using AuthP’s sharding / hybrid features, then look at the UpdateToVersion5.md file for what you need to do.
  • This new release makes it easier to use, and change, the sharding / hybrid multi-tenant feature. The two items below cover the ease of use / change:
    • Easier to use: there is a new extension method called SetupMultiTenantSharding that sets up the sharding / hybrid multi-tenant feature, making this feature easier to set up.
    • Easier to change: you can change the internal services / settings of the sharding / hybrid feature, e.g. one developer wanted to store the sharding data in a database instead of in a json file, which this release allows. This is done by creating a new extension method containing the code in the SetupMultiTenantSharding extension method with your changes.
  • New feature: you can now use a range of database providers with the AuthP library. The list below shows the database providers AuthP 5.0.0 supports:
    • Microsoft.EntityFrameworkCore.SqlServer – built-in, already in the AuthP library
    • Npgsql.EntityFrameworkCore.PostgreSQL – built-in, already in the AuthP library
    • Microsoft.EntityFrameworkCore.Sqlite
    • Microsoft.EntityFrameworkCore.Cosmos
    • Pomelo.EntityFrameworkCore.MySql – Pomelo Foundation Project
    • MySql.EntityFrameworkCore – MySQL project (Oracle)
    • Oracle.EntityFrameworkCore – Oracle

NOTE: The AuthP library uses Giorgi Dalakishvili’s EntityFramework.Exceptions library to detect concurrency and unique constraint errors, and Giorgi only supports the main EF Core database providers, i.e. SqlServer, PostgreSQL, SQLite, MySQL, MySQL.Pomelo, and Oracle.

The rest of this article covers using a “custom database” with all the tenant data in one database, and the Part2 article will show how to use “custom databases” with tenants in many databases (i.e. the sharding / hybrid approach).

Setting the scene – how AuthP’s multi-tenant feature uses databases

AuthP’s multi-tenant feature provides nearly all the backend services / admin needed to build a multi-tenant application. I started building the AuthP library using a one / two database approach, which works for small / medium sized multi-tenant apps. Then in version 3.0.0 I added a “one db per tenant” approach (known as sharding) plus a hybrid design, which gives you more flexibility to handle both small and large tenants in the same multi-tenant app. The diagram below shows the four ways AuthP can handle databases to provide the right cost / performance for your multi-tenant project.

NOTE: If you aren’t creating a multi-tenant project, then you follow the first, “All in one Db” approach with the AuthP data and any project data in one database.

Up to this point AuthP only supported the SQL Server and PostgreSQL database types, but a few developers asked if I could support other database providers. So, when I found time, I created a new version of AuthP with the “custom database” feature, which allows any database provider that EF Core supports.

Introducing the AuthP.CustomDatabaseExamples repo

To help you, and to make sure the custom database feature works, I created a repo called AuthP.CustomDatabaseExamples which contains two examples; this article covers a multi-tenant application that keeps all the tenant data in one database (see 1 and 2 in the Four Ways diagram).

To use a custom database you must change various code and migrations, which this article will explain. I chose Sqlite for my custom database type because the Individual User Accounts authentication supports Sqlite.

NOTE: Using Sqlite for the custom database examples requires extra code compared with other database providers. That’s because Sqlite’s connection string doesn’t have a Server / host part, but instead has a filepath in its data source part. Other databases, such as MySql, should be slightly easier as the connection string fully defines the server and database name.
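To show the difference, here are two illustrative connection strings (the server / database / file names are made up):

```csharp
// Illustrative connection strings - the names are made up for this example
public static class ConnectionStringExamples
{
    // A SQL Server connection string defines a server (host) and a database name
    public const string SqlServerExample =
        "Server=MyServerName;Database=MyDatabaseName;Trusted_Connection=True";

    // A Sqlite connection string has no server / host part; instead the
    // "Data Source" holds a filepath to the database file
    public const string SqliteExample =
        "Data Source=C:\\databases\\MyDatabaseName.sqlite";
}
```

This is why code that builds per-database connection strings needs special handling for Sqlite, while for most other providers it only needs to swap the server / database names.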

The example in this article works with AuthP’s “all tenant data in one database” approach and has three projects, all starting with CustomDatabase1.

Steps in building a normal AuthP application with a custom database

We start with an application using AuthP which uses one / two databases, i.e. 1 and 2 in the previous “Four Ways” diagram. The steps to use a custom database are listed below, and each is detailed later.

  1. Create a migration of the AuthPermissionsDbContext for your custom database.
  2. Create an extension method to register your custom database.
  3. Other, non-AuthP things you need to think about

These stages are explained below.

1. Create a migration of the AuthPermissionsDbContext for your custom database

To create or update the AuthPermissionsDbContext’s database you need to create an EF Core migration, and each database provider needs its own migration based on your chosen custom database type.

If you are using AuthP’s built-in database providers, then the AuthP NuGet package contains migrations for SQL Server and PostgreSQL. But when using the custom database feature you need to create the migration for your custom database type. There are many ways to create an EF Core migration, but personally I use an IDesignTimeDbContextFactory<TContext>. The code below comes from the CustomDatabase1’s AuthPermissionsDbContextSqlite class.

public class AuthPermissionsDbContextSqlite :
     IDesignTimeDbContextFactory<AuthPermissionsDbContext>
{
    // The connection string must be valid, but the  
    // connection string isn’t used when adding a migration.
    private const string connectionString = 
        "Data source=PrimaryDatabase.sqlite";

    public AuthPermissionsDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder =
            new DbContextOptionsBuilder<AuthPermissionsDbContext>();
        optionsBuilder.UseSqlite(connectionString);

        return new AuthPermissionsDbContext(optionsBuilder.Options, 
            null, new SqliteDbConfig());
    }
}

The following lines are the ones you need to change for your custom database provider:

  • Lines 6 and 7: You must provide a connection string in the correct format for your custom database type, but when creating a migration the database won’t be accessed.
  • Line 13: You need to use the correct “Use???” method for your custom database provider, e.g. UseSqlite.
  • Line 16: The third parameter to the AuthPermissionsDbContext class allows you to add a database-specific set of EF Core model commands to be run at the start of the AuthPermissionsDbContext’s OnModelCreating method. The main use is to set up concurrency tokens to capture concurrent changes to the same table. See the SqliteDbConfig class for the Sqlite concurrent change commands.

NOTE: Setting up the Sqlite concurrent change detection in the EF Core migration is a bit harder than for other database types. That’s because you need to add extra trigger code – see this article on what Sqlite needs, and see the …Version5.cs file in the SqliteCustomParts project’s Migrations folder.

2. Extension method to register your custom database

You need to create an extension method to register your custom database. You do this by copying one of the existing extension methods already in the AuthP code, such as UsingEfCoreSqlServer, and altering six parts:

  1. Set the AuthPDatabaseType to the enum value AuthPDatabaseTypes.CustomDatabase.
  2. Change the AuthPermissionsDbContext registration to use your custom database provider.
  3. Link to your assembly containing the AuthPermissionsDbContext migration.
  4. Update the EntityFramework.Exceptions setup to your custom database provider.
  5. Add new code to register the custom database configuration code.
  6. Optional: update the RunMethodsSequentially code to provide a global lock.

The diagram below shows how you would take the UsingEfCoreSqlServer extension method and alter it to become your custom database extension method (NOTE: click the diagram to get a bigger version).

The AuthP.CustomDatabaseExamples repo has a UsingEfCoreSqlite extension method in the SqliteSetupExtensions class which sets up a Sqlite database as the custom database. This has gone through the six steps shown in the diagram above.

3. Other, non-AuthP things you need to think about

There are a few other things outside the AuthP library that use your custom database. The main one is the tenant part, which provides the actual application that the users will use. In the AuthP examples there is one that mimics an application where users can enter invoices; another example would be managing a shop’s sales / stock. If you want to use your custom database for your tenant data, then you need to set that up too.

The first two options in the “Four Ways” diagram show that you have two ways to handle the tenant part outside of sharding.

  1. “All in one Db”: your tenant data is within the same database, so it must use the same custom database.
  2. “Separate AuthP / tenant data”: In this case your tenant data doesn’t have to use the same custom database that AuthP uses. 

Comparing two AuthP multi-tenant examples to see the changes

I created an example multi-tenant application using Sqlite as the custom database by copying an existing multi-tenant application that used the built-in SqlServer – see Example3 in the AuthP repo. This allows me to compare the original application with my new Sqlite version to show what has changed.

I created a new repo called AuthP.CustomDatabaseExamples and copied / updated an example multi-tenant application, using three projects whose names all start with “CustomDatabase1.”. The three projects are:

  • CustomDatabase1.InvoiceCode – code for the tenant data / features
  • CustomDatabase1.SqliteCustomParts – contains the Sqlite code / migration
  • CustomDatabase1.WebApp – an ASP.NET Core project providing a runnable web app

3a. Example3.InvoiceCode -> CustomDatabase1.InvoiceCode

The main changes are about using Sqlite for the invoice code. Here are the changes:

  • …AppStart.RegisterInvoiceServices – changed to use Sqlite.
  • InvoicesDesignTimeContextFactory (an IDesignTimeDbContextFactory) to create a migration for the Invoice DbContext using Sqlite (the AuthP Example3 Invoice DbContext used the built-in SqlServer). See my comments at the end of this class, which provide one way to create the migration.
  • You need to create a Sqlite migration for the Invoice DbContext using the InvoicesDesignTimeContextFactory detailed above.

3b. New CustomDatabase1.SqliteCustomParts project

This project contains the Sqlite extras that you need to use Sqlite with AuthP, such as the SqliteDbConfig class and the Sqlite migration of the AuthPermissionsDbContext.

3c. Example3.MvcWebApp.IndividualAccounts -> CustomDatabase1.WebApp

The big change in the ASP.NET Core project is in the Program class. In the Program class I have added #regions to show what has changed.

The other change is in the appsettings.json file where you need to provide the Sqlite connection string, which is quite different from other database connection strings.

Conclusion

I haven’t had many requests for the “custom database” feature for AuthP but, like the support for multiple languages in AuthP, I expect many will find it useful once it’s there.

Creating the “custom database” feature for normal (non-sharding) applications was fairly quick, but doing the same for sharding / hybrid applications turned out to be much more complex. I decided to release AuthP version 5.0.0 without the sharding / hybrid “custom database” feature because this release contains other improvements that people have asked for.

Watch this space for Part2 of the “custom databases” article to see how to use the “custom database” feature when building sharding / hybrid multi-tenant applications. Happy coding.

A pattern / library for methods that return a status, including localization

Last Updated: January 26, 2023 | Created: January 26, 2023

This article is about a pattern / library that allows you to create methods that return a status, that is a success / fail result plus any error messages – I call this “returning a status”. It first talks about a library called GenericServices.StatusGeneric (shortened to StatusGeneric) that implements the “return a status” pattern. It then describes another library called Net.LocalizeMessagesAndErrors (shortened to Localize-ME) that contains an implementation of the “return a status” pattern where the error messages can be in multiple languages (known as localization).

This article is part of a series about handling localization. The full list of articles in this series is:

  1. Improving the support of multiple languages in .NET applications
  2. Applying the improved multiple languages support to .NET applications
  3. A pattern / library for methods that return a status, including localization (this article)

TL;DR; – Summary of this article

  • The “return a status” pattern is useful wherever a method could return an error.
  • The StatusGeneric library provides a simple, but powerful implementation of the “return a status” pattern and this article provides information on how to use the StatusGeneric library.
  • A second library, Localize-ME, contains an implementation of the “return a status” pattern where the error messages can be returned in different languages. The localization part uses Localize-ME’s own localization services, which add extra features on top of the .NET localization services.

1. Introduction to the StatusGeneric library

In 2014 I created a library using Entity Framework (known as EF6) which contained the “return a status” pattern inside it. Then in 2019, when .NET and EF Core were stable, I built a number of libraries that used the “return a status” pattern, so I created the standalone StatusGeneric library so that I could use it in lots of places / libraries. So far this library has been downloaded more than 200,000 times, so obviously others find it useful too.

The following subsections will introduce you to the StatusGeneric library and how you can use it, starting with what the returned status contains.

The returned status: IStatusGeneric and IStatusGeneric<out T>

The key part of the “return a status” pattern is the IStatusGeneric interface, which defines what the returned status contains. I refer to the class implementing this interface as a status; it is returned to the calling method. The IStatusGeneric interface has the following properties:

  • An IReadOnlyList Errors property where the errors are stored. Each error is contained in a ValidationResult class, which holds the error message and optionally the name of the member that the error is linked to (ASP.NET Core uses this to show the error next to the actual entry that has the error).
  • A boolean IsValid property, which is true if there are no errors.
  • A boolean HasErrors property, which is true if there are errors.
  • A string Message property, which can contain a success message if there aren’t any errors, or “Failed with nn errors” if there are errors in the Errors property.

I also created the IStatusGeneric<out T> interface, which adds a Result property to the IStatusGeneric for methods that want to return a value as part of the status. The Result property is set to default if there are errors, to ensure that a caller can’t use the Result of a failed method.
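Based on the list of properties above, a minimal sketch of the two interfaces, plus a bare-bones status handler showing how the properties work together, might look like the code below. This is an illustration of the pattern, not the real StatusGeneric code – names such as StatusHandlerSketch are mine:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

public interface IStatusGeneric
{
    IReadOnlyList<ValidationResult> Errors { get; }
    bool IsValid { get; }
    bool HasErrors { get; }
    string Message { get; set; }
}

public interface IStatusGeneric<out T> : IStatusGeneric
{
    T Result { get; }
}

// Bare-bones implementation to show how the properties work together
public class StatusHandlerSketch : IStatusGeneric
{
    private readonly List<ValidationResult> _errors = new();
    private string _message = "Success";

    public IReadOnlyList<ValidationResult> Errors => _errors;
    public bool IsValid => !_errors.Any();
    public bool HasErrors => _errors.Any();

    // Returns the success message, or "Failed with nn errors" if any errors
    public string Message
    {
        get => HasErrors ? $"Failed with {_errors.Count} errors" : _message;
        set => _message = value;
    }

    // Adds an error with an optional member name, returning the status
    // so a method can end with: return status.AddError(...);
    public IStatusGeneric AddError(string errorMessage,
        params string[] memberNames)
    {
        _errors.Add(new ValidationResult(errorMessage, memberNames));
        return this;
    }
}
```

The real library has more features (e.g. combining statuses, the generic Result handling), but this sketch captures the core idea: one object carries both the success / fail state and the error details back to the caller.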

How to use the StatusGeneric library in your code

The first step is to add the StatusGeneric NuGet package to the project where you want to use the StatusGeneric library. Then you can create a method that returns a status, as shown below. Below the code I give a list of the various parts of the code:

public IStatusGeneric CheckNull(string? month)
{
    var status = new StatusGenericHandler();
    status.Message = "All went well";

    if (month == null)
        return status.AddError("input must not be null", 
             nameof(month));

    return status;
}

The list below contains a description of the use of the StatusGeneric library in the method above:

  • Line 1: The method returns a class that matches the IStatusGeneric.
  • Line 3: You need to create a status using the StatusGenericHandler constructor. This sets up an empty status, i.e. has no errors.
  • Line 4: Optionally you can set a success Message. Note that if there are errors, then the Message contains the string “Failed with nn errors”.
  • Line 7: This adds an error to the status and then returns the status.
    • Line 8: Optionally you can add the name of the member that has the error.
  • Line 10: This returns the status. In this case there are no errors, so it is valid.

NOTE: The name of the member that has the error (line 8) must be the actual name of the property in a class, which in the case above would be Month, not month. I use a method called CamelToPascal (see this extension class in the Localize-ME library) that changes the first character to a capital letter.
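For illustration, a minimal sketch of what such a CamelToPascal helper does is shown below (the real implementation lives in the Localize-ME library; this sketch is mine):

```csharp
public static class CamelToPascalSketch
{
    // Turns a camelCase parameter name (e.g. from nameof(month))
    // into the PascalCase property name used in the class, e.g. "Month"
    public static string CamelToPascal(this string name)
    {
        if (string.IsNullOrEmpty(name)) return name;
        return char.ToUpperInvariant(name[0]) + name.Substring(1);
    }
}
```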

And the code below provides an example of how to use the CheckNull method in an ASP.NET Core application, with a list of the various parts after the code:

NOTE: You need to add the EfCore.GenericServices.AspNetCore NuGet package to obtain access to the CopyErrorsToModelState method, which copies any errors to the Razor pages.

[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult CheckNull(string? month)
{
    var status = _exampleMethods.CheckNull(month);
    if (status.IsValid)
        return RedirectToAction(nameof(Index), 
             new { message = status.Message });

    status.CopyErrorsToModelState(ModelState);
    return View();
}

The list below contains a description of how the CheckNull method could be used in an ASP.NET Core application:

  • Line 5: You call the CheckNull method and you get back a status, which is of type IStatusGeneric.
  • Line 6: If the status is valid, then you go back to the Index page.
  • Line 8: The success message from the CheckNull method is passed on to show to the user.
  • Line 9: Otherwise, the status has errors, so we want to show the error(s).
  • Line 10: Using the CopyErrorsToModelState method from the EfCore.GenericServices.AspNetCore library, the errors (which are stored using the ValidationResult class) are added to the ASP.NET Core ModelState.
  • Line 11: This returns to the Get action and the error(s) in the ModelState are shown to the user.

Different ways to add errors to the status

The last section provided a simple example of using the StatusGeneric library, but in real life the checking for errors can be much more complex. Here are some of the situations you might come across and how the StatusGeneric library handles them:

  1. Using multiple checks to return all the errors to the user
  2. Combining errors from multiple statuses

1. Using multiple checks to return all the errors to the user

The example below shows how you might apply many checks so that the user gets all the errors in one go. This pattern is good for users, as all the errors are returned at the same time.

public IStatusGeneric CheckPassword(string password)
{
    var status = new StatusGenericHandler();

    //series of tests and then return all the errors together
    //Good because the user gets all the errors at once
    if (password.Length < 10)
        status.AddError("A password must be 10 or more in length",
            nameof(password));
    if (!password.Any(char.IsUpper))
        status.AddError("A password must contain an upper case character",
            nameof(password));
    if (!password.Any(char.IsLower))
        status.AddError("A password must contain a lower case character",
            nameof(password));
    if (!password.Any(char.IsDigit))
        status.AddError("A password must contain a number",
            nameof(password));
    
    return status;
}

2. Combining errors from multiple statuses

Sometimes the testing for errors is best coded by calls to other “return a status” methods, so the StatusGeneric library has a CombineStatuses method. This method copies the errors from the called “return a status” method into the caller’s status.

The code below shows an example of logging in, with tests on the email, the password and the actual login part; if successful, it returns the userId.

public IStatusGeneric<string> Login
    (string email, string password)
{
    var status = new StatusGenericHandler<string>();

    status.CombineStatuses(
        CheckValidEmail(email));

    if (status.HasErrors)
        return status;

    if (status.CombineStatuses(
            CheckPassword(password)).HasErrors)
        return status;

    var loginStatus = LoginUser(email, password);
    status.CombineStatuses(loginStatus);

    status.SetResult(loginStatus.Result);

    return status;
}

The list below contains a description of how the Login method works:

  • Line 1: The Login method will return the logged-in user’s userId.
  • Lines 6,7: The Login method calls a CheckValidEmail method that returns a status, which is copied into the caller’s status via the CombineStatuses method.
  • Lines 9,10: This returns the combined status if the status has errors.
  • Lines 12 to 14: This shows a shorter way to return on a combined status that has errors.
  • Line 16: The LoginUser method returns a status that contains (if there are no errors) the logged-in user’s userId.
  • Line 17: Its status is combined to pick up any errors.
  • Line 19: This sets the string Result to send back with the method’s status. NOTE: if there are errors the Result is set to default, which for a string is null.

Real-world examples of using the StatusGeneric library

I built a large library called AuthPermissions.AspNetCore (shortened to AuthP) which allows developers to create multi-tenant applications (among other features). Up to version 4.0.0 the AuthP library used the StatusGeneric library, and here is a link to the AuthTenantAdminService in the AuthP 4.0.0 version, which shows how it handles errors (tip: start at line 126, because that’s where the error handling starts).

2. How to use the Localize-ME library

At the end of 2022 I created another library, referred to as Localize-ME, that includes a version of the status classes that supports the StatusGeneric interfaces but handles multiple languages for the error messages (known as localization). I created this library to add localization to my AuthP library, and because AuthP already used the StatusGeneric library I added the StatusGenericLocalizer / StatusGenericLocalizer<T> classes to replace the StatusGenericHandler / StatusGenericHandler<T> classes.

The design of the StatusGenericLocalizer (and its IDefaultLocalizer service) is to have a default message / error within your code, written in a default language that you define when you register the IDefaultLocalizer service. This has two benefits: a) the code is easier to understand, and b) if localization isn’t turned on it can still provide messages / errors (useful in NuGet packages).

NOTE: The Localize-ME library was built to add features that overcome certain restrictions in the .NET localization services – see this article for more on this.

The Localize-ME library has comprehensive documentation, so this section is really about understanding the differences between the StatusGeneric library and the StatusGenericLocalizer class with its IDefaultLocalizer service in the Localize-ME library.

How is the StatusGenericLocalizer different from the StatusGeneric?

The code below does exactly the same as the StatusGeneric version shown in the “How to use the StatusGeneric library in your code” section ?LINK?, but it’s a lot longer than the StatusGeneric version. That’s because we need to provide the service and the keys to display the Message and Errors in different languages. The list after the code explains the parts that differ from the original StatusGeneric version ?LINK?.

public IStatusGeneric CheckNull(string? month)
{
    var status = new StatusGenericLocalizer(
         _defaultLocalizer);
    status.SetMessageString(
        "Success".ClassMethodLocalizeKey(this, false), 
        "All went well");

    if (month == null)
        return status.AddErrorString(
            "NullParam".JustThisLocalizeKey(this), 
            "The input must not be null.", 
            nameof(month).CamelToPascal());

    return status;
}

The list below contains a description of the use of the StatusGenericLocalizer class in the method above:

  • Lines 3,4: The StatusGenericLocalizer constructor needs an IDefaultLocalizer service. This requires you to localize your application and register the IDefaultLocalizer service on startup.
  • Lines 5 to 7: This adds a default success Message with two parts:
    • Line 6: This creates a unique key (see this document) to look up the message / error in the localization resource files.
    • Line 7: This is the message to use if the culture (language) of the user / app matches the default culture. Note that if there are errors, then the Message contains the string “Failed with nn errors”.
  • Lines 10 to 13: This adds an error to the status and then returns the status.
    • Line 10: This is a method that adds an error in the form of a constant string. Other methods allow FormattableStrings, which can contain dynamic data in the error.
    • Line 11: This creates a key to look up the error message in the localization resource files.
    • Line 12: The error string in the default culture.
    • Line 13: Optionally you can add the name of the member that has the error. In this case the Localize-ME library contains an implementation of the CamelToPascal method, which turns the parameter name into the member name used in the class.

Real-world examples of using the Localize-ME library

As I said, I have a library called AuthP, and I released version 4.1.0 with localization via the Localize-ME library. In the “Real-world examples of using the StatusGeneric library” section ?LINK? I gave you a link to the AuthTenantAdminService in AuthP 4.0.0, before the localization.

This link to the AuthTenantAdminService comes from the main branch, which does the same as the AuthP 4.0.0 version but supports localization (tip: start at line 128, because that’s where the error handling starts).

Conclusions

The “returning a status” method pattern is a simple, but powerful, pattern. I have used this pattern so many times that I built my first “return a status” code in 2014. Then in 2019, when .NET was stable, I released the StatusGeneric library with a version improved over the 2014 implementation, based on the feedback from using the first version for many years.

What I didn’t do well on the StatusGeneric library was documentation, and this article is one way to fix that issue, although I have also improved the StatusGeneric README file a few times to provide better documentation.

The Localize-ME library DOES have good documentation right from the start, with this page for the StatusGenericLocalizer classes. It needed the documentation because localization is MUCH more complex than the “return a status” pattern, but if you want to support multiple languages then you need it.

Happy coding.

Applying an improved multiple languages library to .NET applications

Last Updated: January 16, 2023 | Created: January 16, 2023

The last article covered why I added extra features to the .NET support of multiple languages (known as localization in .NET) via the Net.LocalizeMessagesAndErrors library (shortened to Localize-ME library). This article provides the details of how you would use the Localize-ME library to add localization to your .NET application. The full list of articles in this series is:

  1. Improving the support of multiple languages in .NET applications
  2. Applying the improved multiple languages support to .NET applications (this article)
  3. A library / pattern for methods that return a status, including localization (coming soon)

TL;DR; – Summary of this article

  • This article assumes you know the terms / concepts of localization. If you don’t, then go to the “A brief view of how .NET localization works” section, which introduces you to the localization concepts.
  • The Localize-ME library adds extra localization features focused on a) making it easier to localize an existing application, and b) stopping localization from making your code harder to understand.
  • The Localize-ME library provides two services:
    • The SimpleLocalizer service is good in small / simple applications and is simple to use.
    • The DefaultLocalizer service is good for large applications with many localizations, because it uses a more formal pattern that makes it easier to find / set up the localized messages.
  • This article provides eight steps to localize an ASP.NET Core application using either, or both, Localize-ME services. Each step provides a summary of what you need to do, with links to the Localize-ME documentation for the definitive details.

Super-quick explanation of what the Localize-ME library provides

The Localize-ME library adds extra localization features to improve localizing your application. The big difference from the .NET localization is that you keep your existing messages, error strings, etc. in your code (known as the default messages) while .NET localization would replace your messages with a key (referred to as localize key).

Keeping your existing messages in your application has lots of benefits, but the biggest are a) it’s easier to add localization to an existing application, and b) your code is easier to understand and test.

To make this work you provide the language (referred to as culture) that your default messages are written in when the Localize-ME library is registered. This allows the Localize-ME services to return the message in your code when the user’s / app’s culture matches the default messages culture. If the user’s / app’s culture isn’t the same as the default messages culture, then it uses the .NET localization services to look up the message in the correct resource file for the required culture.

The diagram below shows this for French (LHS), which isn’t the default messages culture, and English (RHS), which is the default messages culture. The blue italic words in the diagram explain the two different routes for the two cultures.

Setting the scene – the best places to use the Localize-ME library

The Localize-ME library adds two localize services on top of the .NET localization services, which provide new (better) ways to localize your application. From my experience I created two services, SimpleLocalizer and DefaultLocalizer, that both localize a message but work better in different cases. The list below provides my take on where each is best used.

1. The SimpleLocalizer service is good for using in Razor pages etc.

The SimpleLocalizer service provides the simplest approach to obtaining a localized message. The features that make it simpler are:

  • Auto localize key: the SimpleLocalizer service uses your message as the lookup key (which I call the localise key), which is unique, while the IStringLocalizer needs you to add a string that must be unique for each message.
  • Simpler injection: the SimpleLocalizer’s TResource part (the later section 3 describes the TResource part) is set on startup, so you only need ISimpleLocalizer to get an instance, while the IStringLocalizer needs IStringLocalizer<TResource>.
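To make the comparison concrete, here is a minimal sketch of injecting ISimpleLocalizer into a controller. The controller name and message are illustrative; the LocalizeString call matches the usage shown later in section 6.

```csharp
// Sketch: injecting ISimpleLocalizer into an ASP.NET Core controller.
// HomeController and the message are illustrative; note there is no
// <TResource> generic parameter, unlike IStringLocalizer<TResource>.
public class HomeController : Controller
{
    private readonly ISimpleLocalizer _simpleLocalizer;

    public HomeController(ISimpleLocalizer simpleLocalizer)
    {
        _simpleLocalizer = simpleLocalizer;
    }

    public IActionResult Index()
    {
        // The message itself is used to build the localize key
        ViewData["Greeting"] = _simpleLocalizer
            .LocalizeString("Welcome to the app", this);
        return View();
    }
}
```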

2. The DefaultLocalizer service is better on large localizations with backend projects

When I started adding localization to a medium size application, I found string constants for the localize key were hard to remember and hence error prone. The DefaultLocalizer service uses extension methods to create the localize key, which has the following benefits:

  • Formal localize key design: The localize key has a {class}, {method} and {localKey}, which tells you what class and method the message came from in your code.
  • Auto fill in of {class} & {method}: the extension methods will fill in the {class} & {method} parts of the localize key for you.
  • Short, but unique localize key: The library has various ways to make short, but unique localize keys. See this section from the Localize-ME documentation on how that works.

The common features of both the SimpleLocalizer and DefaultLocalizer are:

  • Your code is easier to understand: the .NET localization services replace your messages with a localise key, while the SimpleLocalizer / DefaultLocalizer services keep your message in your code.
  • Better missing entry handling: if the .NET localization services can’t find the localized message, they return the localize key, which isn’t that useful to the user. The SimpleLocalizer / DefaultLocalizer services instead return the default message, which might be in the wrong language but can be translated by the user (they also log a warning with very detailed information on what was missing – see this section in the Localize-ME documentation).

Things that the Localize-ME library doesn’t provide

The .NET localization services contain features that I don’t try to duplicate in the Localize-ME library.

Using the Localize-ME library in an ASP.NET Core application

The list below contains the eight steps for adding localization to a .NET application, with examples from an ASP.NET Core application. Each step provides a summary and examples, with links to the Localize-ME documentation, which contains the more detailed information.

  1. Setting up the .NET localization
  2. Setting up the Localize-ME services
  3. Adding your resource files and TResource classes
  4. Getting an instance of the SimpleLocalizer
  5. Getting an instance of the DefaultLocalizer
  6. Using the SimpleLocalizer service
  7. Using the DefaultLocalizer service
  8. How to unit test your localized code

1. Setting up the .NET localization

The Localize-ME library relies on the .NET localization services, so we start with those (the Localize-ME library adds its extra features, which are described later).

On startup you need to register the .NET localization, its resource files (see section 3 later about resource files) and set up how to obtain the user’s / app’s language (known as culture in .NET). The Localize-ME documentation covers how to set this up within an ASP.NET Core application in some detail, with lots of links to useful information and three example applications, so go to that document for the full information.

I did want to point out how the .NET localization obtains the user’s / app’s culture, which is needed to return the messages in the correct language. By default the parameter called RequestCultureProviders within the .NET localization options has three ways to try to obtain the culture, which are used in turn until one returns a culture – see the list below:

  1. QueryStringRequestCultureProvider – uses culture data in the query string, which allows you to create a URL that will set the culture.
  2. CookieRequestCultureProvider – looks for a culture cookie, which is useful if you want users to select the language they want.
  3. AcceptLanguageHeaderRequestCultureProvider – this picks up data from the browser to set the culture.

These are the main approaches to get the user / app cultures, and they are described here.
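As a sketch, the standard ASP.NET Core registration that enables these providers looks something like the code below; the culture names are illustrative, and the three providers above are included in RequestCultureProviders by default.

```csharp
// Sketch of standard ASP.NET Core request localization setup.
// The culture names are illustrative; the three default
// RequestCultureProviders are tried in the order listed above.
var supportedCultures = new[] { "en", "fr", "de" };
app.UseRequestLocalization(options =>
{
    options.SetDefaultCulture("en")
        .AddSupportedCultures(supportedCultures)
        .AddSupportedUICultures(supportedCultures);
});
```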

2. Setting up the Localize-ME services

I have made the setting up of the Localize-ME library as simple as possible. Here are the two registrations you need to add to your setup code in the Program class.

The code below registers the DefaultLocalizer – click here for information on the two parameters.

builder.Services.RegisterDefaultLocalizer(
    "en", supportedCultures);

The code below registers the SimpleLocalizer – click here for information on the optional parameter

builder.Services.RegisterSimpleLocalizer
    <HomeController>();

NOTE: The DefaultLocalizer service must be registered for the SimpleLocalizer service to work.
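Putting the pieces together, a minimal Program.cs setup might look like the sketch below. The supportedCultures values are illustrative; AddLocalization is the standard .NET registration from step 1, and the two Register… calls are the library registrations shown above.

```csharp
// Sketch of the registrations in Program.cs (culture names illustrative).
var builder = WebApplication.CreateBuilder(args);

// Step 1: standard .NET localization, resource files under "Resources"
builder.Services.AddLocalization(
    options => options.ResourcesPath = "Resources");

// Step 2: the two Localize-ME registrations; "en" is the culture
// of the default messages in your code
var supportedCultures = new[] { "en", "fr" };
builder.Services.RegisterDefaultLocalizer("en", supportedCultures);
builder.Services.RegisterSimpleLocalizer<HomeController>();
```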

3. Adding your resource files and TResource classes

Resource files hold the messages, in a specific language, in the Value column, while the localise key is held in the Name column. The .NET localization services use the culture to pick the correct resource file and then use the localize key to obtain the correct message to show to the user.

You link a resource file to your localization service via a class, known as a TResource class, in the ASP.NET Core project. The resource file uses part of the TResource class’s FullName as the start of its filename. For instance, if you used the HomeController class as a TResource class and the language is US English, then the resource file name would be (but see this link for the other file formats):

Controllers.HomeController.en-US.resx

Resource files aren’t the easiest part of the localization service, so I suggest you read the Localize-ME All about resource files documentation. This explains how to register them, and the different ways you can organise the resource files.
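For reference, an entry in a resource file such as Controllers.HomeController.fr.resx is a Name / Value pair; the sketch below shows the underlying XML of one entry (the key and the translation are illustrative).

```xml
<!-- Sketch of one entry in a .resx file; the Name is the localize key
     and the Value is the translated message (both illustrative) -->
<data name="HomeController_Index_Greeting" xml:space="preserve">
  <value>Bienvenue dans l'application</value>
</data>
```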

The other problem is finding all the localise keys and the appropriate localized messages and then adding them into the resource files. I give two approaches that I created when I was applying this library to my AuthP library.

4. Getting an instance of the SimpleLocalizer

Obtaining an instance of the SimpleLocalizer service is easy, as you have already defined the TResource class on startup.

  • If you are in C# code, then you use the interface ISimpleLocalizer via dependency injection to get the SimpleLocalizer service.
  • If you are in a Razor page (cshtml), you use “@inject ISimpleLocalizer SimpleLocalizer”.

NOTE: See this Localize-ME document for example code and more on ISimpleLocalizerFactory service.

5. Getting an instance of the DefaultLocalizer

The way to get an instance of the DefaultLocalizer service is the same as getting the SimpleLocalizer service, but you need the TResource class to define which resource file group this service should look in for the localized message.

The simplest approach is to use dependency injection with the IDefaultLocalizer<TResource> interface, e.g. IDefaultLocalizer<HomeController>.

But if you have backend code in other projects you can’t do that, because the TResource class must be in the ASP.NET Core project and your backend projects can’t link to these TResource classes, as that would create a circular reference. In this case you need the IDefaultLocalizerFactory service and some options.

To use the IDefaultLocalizerFactory service you need to register a singleton class (shown as MyOptions in the code below) which contains the Type of the TResource class(es) you need in your backend code, and then use the IDefaultLocalizerFactory service within your backend code, as shown below.

public class MyBackendCode 
{
    private readonly IDefaultLocalizer _defaultLocalizer;

    /// <summary>
    /// ctor
    /// </summary>
    public MyBackendCode(MyOptions options,
        IDefaultLocalizerFactory factory)
    {
        _defaultLocalizer = factory.Create(
            options.BackendResourceType);
    }
    //… rest of class left out
}

NOTE: See the Getting an instance of the IDefaultLocalizer service documentation for more details.
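The MyOptions class mentioned above is your own class, not part of the library; a minimal sketch of defining and registering it might look like this (the names BackendResourceType and BackendResource are illustrative).

```csharp
// Sketch: a singleton options class holding the Type of the TResource
// class that lives in the ASP.NET Core project (names illustrative).
public class MyOptions
{
    public Type BackendResourceType { get; set; }
}

// In Program.cs, register the singleton with the chosen TResource type
builder.Services.AddSingleton(new MyOptions
{
    BackendResourceType = typeof(BackendResource)
});
```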

6. Using the SimpleLocalizer service

The SimpleLocalizer service is simple to use! It has only two methods, and the localise key is derived from the message (see this section in the Using SimpleLocalizer documentation on how that works). The first method, shown below, handles a string message.

<label for="month">
    @(SimpleLocalizer.LocalizeString(
        "Provide a string (can be null)", this))
</label>

The other method, shown below, handles FormattableStrings, where you can provide dynamic data in the localized message.

<h1>
    @(SimpleLocalizer.LocalizeFormatted(
       $"List of {Model.NumBooks} books", this))
</h1>

NOTE: These two examples come from Razor pages, using the “@inject ISimpleLocalizer SimpleLocalizer” approach, but you can also use dependency injection within an ASP.NET Core Controller or Page.

7. Using the DefaultLocalizer service

The DefaultLocalizer service has only two methods. Both methods take in two parts:

  1. The localize key that uses a formal design that can contain the {callingClass}, {method} and {localKey}.
  2. The default message, either in a string or a FormattableString.

The localize key is created by one of the localize key extension methods which contain various combinations of the {callingClass}, {method} and {localKey} – this link shows you the various methods / combinations and what situation each one is used for.

This makes calling a DefaultLocalizer method a bit more work than the SimpleLocalizer method calls, but the extra effort provides you with a localise key that is easy to understand and easily tracked back to where you made the localize call.

NOTE: There is an interesting section about DefaultLocalizer localize key creation called the balance between readable localize key and being unique, which provides two ways to create both short and unique localize keys.
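As a concrete sketch, a DefaultLocalizer call inside a class method might look like the code below. The LocalizeStringMessage method name matches the stub example in section 8; the localize key and message are illustrative.

```csharp
// Sketch of a DefaultLocalizer call; "CheckDate" is an illustrative
// {localKey}, and ClassMethodLocalizeKey fills in the {class} and
// {method} parts of the localize key for you
var message = _defaultLocalizer.LocalizeStringMessage(
    "CheckDate".ClassMethodLocalizeKey(this, false),
    "The date is out of range.");
```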

8. How to unit test your localized code

Once you change your code to use the Localize-ME library, you will need to provide an ISimpleLocalizer or IDefaultLocalizer service to run your tests. I recommend a stubbing approach (see this Microsoft document about stubbing) in your unit tests, because the stub returns the default message in your code, which makes it easier to update your unit tests and makes the tests easier to understand.

The Localize-ME library contains stubs for the ISimpleLocalizer and IDefaultLocalizer services, called StubSimpleLocalizer and StubDefaultLocalizer respectively. The code below shows the use of the StubDefaultLocalizer (the StubSimpleLocalizer works the same way).

[Fact]
public void TestStubDefaultLocalizer()
{
    //SETUP
    var defaultLoc = new StubDefaultLocalizer();

    //ATTEMPT
    var message = defaultLoc.LocalizeStringMessage(
        "MyLocalizeKey".MethodLocalizeKey(this),
        "My message");

    //VERIFY
    message.ShouldEqual("My message");
    defaultLoc.LastKeyData.LocalizeKey.ShouldEqual(
        "TestStubDefaultLocalizer_MyLocalizeKey");
}

NOTE: The unit testing your localized code document gives more information on the StubSimpleLocalizer and StubDefaultLocalizer classes.

8a. Logging Localize-ME localization during unit testing

I also created a more powerful IDefaultLocalizer stub called StubDefaultLocalizerWithLogging, which returns the default message and optionally logs the full information of the localization data to a database. This provides a quick way to look at the localized messages, and it can find certain problems.

This stub is much more complex to use, but it does provide a very useful list of the localised messages. This helps in checking the localize keys and also sped up the process of building the resource files – see this section from the first article where I explain why I found it so useful, but here is a screenshot of a section of the logged localization data. Note the PossibleErrors column, which has found an existing entry in the database with the same localize key but a different message format.

NOTE: There is detailed documentation about how to setup and use the StubDefaultLocalizerWithLogging class.

Conclusions

I had a requirement from a developer to add localization to my AuthP library, so I started looking at the .NET localization services. The .NET localization didn’t have the features to provide an optional localization feature (i.e. your code still works without resource files) for my AuthP library as it was. So, I started to create the Localize-ME library, which makes localization optional in my AuthP library.

Once I knew I had to create a new library, I could reimagine how I would like to apply localization in a .NET application. For instance, I made it possible to keep your messages in your non-localized application, which means that your code is easier to update and understand. The .NET localization approach, which moves the messages to the resource files and replaces them with a localize key, makes the code (a bit) harder to understand.

Unfortunately, I didn’t come up with a way to remove the manual / tedious job of building resource files, but you might like to look at section 8a about capturing the localize data while running your unit tests. Personally, I found this very useful in providing the data to help in building resource files.

Other improvements came as I started to use the Localize-ME library in different ways: from a test application, updating the AuthP library, and creating another test example in the AuthP library. Each usage was different, which highlighted different issues, and each issue often provided new approaches or features. This means it took way longer than I thought it would to create the library, but I’m pleased with the final result. I hope the Localize-ME library will help you too.

Extra note: I ran a Twitter poll on whether users of the AuthP library would use the new localization feature, and the votes were 10 to 1 in favour of needing the localization feature. This makes sense as the AuthP library’s features help developers to create multi-tenant applications, which could be used in any country or countries.

Happy coding.

Improving the support of multiple languages in .NET applications

Last Updated: January 16, 2023 | Created: January 2, 2023

One of the users of my AuthPermissions.AspNetCore library (shortened to AuthP in this article) asked for support for multiple languages – known in .NET as localization. I looked at the .NET localization solution and it lacked some features I needed for my situation. I took some time to work out how to use the .NET localization code and in the end I built a small library called Net.LocalizeMessagesAndErrors, which wraps the .NET version with an interface that adds some extra features.

This article explains how this new library makes it easier to add / manage multiple languages in your applications, with later articles showing how to use this new localization library. The full list of the articles in this series is:

  1. Improving the support of multiple languages in .NET applications (this article)
  2. Applying an improved multiple languages library to .NET applications.
  3. A library / pattern for methods that return a status, including localization (coming soon)

TL;DR; – Summary of this article

  • This article provides a super-quick introduction to the .NET localization feature, as some of the concepts weren’t obvious to me at the start.
  • My problem was that if I added the .NET localization to my AuthP library, then no one could use the AuthP library unless they had set up the .NET localization with resource files, which is a lot of work.
  • My solution was to build a service that wraps around the .NET localization service and provides extra features. Specifically, you can build applications without .NET localization, and they will still work. This feature will also help developers who need to add localization to an existing application, as you can apply localization in stages.
  • Once I started looking at localization, I found several ways to either make the code easier to understand, or easier to use. The result is the Net.LocalizeMessagesAndErrors library.
  • After the quick explanation of how the Net.LocalizeMessagesAndErrors IDefaultLocalizer service works, I detail six localization challenges and how I got around them, plus a tip on how to set up localization resource files.

A brief view of how .NET localization works

At first, I was confused by how to support multiple languages because I had no idea what the terms mean and how they work together. Therefore, here is a short introduction from me, with links at the end to other articles that I found very useful:

  • Localization means providing different languages, e.g. English, German, Mandarin, in your application (I like the name Multilingual support as it’s more obvious, but I use localization because that’s what Microsoft calls it).
  • Globalization is about showing dates and numbers in the correct format, with some compare / order string methods.
  • With .NET localization you store the different messages in resource files in your application. Each resource file has a name based on:
    • A name, usually taken from a class’s FullName, e.g. Controllers.HomeController
    • A name representing the language it contains, e.g. “en-GB”
    • And an extension of .resx.
  • A resource file has entries with a Name (which I call the localize key) and a Value which holds your message in the correct language. The Name / Value entries in the resource file work like a dictionary, with the Name being the key.
  • You add a resource file for each language (known as culture) and in each resource file you add an entry for each message (Value) you want to show, with a unique localize key (Name) to use as the lookup.
  • You also need to set up the localization services – see this for how to set this up in an ASP.NET Core application, and the other links below.
  • You would get a localize service, like IStringLocalizer<TResource>, to obtain the localized message. There are three parts to getting the localised message:
    • The start of the resource filename is defined by the TResource’s FullName.
    • It then adds the current culture Name from the user, cookie, or other source (depending on what you set up).
    • Your .NET service then takes your localize key, e.g. _localizer[“YourKey”], and returns a string containing the entry found in the selected resource file.
    • You can also have formatted messages, such as $”Date is {0}”, which need extra data, e.g. _localizer[“YourKey”, DateTime.Now].
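The parts above can be seen in a standard .NET localization sketch; the key names are illustrative, and the TResource class (HomeController here) selects the resource file group.

```csharp
// Sketch of standard .NET localization via IStringLocalizer<TResource>.
// The TResource (HomeController) selects the resource file group;
// the keys are illustrative.
public class HomeController : Controller
{
    private readonly IStringLocalizer<HomeController> _localizer;

    public HomeController(IStringLocalizer<HomeController> localizer)
    {
        _localizer = localizer;
    }

    public IActionResult Index()
    {
        // Simple lookup: returns the Value for "YourKey" in the
        // resource file matching the current culture
        string message = _localizer["YourKey"];
        // Formatted lookup: the resource entry holds "Date is {0}"
        string dated = _localizer["YourDateKey", DateTime.Now];
        ViewData["Message"] = message;
        ViewData["Dated"] = dated;
        return View();
    }
}
```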

Once I understood the names / concepts the Microsoft’s documentation of .NET localization made much more sense to me. Here are links to articles about .NET localization that I found useful:

Super-quick explanation of how the IDefaultLocalizer works

This article is about why I implemented the Net.LocalizeMessagesAndErrors library and what new features it contains, but here is an overview of the IDefaultLocalizer service to help you understand the extra features this service provides.

In a nutshell, the IDefaultLocalizer service lets you put strings like “Hello!” or FormattableStrings like $"The date is {DateTime.Now}" in your code (I use the term message for these two types), which makes your code easier to understand. Have a look at the diagram below and read the blue italic words, which explain how the IDefaultLocalizer service works.

This article doesn’t tell you how to use the Net.LocalizeMessagesAndErrors library, but it highlights the main change – there is a message in each call. If you want more information on how to use the library, then see the “How to add multiple languages to an ASP.NET Core application using the Net.LocalizeMessagesAndErrors library” article (coming soon) or the Net.LocalizeMessagesAndErrors documentation.

The localize challenges I found and how I fixed them in my library

I spent a lot of time trying to come up with ways to use the .NET localization to work with my AuthP library, but it just didn’t work for me. Some of the problems were around adding localization to a NuGet package, but the biggest issue was the massive changes I would have to make to the AuthP library if I changed over to .NET localization.

The list below gives the localize challenges I found and how I overcame them. The list is in order with the biggest challenges first. They are:

  1. I didn’t want to turn all the messages / errors into just a localize key.
  2. I wanted a NuGet that works without having to setup the .NET localization.
  3. I didn’t like .NET’s localization’s handling of missing resource files / entries.
  4. I wanted a better format for the localise key to help creating unique strings.
  5. I wanted to unit test without setting up localization.
  6. I wanted to check the localized messages with their localize key.
  7. Tip: Use Excel (or other app) to setup the data for the resource files.

1. I didn’t want to turn all the messages / errors into just a localize key

As of version 4.0.0, the AuthP library has 100+ messages over five projects. Most of these messages are error messages while the rest are success messages. Here is an example of a success and an error message:

  • "Successfully added the new user."
  • $"There is already a Role with the name of {0}"

If I used .NET localization these messages would be turned into localize keys, which from my view has the following downsides:

  • The messages make great comments, and turning them into just a localise key would make the code harder to understand.
  • It’s a lot of work to move these messages to a resource file, and the messages are much harder to update.

My solution was to leave the current success / error messages where they are and define them as generic English (culture “en”) – I call these default messages. I already have a common pattern for my methods / services which handles success and error messages, so it was easy to update the code to pass the default messages to my localization wrapper called DefaultLocalizer. The process the DefaultLocalizer follows is:

  1. On registration of the DefaultLocalizer service, I define the culture of the default messages, in this case “en”.
  2. If the user’s / app’s culture starts with the default culture, then the default message is returned without having to use the .NET localization.
  3. If the user’s / app’s culture doesn’t start with the default culture, the DefaultLocalizer service uses the .NET localization to obtain the message from the resource files.
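The culture check in the steps above can be sketched as the decision below. This is not the library’s actual implementation, just an illustration of the logic; the method and parameter names are hypothetical.

```csharp
// Illustrative sketch (not the library's code) of the culture check
// the DefaultLocalizer performs before falling back to .NET localization.
string Localize(string userCulture, string defaultMessage,
    Func<string> netLocalizationLookup)
{
    const string defaultMessagesCulture = "en"; // set on registration
    if (userCulture.StartsWith(defaultMessagesCulture))
        return defaultMessage;       // e.g. "en-GB" matches "en"
    return netLocalizationLookup();  // look up in the resource files
}
```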

Here is an example of my improved common method to handle localization showing a success message and an error message.

public class ExamplesOfStatusGenericsLoc<TResource>
{
    private readonly IDefaultLocalizer<TResource> 
        _defaultLocalizer;

    public ExamplesOfStatusGenericsLoc(
        IDefaultLocalizer<TResource> defaultLocalizer)
    {
        _defaultLocalizer = defaultLocalizer;
    }

    public IStatusGeneric CheckNull(string? month)
    {
        var status = new StatusGenericLocalizer(_defaultLocalizer);
        status.SetMessageString(
            "Success".ClassMethodLocalizeKey(this, false), 
            "Successful completion.");

        if (month == null)
            return status.AddErrorString(
                "NullParam".JustThisLocalizeKey(this), 
                "The input must not be null.");

        return status;
    }
//...rest of class left out
}

NOTE: If any errors are added to the status, then the Message is changed to “Failed with {n} errors”. That’s just in case the success Message is incorrectly shown.

2. I wanted a NuGet that works without having to setup the .NET localization

If I just applied the .NET localization to the AuthP NuGet library, it would mean everyone that used this library would have to set up .NET localization with resources etc. The library is already complex, and the extra work needed to understand / set up .NET localization would put developers off using this library.

The solution I added to the DefaultLocalizer service is to return the default message if the .NET localization hasn’t been set up. This means when the localization version of the AuthP library is released:

  • The AuthP library doesn’t get any more complex unless the developer wants to use this new localization feature.
  • Developers that are already using the AuthP library can upgrade to the localization version without needing to change their code.

I hate to think what new and existing users would think if they had to set up .NET localization to use the AuthP library!

NOTE: You might not be creating a NuGet like I have, but if you are adding localization to an existing application, then this approach allows you to add localization in stages. That might be pretty useful.

3. I didn’t like .NET’s localization’s handling of missing resource files / entries

The .NET localization will return the localize key if no entry is found in the resource files. This typically doesn’t provide a good experience for the user. The DefaultLocalizer service can provide the default message which isn’t in the correct language, but easy for the user to translate.

The other issue with missing resource files / entries is reporting. The .NET localization does provide a ResourceNotFound parameter, which will be true if the localized message isn’t found, but if you want a log / event then you need to add that to each localization call. On the other hand, the DefaultLocalizer service provides a very detailed log – an example is shown below.

The message with the localizeKey name of 
'MissingResourceEntry_MissingEntry' 
and culture of 'fr' was not found in the 
'LocalizedWebApp.Resources.Controllers.HomeController' 
resource. 
The message came from 
DefaultLocalizerController.MissingResourceEntry, line 38.

This provides everything you need to correct this problem, including the class, method, and line number of where the localization came from.

4. I wanted a better format for the localise key to help creating unique strings

The .NET localization service allows you to use any string as the localize key, and it’s up to you to make sure it is unique. You can use string constants, e.g. “HelloMessage”, for the localise key, but when I built (and used!) the DefaultLocalizer service I found string constants were hard work and error prone.

My view is that string constants are fine for small applications, but for larger applications the localize key needs a standard format and methods to help the developer create unique localize keys quickly. My solution has a format of “{className}_{methodName}_{localKey}”, with the className and methodName being optional. The table below shows the three main versions that are used, with the first one used in 90% of cases.

Localise key string | Uniqueness
“MyClass_MyMethod_SetByDev” | Unique in the class and method – most used
“MyClass_SetByDev” | Unique in the class – useful for common errors
“SetByDev” | It’s the developer’s job to ensure it is unique

To implement this localize key format I have created a set of extension methods that automatically fill in the “{className}” and “{methodName}” for you. This has the following advantages:

  • Easier for the developer to create a unique localize key.
  • The developer can work out where the localize key was created.
  • You can cut / paste your localize code and the localise key will automatically change to the new class & method parts of the localize key.

Here are two examples taken from the ExamplesOfStatusGenericsLoc method shown earlier in this article:

  • "Success".ClassMethodLocalizeKey(this, false)
  • "NullParam".JustThisLocalizeKey(this),

You can get the full set of extension methods in the “Creating localize keys” document, which also covers some of the problems and solutions of the balance between a readable localize key and being unique in this section.

5. I wanted to unit test without setting up localization.

As I said, the AuthP library has five projects containing code, and I have nearly 400 unit tests, of which a third check errors or success messages. If I used .NET’s localization on its own, then I could easily stub out (see this Microsoft document about stubbing) the .NET’s localize methods, but would still have to change many of the unit tests to use the localize key instead of the actual error / success messages. It’s more work and makes the unit tests less easy to understand, as the actual error / success strings are gone.

Because the DefaultLocalizer can return the default messages, it’s easy to create a DefaultLocalizer stub that returns the actual error / success strings. The Net.LocalizeMessagesAndErrors repo contains several stubs, but in this case you need the StubDefaultLocalizer class.

The StubDefaultLocalizer class has the same methods as the DefaultLocalizer class, but it a) returns the default message, and b) holds the localize key data of the last localize call. This allows the unit tests to continue in the same way, but if you want to, you can check the localize key. See the code below, which shows how the StubDefaultLocalizer class works.

[Fact]
public void TestStubDefaultLocalizer()
{
    //SETUP
    var defaultLoc = new StubDefaultLocalizer();

    //ATTEMPT
    var message = defaultLoc.LocalizeStringMessage(
        "MyLocalizeKey".MethodLocalizeKey(this),
        "My message");

    //VERIFY
    message.ShouldEqual("My message");
    defaultLoc.LastKeyData.LocalizeKey.ShouldEqual(
        "TestStubDefaultLocalizer_MyLocalizeKey");
}

This works fine, but I found another type of stub that solved another issue I came across, as described in the next section.

6. I wanted to check the localized messages with their localize key

Once I started localizing my AuthP library, which has ~110 localizations, I soon found I really needed an overview of all the localizations to check on localise key uniqueness, format, duplicates etc. Stepping through the code to find each message is hard work and it’s easy to miss one.

So, I thought – can I write the localize information to a database when running my unit tests? At that point I created the StubDefaultLocalizerWithLogging class, which returns the default message, but (optionally) logs the full information of the localization data to a database. This provides a quick way to look at the localized messages, and it can find certain problems.

For each DefaultLocalizer usage it logs the localize key, culture, the message and where the localized entry was created (the full list of what is in the log can be found in the LocalizedLog class, which has 9 parameters).

The screenshot below is a section of the logged localization data. Note the PossibleErrors column, which has found an existing entry in the database with the same localize key but a different message format. NOTE: Click the screenshot to get a bigger version.

I have found quite a few localization issues by looking through the sorted data. I also found the logged list very useful when building resource files for other languages, because it gives me the Name (localize key) and the Value (string / format) that needs translating. My unit tests only find 75 localized messages, when in fact there are ~110 localized messages. For the ~35 localized messages that aren’t logged I had to take three extra steps to set up the entry in the resource file(s):

  1. Find the code that created the localized message.
  2. Work out what the localize key is.
  3. Copy the message format.

These three manual steps are tedious and error prone. It’s enough to make me improve my unit test coverage 😊.

The only downside of logging to a database is that the unit tests are slower – the Net.LocalizeMessagesAndErrors library has ~100 unit tests, which take ~1.5 seconds without logging to the database, but ~2 seconds with logging to the database. In the AuthP library, which has nearly 400 tests, the difference between having the log-to-database on or off is a smaller percentage.

Thankfully you can turn the database logging on or off by setting the `SaveLocalizesToDb` property to true or false respectively – see the documentation for the StubDefaultLocalizerWithLogging here.

7. Tip: Use Excel (or another app) to set up the data for the resource files.

This isn’t anything to do using DefaultLocalizer, but I found that adding entries to a resource files isn’t a nice process in Visual Studio (VS Code, with the ResX Editor extension, is better). In the end used Excel to entry the resource Name / Value and then turn in into a .csv file. The code below (taken from my AuthP repo) converts CVS to a resource file.

public void CreateResxFileFromCSV()
{
    var csvFilePath = "filepath to csv file";
    var resxFilePath = "filepath to EMPTY resource file";
    
    //see https://joshclose.github.io/CsvHelper/getting-started/#reading-a-csv-file
    using (var reader = new StreamReader(csvFilePath))
    using (var csv = new CsvReader(reader, 
          CultureInfo.InvariantCulture))
    {
        var records = csv.GetRecords<CsvInputOfResx>();
        //see https://learn.microsoft.com/en-us/dotnet/core/extensions/work-with-resx-files-programmatically#create-a-resx-file

        using (ResXResourceWriter writer = 
             new ResXResourceWriter(@resxFilePath))
        {
            foreach (var entry in records)
            {
                writer.AddResource(entry.Name, entry.Value);
            }
        }
    }
}

I find this especially useful if you need to change / add to your resource file, as it’s much easier to search / change in Excel.
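The CsvInputOfResx class used by csv.GetRecords isn’t shown above; it is just a DTO matching the two CSV columns. A minimal sketch is below – the property names are my assumption, matching the resource Name / Value columns:

```csharp
// Hypothetical sketch of the CsvInputOfResx DTO used by CsvHelper's
// GetRecords<T> - one property per CSV column.
public class CsvInputOfResx
{
    public string Name { get; set; }  // the localize key (resource Name)
    public string Value { get; set; } // the message / format (resource Value)
}
```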

Conclusions

The Net.LocalizeMessagesAndErrors library is relatively small (the DefaultLocalizer only has ~100 lines of code), but it took me more than five weeks of work! That’s because when I started to use the library I found a load of improvements – I got to local version 1.0.0-preview034 before I had finished. The result is that the library is much easier to use when updating an existing application to support multiple languages, and hopefully nicer to work with.

The changes I added came from applying the library to a) my AuthP library, b) adding a demo ASP.NET Core app within the library repo (see LocalizedWebApp), c) localizing the Example1 ASP.NET Core app in my AuthP library and d) writing the Net.LocalizeMessagesAndErrors documentation (writing the docs always shows me any bad interfaces).

In the following articles I will show how to use the Net.LocalizeMessagesAndErrors library to build localized .NET applications. There is also good documentation for this library that contains all the details if you want to try it out now.

Happy coding.

ASP.NET Core: Three(+1) ways to refresh the claims of a logged-in user

Last Updated: October 25, 2022 | Created: October 18, 2022

An ASP.NET Core application uses claims to hold the logged-in user’s authentication and authorization data. These claims are created on login and stored in a cookie or a JWT Token for quick access. This makes access to the claims very fast, but the downside is the claims are fixed. Most of the time the “fixed claims” approach works fine, but there are some circumstances where you might need to update the user’s claims.

This article describes three different ways to dynamically change a user’s claims for cookie and/or JWT Token authentication. The article also introduces some of the deeper parts of ASP.NET Core’s authorization and middleware, which might help you in understanding how ASP.NET Core works.

NOTE: This article covers some similar topics to the Advanced techniques around ASP.NET Core Users and their claims article, but includes new approaches provided by the new distributed cache library Net.DistributedFileStoreCache (referred to as FileStore cache in this article). The first example has a small improvement, while the other two approaches are new and work with JWT Token authentication.

This article is part of the series that covers .NET multi-tenant applications in general. The other articles in “Building ASP.NET Core and EF Core multi-tenant apps” series are:

TL;DR; – Summary of this article

  • ASP.NET Core creates HttpContext.User on login, which contains user information (like their name and email) in claims. The User, with its claims, is stored in a cookie or a JWT Token for quick access. You can think of the cookie / JWT Token as a kind of cache.
  • If you have data that a) is used in almost every HTTP request, and b) takes some time to calculate, then it’s a good candidate to calculate and add as a claim during login. See this article on how to add your own claims on login.
  • By default, the claims in a logged-in user won’t change until they log out and log back in again. Normally the “fixed claims” approach is fine, but there are various situations where I need to update a user’s claims, mostly around managing multi-tenant users.
  • Therefore, I have needed to create ways to refresh the claims of a logged-in user. In this article I describe three approaches:
    • 1. Update user claims via cookie event: This is the standard way to update an already logged-in user’s claims and is relatively easy, but it only works for cookie authentication – see this section on this approach.
    • 2. Replace a user claim on a change: This uses middleware to update a claim when the logged-in claim is out of date. This approach works with both cookie and JWT Token authentication – see this section on this approach.
    • 3. On-demand add a new claim: This uses middleware to add a new claim not already in your JWT Token or Cookie. This is useful if you have secret / personal data that you don’t want in a JWT Token because the data isn’t encrypted – see this section on this approach.
    • 4. Extra, IClaimsTransformation: After I released this article Mohammed Ahmed Hussien (twitter: @shoogn17) said that the IClaimsTransformation interface can also help to change a user’s claims, so I have added something about using IClaimsTransformation instead of the middleware approach.
  • All these examples rely on a distributed cache called Net.DistributedFileStoreCache that I created for these types of situations. It can read a cache value in ~25 nanoseconds, but adding a cache value is slow-ish (> 1 millisecond). The very fast cache read means using these approaches won’t slow down your application.

Setting the scene – three types of dynamically updating a logged-in user’s claims

I have created the AuthPermissions.AspNetCore library (shortened to AuthP in this article) which contains a) an improved ASP.NET Core Roles authorization system and b) features to help create an ASP.NET Core multi-tenant database system. AuthP’s improved Roles authorization adds a Roles/Permissions claim, and if the multi-tenant feature is activated, then a DataKey claim is also added.

In certain circumstances these AuthP Roles/Permissions and DataKey claims may change, and to handle this I have developed code to dynamically change a user’s claims. Here are the situations I have found:

  • In an AuthP multi-tenant application there is code to move a tenant’s data from one database to another, thus changing the tenant’s DataKey. This requires updating the DataKey claim of all the users linked to the moved tenant. You can try this out in AuthP’s Example4 hierarchical multi-tenant application and AuthP’s Example6 sharding multi-tenant application.
  • The AuthP library’s version of ASP.NET Core Roles allows a Role to be dynamically changed, which means that an admin user can change what pages / Web APIs are in a Role. If a Role is changed there might be security issues, so the user’s Roles/Permissions claim needs updating. You can try this out in AuthP’s Example2 WebApi application that uses JWT Token authentication.
  • The third approach is useful if you need a secret or personal claim value when using JWT Token authentication. Such values shouldn’t be added to the JWT Token because the token data isn’t encrypted. For instance, various personal privacy laws stop you from adding Personal Identifiable Information (PII) values to a JWT Token. You can try this out in AuthP’s Example2 WebApi application that uses JWT Token authentication.
  • At the end I also cover the IClaimsTransformation interface that allows you to change a user’s claims. The IClaimsTransformation service is an alternative to the middleware approach used in examples 2 and 3.

Introducing the three examples of updating a user’s claims

The three examples of updating a user’s claims use different implementation approaches. This allows you to choose the approach that works for you, but even if you never need these approaches, seeing how they use events and ASP.NET Core middleware might help you understand the ASP.NET Core code a bit more.

The three approaches in this article assume you have added extra claims to your users on login. This is described in this section of the article called “Advanced techniques around ASP.NET Core Users and their claims”. I recommend you read that article if you aren’t sure how to add extra claims to a user on login.

All the solutions rely on a distributed cache called Net.DistributedFileStoreCache that I created for these types of situations (you can find out more about the FileStore cache in this article). The FileStore cache’s key feature is that it has a very fast read time, measured in nanoseconds, which is important if you want your application to be fast, because each implementation is called on every HTTP request. It is also a distributed cache, so it will work on web sites using multiple instances.

Here is a list of the three examples with a quick summary, their pros and cons and a comment on performance:

  1. Update user claims via cookie event: Cookie authentication only.
    • Summary: This is the standard way to update an already logged-in user’s claims.
    • Pros: Can handle any type of change because it can change all the claims.
    • Cons: Doesn’t work with JWT Token authentication.
    • Performance: It re-calculates all the extra claims, but only for logged-in users. Very efficient, as the cookie is updated with the new claims.
  2. Replace a user claim on a change: JWT Token and cookie authentication.
    • Summary: This uses middleware to update a claim when the logged-in claim is out of date.
    • Pros: Works with both JWT Token and cookie authentication.
    • Cons: Could need a big cache file if there are lots of users.
    • Performance: It re-calculates one claim for all users, not just logged-in users. If there are a large number of users, then this can be slower than the third example.
  3. On-demand add a new claim: JWT Token and cookie authentication.
    • Summary: This adds a new claim not already in your JWT Token or Cookie.
    • Pros: Hides secret data, e.g. you shouldn’t add a Personal Identifiable Information (PII) claim when using a JWT Token.
    • Cons: None.
    • Performance: It has very good performance because the claim is only calculated if a user is logged in, but it would be slower than example 2 if a database change altered many users.

NOTE: At the end I talk about using services that follow the IClaimsTransformation interface, which could be used in examples 2 and 3 instead of adding middleware.

1. Update user claims via cookie event: Cookie authentication only

This example only works with cookie authentication, but it’s fairly simple and fast, i.e. it doesn’t add much extra time to each HTTP request. It’s also quite easy to adapt to different situations. The downside is you can’t use this with a JWT Token.

The two parts of this implementation are:

  • Detect Change: The code links to EF Core’s StateChanged event and detects a change to the entries that would change the claim. On such an event it writes the current UTC DateTime to the “LastChanged” entry in the cache – see the left side of the diagram below.
  • Apply to User: The code links to the cookie’s OnValidatePrincipal event, and if the user’s “LastChanged” claim DateTime is older than the “LastChanged” DateTime from the cache, then it will update the user’s claims and also create a new authentication cookie – see the right side of the diagram below.

The figure below shows how this example works.

The main code to implement contains three pieces of code:

  1. The event code to detect the database changes that require an update to user’s claims
  2. The GlobalChangeTimeService which sets / gets the “LastChanged” entry
  3. The event code called by cookie’s OnValidatePrincipal to check / update a user’s claims

Plus two support services: the GlobalChangeTimeService and the FileStore cache.

Let’s now look at the main code:

1.1 The event code to detect the database changes

I’m using EF Core which has a several events to track what is happing to the database. In this case I used EF Core’s ChangeTracker.StateChanged (see this useful article about EF Core events). The code can be found in the TenantKeyOrShardChangeService class inherits the IDatabaseStateChangeEvent interface and much be register via the ASP.NET Core DI provider. The constructor of the application’s DbContext contains an extra parameter containing an IEnumerable<IDatabaseStateChangeEvent> that contains any registered classes that have the IDatabaseStateChangeEvent interface.

In this example I am looking for two changes to the Tenant entity:

  • The ParentDataKey property being modified – this would change the DataKey claim, which defines the tenant filter key (lines 13 to 14).
  • The DatabaseInfoName property being modified – this is used in sharding and would change the DatabaseInfoName claim, which defines what database to use (lines 15 to 16).

The code below is taken from the TenantKeyOrShardChangeService class and shows the code to register its event listener, and the actual event handler.

public void RegisterEventHandlers(
    AuthPermissionsDbContext context)
{
    context.ChangeTracker.StateChanged += 
        RegisterKeyOrShardChange;
}

private void RegisterKeyOrShardChange(object sender, 
    EntityStateChangedEventArgs e)
{
    if (e.Entry.Entity is Tenant
        && e.NewState == EntityState.Modified
        && (!Equals(e.Entry.OriginalValues[nameof(Tenant.ParentDataKey)],
               e.Entry.CurrentValues[nameof(Tenant.ParentDataKey)]) ||
            !Equals(e.Entry.OriginalValues[nameof(Tenant.DatabaseInfoName)],
               e.Entry.CurrentValues[nameof(Tenant.DatabaseInfoName)]))
        ) //use Equals, as the indexers return object, so != would compare references
    {
        _globalAccessor.SetGlobalChangeTimeToNowUtc();
    }
}

The SetGlobalChangeTimeToNowUtc method is called if a modification is found. This method comes from the GlobalChangeTimeService class, which is described in the next section.

1.2 The GlobalChangeTimeService class

The GlobalChangeTimeService class provides a thin wrapper around the FileStore cache and has two methods that set and get the “ChangeAtThisTime” entry in the FileStore cache. It also handles the DateTime conversions. The methods are:

  • SetGlobalChangeTimeToNowUtc(), which sets the cache entry with the name “ChangeAtThisTime” with a value of DateTime.UtcNow.DateTimeToTicks()
  • GetGlobalChangeTimeUtc(), which returns a DateTime from the cache entry with the name “ChangeAtThisTime”, or DateTime.MinValue if that entry hasn’t been set.
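The class itself isn’t shown in this article, but a minimal sketch of how it could work is below. The ICacheStringStore interface is a stand-in I’ve added for the FileStore cache’s string get / set, and I’ve used DateTime.Ticks directly in place of the DateTimeToTicks extension – treat it as an illustration, not the real code:

```csharp
using System;

// Stand-in for the FileStore cache's string get / set (hypothetical interface)
public interface ICacheStringStore
{
    string Get(string key);             // returns null if the key isn't found
    void Set(string key, string value);
}

public class GlobalChangeTimeService
{
    private const string ChangeAtThisTimeKey = "ChangeAtThisTime";
    private readonly ICacheStringStore _cache;

    public GlobalChangeTimeService(ICacheStringStore cache) => _cache = cache;

    // Sets the "ChangeAtThisTime" entry to DateTime.UtcNow, stored as ticks
    public void SetGlobalChangeTimeToNowUtc() =>
        _cache.Set(ChangeAtThisTimeKey, DateTime.UtcNow.Ticks.ToString());

    // Returns the stored time, or DateTime.MinValue if no entry has been set
    public DateTime GetGlobalChangeTimeUtc()
    {
        var ticks = _cache.Get(ChangeAtThisTimeKey);
        return ticks == null
            ? DateTime.MinValue
            : new DateTime(long.Parse(ticks), DateTimeKind.Utc);
    }
}
```

Returning DateTime.MinValue when no entry exists means a user’s claims are never considered out of date until the first change is recorded.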

1.3 The event code called by cookie’s OnValidatePrincipal

The event code can be found in the SomethingChangedCookieEvent class, and you need to register it in your Program class using the code shown below.

builder.Services.ConfigureApplicationCookie(options =>
{
    options.Events.OnValidatePrincipal = 
      SomethingChangedCookieEvent
          .UpdateClaimsIfSomethingChangesAsync;
});

The code shown below comes from the SomethingChangedCookieEvent class. The steps are:

  1. Lines 4 to 7: Gets the current user’s claims and the latest time a Global Change was found.
  2. Lines 9 to 11: If the user’s claims are older than the Global Change time, it needs to update the user’s claims.
  3. Lines 19 to 25: This updates the AuthP claims via AuthP’s ClaimsCalculator, which will recalculate the extra claims, including the claim that holds the last time the claims were updated.
  4. Lines 27 to 29: This a) creates a new User (of type ClaimsPrincipal) for this HTTP request, and b) in line 30 it says the authentication cookie should be updated with these new claims.
public static async Task UpdateClaimsIfSomethingChangesAsync
    (CookieValidatePrincipalContext context)
{
    var originalClaims = context.Principal.Claims.ToList();
    var globalTimeService = context.HttpContext.RequestServices
       .GetRequiredService<IGlobalChangeTimeService>();
    var lastUpdateUtc = globalTimeService.GetGlobalChangeTimeUtc();

    if (originalClaims.
           GetClaimDateTimeTicksValue(EntityChangeClaimType) 
        < lastUpdateUtc)
    {
        //Need to refresh the user's claims 
        var userId = originalClaims.GetUserIdFromClaims();
        if (userId == null)
            //this shouldn't happen, but best to return
            return;

        var claimsCalculator = context.HttpContext.RequestServices
            .GetRequiredService<IClaimsCalculator>();
        var newClaims = await claimsCalculator
            .GetClaimsForAuthUserAsync(userId);
        //Copy over unchanged claims
        newClaims.AddRange(originalClaims.
            RemoveUpdatedClaimsFromOriginalClaims(newClaims)); 

        var identity = new ClaimsIdentity(newClaims, "Cookie");
        var newPrincipal = new ClaimsPrincipal(identity);
        context.ReplacePrincipal(newPrincipal);
        context.ShouldRenew = true;
    }
}

private static IEnumerable<Claim> 
    RemoveUpdatedClaimsFromOriginalClaims(
        this List<Claim> originalClaims, List<Claim> newClaims)
{
    var newClaimTypes = newClaims.Select(x => x.Type);
    return originalClaims
        .Where(x => !newClaimTypes.Contains(x.Type));
}

2. Replace a user claim on a change: JWT Token and cookie authentication

This approach uses middleware to replace a claim, not by updating the user’s claims as in the first approach, but by updating the current HTTP User on every HTTP request. The pro of this approach is that it works for JWT Token and cookie authentication, but the con is it needs code to run on every HTTP request, which could cause some performance problems. I only considered this approach after I had created the FileStore distributed cache, as it has a read time of ~25 nanoseconds.

Like the first example there are two parts to this approach:

  • Detect Change: The code links to EF Core’s StateChanged event and detects a change to the entries that would change the claim. In this case it calculates the new claim value for each affected user and stores each claim value in the cache – see the left side of the diagram below.
  • Apply to User: The extra middleware code runs after the authorization middleware and if a new claim for the current user is found in the cache, then it replaces the out-of-date claim and creates a new ClaimsPrincipal – see the right side of the diagram below.

The figure below shows how this example works.

The main code to implement contains two pieces of code:

  1. The event code to detect the database changes and add replacement claims to the cache.
  2. The middleware which updates the HTTP User if a replacement claim is found in the cache.

It also uses the Net.DistributedFileStoreCache library to provide a cache with a very fast read.

Let’s now look at the main code:

2.1 The event code to detect the database changes

In the first example the database event code just had to detect a change, so it used EF Core’s StateChanged event. In this example we need to detect a change and then calculate the updated claim once the database has been updated, which makes the code more complex. You can find the code in the RoleChangedDetectorService class, but because the code is quite long, I will describe the various parts with pseudo-code.

The code in the AddPermissionOverridesToCache method calculates the claim for each affected user and stores the new claim in the FileStore cache with a key containing the userId of the user that the claim applies to.
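That flow can be summarised in pseudo-code (identifiers other than AddPermissionOverridesToCache are descriptive names of mine, not the real code):

```
AddPermissionOverridesToCache:
    for each Role whose Permissions were modified
        find all the users linked to that Role
    for each affected user
        recalculate the user's packed Permissions claim value
        store that value in the FileStore cache, keyed on the user's userId
```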

The RoleChangedDetectorService inherits the IDatabaseStateChangeEvent interface and must be registered via the ASP.NET Core DI provider. The constructor of the application’s DbContext contains an extra parameter containing an IEnumerable<IDatabaseStateChangeEvent> that contains any registered classes that have the IDatabaseStateChangeEvent interface.

2.2. The middleware which updates the HTTP User’s outdated claim

The middleware code can be found in the UpdateRoleClaimMiddleware class, which provides both the extension method to register the middleware and the code that will be called on every HTTP request. Here is the code you need in your Program class to add this code into the middleware pipeline (see highlighted line) – note that the UsePermissionsChange method must come after UseAuthorization.

//other code left out
app.UseAuthentication();
app.UseAuthorization();
app.UsePermissionsChange();
//other code left out

The actual method in the UpdateRoleClaimMiddleware class that updates a claim if that claim has been updated is shown below, with this list explaining what each part does and where that code is found:

  1. Lines 4 to 5: It only looks at logged-in users; not-logged-in requests are ignored.
  2. Lines 12 to 13: This looks for a replacement value for the Permissions claim. If it’s null, then there is no replacement and the current User is used.
  3. Lines 17 to 23: This gets the current User’s claims and replaces the Permissions claim with a new claim whose value is taken from the cache.
  4. Lines 25 to 28: This creates a new ClaimsIdentity containing the updated claim. This new user is sent back to the outer code, which assigns it to the HttpContext’s User property.
public static async Task<ClaimsPrincipal> ReplacePermissionsMiddleware(
    IServiceProvider serviceProvider, ClaimsPrincipal user)
{
    var userId = user.GetUserIdFromUser();
    if (userId != null)
    {
        //There is a logged-in user, find if the
        //FileStore cache contains a new Permissions claim
        var fsCache = serviceProvider.GetRequiredService
            <IDistributedFileStoreCacheClass>();

        var replacementPermissions = await fsCache.GetAsync(
            userId.FormReplacementPermissionsKey());
        if (replacementPermissions != null)
        {
            //Replacement permissions claim found, so update the User

            var updateClaims = user.Claims.ToList();
            var found = updateClaims.FirstOrDefault(c =>
                c.Type == PermissionConstants.PackedPermissionClaimType);
            updateClaims.Remove(found); 
            updateClaims.Add(new Claim(
                PermissionConstants.PackedPermissionClaimType, 
                replacementPermissions));

            var appIdentity = new ClaimsIdentity(
                updateClaims, 
                user.Identity!.AuthenticationType);
            return new ClaimsPrincipal(appIdentity);
        }
    }
    
    return null; //no change to the current user
}

The ReplacePermissionsMiddleware method is called from the code that registers the middleware. If the method returns null, then the current HTTP User is unchanged as there was no update. If there is an update to the User’s claims the method returns a new User (ClaimsPrincipal), which is then assigned to the HttpContext.User property.

3. On-demand add a new claim in middleware: JWT Token and cookie

The final example is similar to the second example in that it uses middleware, but it’s the middleware that calculates the claim rather than the database event code. This approach is good when you have secret / sensitive claims that you don’t want to put in a JWT Token, because a JWT Token isn’t encrypted. For instance, various privacy rules say that Personal Identifiable Information (PII), e.g. a user’s email address, shouldn’t be stored in a JWT Token.

The two parts to this approach are:

  • On-demand Claim: The middleware code will look for the claim value in the cache. If the cache value is empty, then it will access the database to get the latest value and store that in the cache. Finally, the code will add the claim to the HTTP User.
  • Detect Change: If a database change alters the cache value, then it removes any existing cache value, thus causing the middleware to recalculate the claim value for the user.

The figure below shows how this example works.

The main code to implement contains two pieces of code:

  1. The middleware which adds a new claim to the HTTP User from the cache.
  2. The event code to detect the database changes and removes the cache value.

It also uses the Net.DistributedFileStoreCache library to provide a cache with a very fast read.

Let’s now look at the main code:

3.1 The middleware which adds a new claim

The middleware code can be found in the AddEmailClaimMiddleware class, which provides both the extension method to register the middleware and the code that will be called on every HTTP request. Here is the code you need in your Program class to add this code into the middleware pipeline (see highlighted line) – note that the UseAddEmailClaimToUsers method must come after UseAuthorization.

//other code left out
app.UseAuthentication();
app.UseAuthorization();
app.UseAddEmailClaimToUsers();
//other code left out

The actual method in the AddEmailClaimMiddleware class that adds the new claim is shown below, with this list explaining what each part does and where that code is found:

  1. Lines 4 to 5: It only looks at logged-in users; not-logged-in requests are ignored.
  2. Lines 11 to 13: This looks for the user’s Email claim value in the cache. If it’s null, then it needs to access the database for the latest email of the current HTTP User.
  3. Lines 16 to 26: This gets the current user’s email from the database and adds a cache entry so that the next time it doesn’t have to query the database.
  4. Lines 30 to 35: This creates a new ClaimsIdentity containing the added email claim. This new user is sent back to the outer code, which assigns it to the HttpContext’s User property.
public static async Task<ClaimsPrincipal> AddEmailClaimToCurrentUser(
    IServiceProvider serviceProvider, ClaimsPrincipal user)
{
    var userId = user.GetUserIdFromUser();
    if (userId != null)
    {
        //There is a logged-in user, so look for their email claim in the cache
        var fsCache = serviceProvider.GetRequiredService
            <IDistributedFileStoreCacheClass>();

        var usersEmail = await fsCache.GetAsync(
            userId.FormAddedEmailClaimKey());
        if (usersEmail == null)
        {
            //Not set up yet, get the user's email 
            var context = serviceProvider.GetRequiredService
                 <AuthPermissionsDbContext>();
            usersEmail = context.AuthUsers
                 .Where(x => x.UserId == userId)
                 .Select(x => x.Email).FirstOrDefault();
            
            if (usersEmail == null)
                return null; //shouldn't happen, but could in certain updates

            //Add to cache so next time it will be quicker
            await fsCache.SetAsync(userId.FormAddedEmailClaimKey(), usersEmail);
        }

        //We need to add the Email from the cache
        var updateClaims = user.Claims.ToList();
        updateClaims.Add(new Claim(ClaimTypes.Email, usersEmail));

        var appIdentity = new ClaimsIdentity(updateClaims,
            user.Identity!.AuthenticationType);
        return new ClaimsPrincipal(appIdentity);
    }
    
    return null; //no change to the current user
}

The AddEmailClaimToCurrentUser method is called from the code that registers the middleware. If the user is logged in, the method will return a new User (ClaimsPrincipal) with the email claim added, which is then assigned to the HttpContext.User property. For users that aren’t logged in the method returns null, which means the current User should be used.

3.2. The event code to detect the change of a user’s email

The database event code is very simple – if a user’s email has changed, then make sure the cache entry linked to the user’s email is removed. That means the next time that user accesses the web app the AddEmailClaimMiddleware will recalculate the user’s email (and add the cache entry).

The code below comes from the EmailChangeDetectorService class.

public void RegisterEventHandlers(AuthPermissionsDbContext context)
{
    context.ChangeTracker.StateChanged += 
         delegate(object sender, EntityStateChangedEventArgs e)
    {
        if (e.Entry.Entity is AuthUser user
            && e.NewState == EntityState.Modified
            && !Equals(e.Entry.OriginalValues[nameof(AuthUser.Email)],
               e.Entry.CurrentValues[nameof(AuthUser.Email)])
            ) //use Equals, as the indexers return object, so != would compare references
            //Email has changed, so we remove the current cache value
            _fsCache.Remove(user.UserId.FormAddedEmailClaimKey());
    };
}

The EmailChangeDetectorService inherits the IDatabaseStateChangeEvent interface and must be registered via the ASP.NET Core DI provider. The constructor of the application’s DbContext contains an extra parameter containing an IEnumerable<IDatabaseStateChangeEvent> that contains any registered classes that have the IDatabaseStateChangeEvent interface.

4. Extra: Using IClaimsTransformation to update the user’s claims

I wanted to add this extra part to talk about services that implement the IClaimsTransformation interface to update the user’s claims. Like the middleware approach, IClaimsTransformation services run on every HTTP request after the AuthenticateAsync call.

I haven’t used IClaimTransformation services but when Mohammed Ahmed Hussien (twitter: @shoogn17) pointed it out I can see that it might be easier for people to use because:

  • You can use the normal constructor dependency injection to inject other services you need in your code.
  • Unlike the middleware approach a IClaimTransformation service only runs if there a logged-in user, which means you don’t have to add code to ignore non-logged in users.

This section from the Microsoft docs gives you a simple example of how to create and register an IClaimsTransformation service.
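To make this concrete, here is a hedged sketch of what such a service could look like for the email-claim example earlier in this article. This is my own code, not from the AuthP repo: I assume the FileStore cache is accessed via the standard Microsoft.Extensions.Caching.Distributed.IDistributedCache interface, and I reuse the FormAddedEmailClaimKey key-builder shown earlier. The interface implemented is ASP.NET Core’s Microsoft.AspNetCore.Authentication.IClaimsTransformation.

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Caching.Distributed;

//A sketch, not the AuthP library's code
public class AddEmailClaimsTransformation : IClaimsTransformation
{
    private readonly IDistributedCache _fsCache;

    //Normal constructor DI works here, unlike in middleware
    public AddEmailClaimsTransformation(IDistributedCache fsCache)
    {
        _fsCache = fsCache;
    }

    public async Task<ClaimsPrincipal> TransformAsync(
        ClaimsPrincipal principal)
    {
        //TransformAsync can be called more than once per request,
        //so don't add the claim if it is already there
        if (principal.HasClaim(c => c.Type == ClaimTypes.Email))
            return principal;

        var userId = principal.FindFirst(
            ClaimTypes.NameIdentifier)?.Value;
        if (userId == null)
            return principal;

        //Read the cached email (FormAddedEmailClaimKey was shown earlier)
        var email = await _fsCache.GetStringAsync(
            userId.FormAddedEmailClaimKey());
        if (email == null)
            return principal;

        principal.AddIdentity(new ClaimsIdentity(
            new[] { new Claim(ClaimTypes.Email, email) }));
        return principal;
    }
}
```

You would then register it with something like builder.Services.AddTransient<IClaimsTransformation, AddEmailClaimsTransformation>(); – see the Microsoft docs linked above for the registration details.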

I haven’t compared the middleware and IClaimsTransformation service for performance, but I would expect both of them to be fast, assuming your code doesn’t access a database on every HTTP request. (I found a couple of articles about IClaimsTransformation which have database accesses on every HTTP request. I don’t recommend that for performance reasons, which is why I use the FileStore cache to keep the claim update fast.)

Conclusions

This article gives a detailed look at three ways to update the claims of a logged-in user in an ASP.NET Core application. The three examples provide a wide range of approaches that cover most claim update situations. And each example can be tweaked to perform better for specific changes: for instance, the first example updates all logged-in users, but it could be changed to be more selective about which users are updated.

I started with the cookie-only approach, which I used many years ago on my first ASP.NET Core application for a client. It’s pretty simple and performs very well when changes are infrequent. I have used this for many years, including in clients’ applications, and it is well tried and tested. The only change I have added in this article is the use of the FileStore distributed cache, which makes it a bit faster.

For many years I didn’t have a viable solution for changing the claims when using JWT Token authentication, which is a problem as many frontend frameworks work better with a JWT Token. It wasn’t until I built the FileStore distributed cache, which has a very fast read time, e.g. ~25 nanoseconds (see the FileStore distributed cache benchmark for full data), that I found an approach with good per-HTTP-request performance.

Examples 2 and 3 offer two approaches that work with both cookie and JWT Token authentication, with their implementations almost the reverse of each other: example 2 calculates the updated claims within the database event, while example 3 calculates the claims in the middleware code. These different approaches also provide different performance characteristics – see the performance notes in the list of the three approaches in the introduction section for more on this.

NOTE: These three examples are in the AuthP repo’s example applications and can be tried by running the various ASP.NET Core applications. The “update user claims via cookie event” version can be found in AuthP’s Example4 hierarchical multi-tenant application and AuthP’s Example6 sharding multi-tenant application. The last two examples work with JWT Tokens, so both middleware versions are in AuthP’s Example2 WebApi application that uses JWT Token authentication.

Happy coding.

How to take an ASP.NET Core web site “Down for maintenance”

Last Updated: October 20, 2022 | Created: September 20, 2022

If you have an e-commerce or business web app used by lots of users, then you really don’t want that app to be “down” (e.g. “site offline” or “site not found”) because it’s bad for business. But at the same time some database changes are just too complex to allow users to access the database while the data is being changed. This article describes a way to momentarily divert users while a database is changed, which means that the database change code has exclusive access, and the change has the smallest effect on your logged-in users.

I designed this approach for multi-tenant applications, especially when using sharding. In these sorts of applications a single tenant might need to be changed or moved, and the code to do that needs exclusive access – see this Microsoft article which describes the split and merge processes, two examples of changes that need exclusive access.

This article is part of the series that covers .NET multi-tenant applications in general. The other articles in “Building ASP.NET Core and EF Core multi-tenant apps” series are:

  1. The database: Using a DataKey to only show data for users in their tenant
  2. Administration: different ways to add and control tenants and users
  3. Versioning your app: Creating different versions to maximise your profits
  4. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  5. Advanced techniques around ASP.NET Core Users and their claims
  6. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core
  7. Three ways to securely add new users to an application using the AuthP library
  8. How to take an ASP.NET Core web site “Down for maintenance” (This article)
  9. Three ways to refresh the claims of a logged-in user

TL;DR; – Summary of this article

  • The feature described solves a problem that can arise in multi-tenant applications: it can temporarily stop users from accessing a tenant’s data while a complex change is applied to that data. A “complex change” might be moving a tenant’s data to another database.
  • The solution uses ASP.NET Core’s middleware to intercept every HTTP request and check that the data the user might access isn’t “down”, i.e. being changed and so mustn’t be accessed. If the data the user uses is “down”, they are diverted to a “down for maintenance – back soon” page.
  • Because the middleware is called on every HTTP request, I have used the FileStore distributed cache, which has a read time of ~25 ns, which means this feature doesn’t slow down the application.
  • I have implemented the code in version 3.4.0 of my open-source AuthPermissions.AspNetCore library – see the “Down for Maintenance” documentation. But the design and code of this feature can be copied to any ASP.NET Core application.

Setting the scene – why did I need this feature

I have a library called AuthPermissions.AspNetCore (referred to as AuthP in this article) that helps developers build complex multi-tenant applications, including sharding, that is, each tenant having their own database. One of the best ways to manage lots of databases is Azure SQL Server elastic pools, but the suggested elastic pool support library is no longer supported. So, if I wanted to use SQL Server elastic pools, then I needed to build code that implements the split-merge process.

I had built most of the features needed, like defining a tenant and the keys for each tenant and sharding, in version 3.0.0 of the AuthP library, but the last missing feature was the ability to stop users from accessing a tenant while it is changed / moved (I use the term move for both split and merge). That’s because if a user is accessing the tenant data at the same time as a move, then the user might get the wrong data or, more crucially, it can cause data loss during the move.

The diagram below shows the process I need to build if I want to successfully change / move a tenant’s data while the application is still running. Note that only tenant users linked to “tenant 123” are diverted, while users not linked to “tenant 123” work normally.

NOTE: In the AuthP library the key(s) to a tenant’s data are held in the user’s claims, which means that after a change / move the user’s tenant claim(s) need updating. The AuthP library has a feature called “update claims on tenant change” – click the link to go to the documentation.

It turns out that the solution to implement this “down” process is to use ASP.NET Core’s middleware. By adding an extra middleware in the correct place, you can intercept a user and divert them to another page / url while a move / change is in action. I call a divert a “down” because the tenant is “down for maintenance” while the change / move is being executed.

The downside of adding the extra middleware is that its code is called on every HTTP request. This means the middleware needs to be fast, otherwise you will slow down your whole application for a few, infrequent change / move diverts. I solved this by creating the FileStore distributed cache, which has a very fast read time (e.g. ~25 ns).

Read on to see how this works and how you could use it.

Design aims: what database changes do I want to cover?

The main “down” feature is temporarily diverting users accessing a tenant database while a change / move is being applied, but I also added some extra diverts, which are listed below:

  • Manual, application “down”: Allows an admin user to manually “down” the whole application. Every user apart from the admin who took the app “down” will be diverted to a page with an explanation and expected time when the app will be available.
  • Manual, tenant database “down”: Allows an admin user to manually “down” a tenant database, thus diverting all users linked to the tenant database to a page saying, “stopped by admin”. Access to the tenant can be restored by an admin manually removing this “down”.
  • Tenant database Delete: This permanently diverts all users linked to the deleted tenant to a page saying, “the tenant is deleted”. This is a permanent divert, but it can be removed manually. 

Here is a diagram that shows how the “down for maintenance” feature can be implemented in ASP.NET Core.

The rest of the article describes each step in “down for maintenance” feature, with references to the code in my AuthP library. The steps are:

  1. Startup: Registering the services
  2. Adding a StatusController (or an equivalent Web API)
  3. Using the ISetRemoveStatus service to set / remove a “down” state
  4. Understanding the “down for maintenance” middleware
  5. Other things to consider when moving a tenant database

1. Startup: Registering the services

There are two parts to setting up the “down for maintenance” feature:

  • Registering the “down for maintenance” services
  • Adding the “down for maintenance” middleware.

Both parts are applied in the ASP.NET Core Program / Startup code. First is the registering of the FileStore cache, which holds the various “down” statuses, and the SetRemoveStatus class, which provides simple methods to add / remove “down” statuses. The code below is added in the startup section that registers services with the .NET dependency injection provider.

//previous code left out
builder.Services.AddDistributedFileStoreCache(options =>
{
    options.WhichVersion = FileStoreCacheVersions.Class;
}, builder.Environment);

builder.Services.AddTransient
     <ISetRemoveStatus, SetRemoveStatus>(); 

The “down for maintenance” middleware is added in the “app” part of the ASP.NET Core startup code – see the UseDownForMaintenance line below that adds the extra middleware.

var app = builder.Build();
//other app code left out

app.UseAuthentication();
app.UseAuthorization();
app.UseDownForMaintenance();

//other code left out

The important thing is that the “down for maintenance” middleware is added AFTER the UseAuthorization method. That’s because the “down for maintenance” middleware needs access to the user’s claims.

2. Create a Controller / web APIs to handle the “down for maintenance”

You need pages / APIs to handle the following:

  • For the admin users
    • Look at all the current “downs” and have the ability to remove any
    • Manually set the app “down” (with messages for the users)
    • Manually set a tenant “down”
  • For diverted users
    • App Down
    • Tenant down while being updated
    • Tenant down by admin
    • Tenant is deleted

In the Example4 web site (hierarchical tenant design) and Example6 web site (single-level + sharding) I have a controller called StatusController that contains the actions / pages listed above. Please look at the Example4’s StatusController for an example of what you need to create.

NOTE: the divert pages’ URLs are hard-coded into the RedirectUsersViaStatusData class, while the controller’s name can be changed. If you want different URLs for the divert pages, then you need to copy the code and register your version of the RedirectUsersViaStatusData class.

3. Using the ISetRemoveStatus service to set / remove a “down” state

The SetRemoveStatus class contains the code to set, remove and display the “down” statuses in the FileStore distributed cache. There are several types of divert, and this service creates the cache key which defines the type of divert the user should be sent to.

The AppDown divert is easy because there is only one, but the tenant diverts are more complex because a) there are three divert types and b) a divert is unique to a tenant. Each “down” entry in the FileStore distributed cache has a unique key name, which allows you to have multiple “downs” at once. And in the case of a tenant “down” the FileStore entry’s value is the tenant key, which is used to detect if the user is linked to a tenant that is in a “down” state.

The ISetRemoveStatus service makes it easy to wrap your change / move code with setting a “down” at the start and removing the “down” at the end. The code below shows an example of how the ISetRemoveStatus service is used.

[HttpPost]
[ValidateAntiForgeryToken]
[HasPermission(Example6Permissions.MoveTenantDatabase)]
public async Task<IActionResult> MoveDatabase(
    ShardingSingleLevelTenantDto input)
{
    var removeDownAsync = await _upDownService
        .SetTenantDownWithDelayAsync(
              TenantDownVersions.Update, input.TenantId);
    var status = await _authTenantAdmin
        .MoveToDifferentDatabaseAsync(input.TenantId, 
              input.HasOwnDb, input.ConnectionName);
    await removeDownAsync();

    return status.HasErrors
        ? RedirectToAction(nameof(ErrorDisplay),
              new { errorMessage = status.GetAllErrors() })
        : RedirectToAction(nameof(Index), 
              new { message = status.Message });
}

As you can see, you define the type of tenant change via the TenantDownVersions enum. The ISetRemoveStatus service handles creating the key name for the actual “down” entry in the FileStore distributed cache. The “down” entry key string is designed to make finding / filtering the “down” values quick, so the key string is a bit complex. The figure below shows the various combinations of key names that a) define what type of divert it is, and b) provide a unique name for each tenant.

NOTE: For a tenant “down” entry the value is the tenant’s unique key, while for the AppDown the value contains a message, expected time, and UserId of the user that “downed” the whole app.

4. Understanding the “down for maintenance” middleware

The middleware code (see the RedirectUsersViaStatusData class) is called on every HTTP request, and its job is to quickly let a user through if there isn’t a “down” status that affects them. There are three stages in this middleware, covering each part of the filter. They are:

NOTE: I use the term admin user (see this link) for a user who is managing the application. These types of users a) have access to high-level admin features and b) aren’t linked to a tenant.

STAGE 1: Allowed URLs get through

The middleware lets two types of URLs through.

  • You can log in and log out. I added this after I “downed” the app and then reran it, at which point I couldn’t log in to remove the “App down”!
  • I allow access to the Status controller. This allows an admin user to manually turn off a “down” if anything goes wrong.

STAGE 2: Handle AppDown

The AppDown feature stops all users from using the application’s features, apart from the admin user that “downed” the app. This means that the admin user can check / fix the problem before removing the “down” on the app.

This feature is there for situations where the application’s software or data can’t be updated by the normal deploy / migrate approach. You will rarely need the AppDown feature, but it’s there for emergencies.

STAGE 3: Handle Tenant “down”

The main usage of the middleware is managing changes to a tenant’s data, and the code uses the start of the “down” key to detect which type of divert is needed. The three types are:

  • Tenant down while being updated
  • Tenant down by an admin user (known as tenant “manual down”)
  • Tenant is deleted (this stops users trying to access a tenant that doesn’t exist)

NOTE: An example of the code to take a tenant “down” while being updated can be found in section 3.

The middleware code isn’t complex, but it’s a bit hard to follow, so I have provided a flowchart to show how the three stages are handled. The important thing is that the middleware is very fast (via the FileStore distributed cache) at letting users through when no “down” is active.

NOTE: The RedirectUsersViaStatusData class has comments starting with the three STAGES shown in the flowchart.
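As a supplement to the flowchart, here is a pure-logic sketch of the decision the middleware makes on each request. This is my own simplification, not the library’s code: the key names, divert URLs and the dictionary standing in for the FileStore cache are all hypothetical, and the real RedirectUsersViaStatusData class works with HttpContext instead.

```csharp
using System;
using System.Collections.Generic;

//A pure-logic sketch of the middleware's three-stage decision.
//Key names and urls are hypothetical, not the library's real strings.
public static class DownForMaintenanceDecision
{
    public const string AppDownKey = "AppDown";
    public const string TenantDownPrefix = "TenantDown-";

    //Returns the url to divert to, or null to let the request through
    public static string CheckForDivert(
        string path, string userId, string userTenantKey,
        IReadOnlyDictionary<string, string> cachedDowns)
    {
        //STAGE 1: always let login/logout and Status urls through,
        //otherwise an admin couldn't log in or remove a "down"
        if (path.StartsWith("/Account") || path.StartsWith("/Status"))
            return null;

        //STAGE 2: an AppDown diverts everyone except the admin who set it
        if (cachedDowns.TryGetValue(AppDownKey, out var downedByUserId)
            && downedByUserId != userId)
            return "/Status/AppDown";

        //STAGE 3: a tenant "down" entry's value is the tenant key,
        //so it only diverts users linked to that tenant
        foreach (var entry in cachedDowns)
        {
            if (entry.Key.StartsWith(TenantDownPrefix)
                && entry.Value == userTenantKey)
                return "/Status/TenantDown";
        }

        return null; //no active "down" affects this user
    }
}
```

The real middleware also distinguishes the three tenant divert types via the key prefix, but the overall shape – allowed URLs first, then AppDown, then tenant “downs” – follows the three stages described above.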

5. Other things to consider when moving a tenant database

The tenant “Down for Maintenance” feature solves the most complex issue: ensuring that the tenant data isn’t accessed while the data is moved. But there are some extra issues you need to consider, which the AuthP library already has solutions for. The issues are:

  1. Updating the tenant user’s DataKey claims on a move
  2. An internal hierarchical move needs to “down” two parts of the tenant data
  3. The admin access to tenant data feature needs extra code in the middleware

5.1. Updating the tenant user’s DataKey claims on a move

If you are moving a database in a sharding multi-tenant application, or moving data in a hierarchical multi-tenant application, then the information the user uses to access the tenant data will change. Therefore, you MUST update that information.

In the AuthP library the user’s key to a tenant’s data is held in the user’s claims, which makes the user’s access very fast (see this section of an earlier article). But that means the tenant claims need to be updated when the DataKey changes, and AuthP has a feature that detects a change to the tenant DataKey parts and then makes sure all the logged-in users have their claims updated – see the AuthP “update claims on tenant change” documentation for how this works.

5.2. An internal hierarchical move needs to “down” two parts of the tenant data

The AuthP hierarchical multi-tenant implementation has a move feature where a section of the hierarchical data can be moved to another part of the hierarchy – known as the parent (see this example). In this case you need to “down” both the section to be moved and the section that it is moved to.

For this reason, the SetTenantDownWithDelayAsync method has an optional parameter called parentId. If the parentId is not zero, then it will also “down” the parent during the hierarchical move. The code below shows this, with the extra parentId parameter.

var removeDownAsync = await _upDownService
    .SetTenantDownWithDelayAsync(
        TenantDownVersions.Update, input.TenantId, 
        input.ParentId);
var status = await _authTenantAdmin
    .MoveHierarchicalTenantToAnotherParentAsync
        (input.TenantId, input.ParentId);
await removeDownAsync();

5.3. The “admin access to tenant” data feature needs extra code in the middleware

The AuthP library provides a feature that allows admin / support users (i.e. users not linked to a tenant) to temporarily gain access to a tenant’s data (see the admin access to tenant documentation for more information).

This is implemented by using a cookie containing the tenant DataKey, but the “down for maintenance” middleware doesn’t contain code to handle that. Giving an admin user a way to access the tenant’s data is useful if a problem occurs in a change / move, but the admin must be aware of any tenant change / move and not try to access that tenant (or turn off the “admin access to tenant” feature).

Conclusions

Back in 2015 I wrote an article about how to take an ASP.NET MVC5 web site “Down for maintenance” and now in 2022 this article provides a version for an ASP.NET Core application. The basic approach of using middleware is the same, but this latest approach also contains features to handle multi-tenant applications.

Both the older ASP.NET MVC5 version and the latest ASP.NET Core version are designed to be quick. This focus on high performance is because the code is run on every HTTP request. Both versions use a shared file to work across multiple instances of the web application, for instance when you use Azure’s scale-out. But the new version has much more complex needs, with tenant-level “down” features, which required a more sophisticated approach, handled by the FileStore distributed cache acting as a fast-read / slow-write database.

With this feature added to version 3.4.0 of the AuthP library you can safely manage tenants while users are accessing your multi-tenant application.

How to turn an ASP.NET Core appsettings.json file into a fast-read database

Last Updated: September 22, 2022 | Created: September 15, 2022

This article describes a way to create a database using the ASP.NET Core appsettings.json configuration feature. The big benefit of this approach is that the read is blisteringly fast, something like 1,000 times quicker than a SQL Server database query. The downside is that the write is relatively slow (e.g. >1 ms per write), so this approach is best for situations where you have far more reads than writes. I use this approach in ASP.NET Core applications where certain data is read on every HTTP request.

TL;DR; – Summary of this article

  • There is a way to use the ASP.NET Core Configuration feature to create a type of database that has a much faster (~1,000 times faster) read than a typical database, but a slower write (small database = ~2 times slower, 400kb database = ~10 times slower).
  • A good use for this type of database is where you have lots of reads and rare writes. I created this approach to handle a database query on every HTTP request.
  • This article describes the five steps to implement a database using ASP.NET Core Configuration feature.
  • There is a library called Net.DistributedFileStoreCache (shortened to FileStore cache) which provides a pre-built version of this approach. You might like to refer to these articles:
    • A .NET distributed cache with a ~25 nanosecond read time!
    • How to change/move databases on a live ASP.NET Core web site, which uses the FileStore cache as a database.

Setting the scene – why did I use an appsettings.json file as a database?

My AuthPermissions.AspNetCore library (shortened to AuthP) provides back-end code for building multi-tenant applications using ASP.NET Core and EF Core, and in version 3 of the AuthP library I added support for sharding. To implement sharding I needed to create a connection string that points to the database server+database on every HTTP request from a tenant user.

Also, there were a couple of extra features that I wanted my sharding implementation to support:

  • Should work with Azure’s SQL Server elastic pools. Azure elastic pooling provides a cost-effective way to have lots of databases (see this document on why this is useful).
  • Support geographically placed database servers to improve performance when you have users that are geographically spread out.
  • Good security: the connection strings contain the username / password of the servers, so for security reasons I store the connection strings in Azure.

The diagram below shows how the implementation of the sharding is changed to support these three extra features:

The sharding code gets the sharding data for a specific tenant, which contains the name of the connection string linked to a database server and the name of the database on that server. From these two parts it forms the composite connection string needed to access the tenant database. This isn’t a complex procedure, but it does need to read in the sharding data (name of connection string and name of the database) on every HTTP request from a tenant user.

I could have gone with a database access, but I’m really trying to make this library very fast, so I started to look at the ASP.NET Core Configuration features, i.e. appsettings.json and IOptions, because I know that the read of configuration data is really fast, as the data is cached by the ASP.NET Core configuration code.

Therefore, I created an appsettings.json-type file which I could update, and then used the Configuration IOptionsSnapshot<T> method to get the latest data from my settings file (see the Microsoft Options Pattern docs for more info on this type of read). The diagram below shows the whole process.

This might seem very complex, but it’s:

  1. Very fast: something like 1,000 times quicker than using a database access.
  2. Secure: The connection string isn’t in any of your code or your claims.
  3. Doesn’t use a database: This means your tenant databases won’t be slowed by lots of small queries.

The rest of this article describes the steps needed to create a generic fast-read database by using ASP.NET Core Configuration feature. In the steps I show examples from the sharding feature described above, with links to the code in the AuthP’s open-source repo. That way you have working code examples of how I used this approach.

Steps to turn an appsettings.json file into a fast-read database

NOTE: I refer to the json file which will be used as database as the database json file in this article.

The steps to implementing a database json file are:

  1. Create a json file to act as a database
  2. Make sure the database json file isn’t overwritten
  3. Register the database json file on startup
  4. Write to the database json file within a distributed lock
  5. Use IOptionsSnapshot<T> to read the database json file

1. Create a json file to act as a database

The first thing to do is work out what data you need to store in the database json file for your application. Once you have decided on the data you need, you must implement it using class(es) with properties that can be serialized / deserialized to json by the .NET Text.Json library.

For my sharding settings I have a List of a class called DatabaseInformation, which has four properties (all of type string) that define the specific settings of each sharding server+database. The code below shows the type of json the sharding settings file would contain.

{
  "ShardingDatabases": [
    {
      "Name": "ShardWest1",
      "DatabaseName": "West1",
      "ConnectionName": "WestServer",
      "DatabaseType": "SqlServer"
    },
    {
      "Name": "ShardWest2",
      //… rest of the content has been left out 
    }
  ]
}

NOTE: The name of the section / array used in your database json file must be unique across all the configuration json files.
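A minimal class that matches the json above could look like the code below. The property names come from the json shown; everything else is my stand-in – the library’s real DatabaseInformation class may contain extra members.

```csharp
//A minimal stand-in for the library's DatabaseInformation class
public class DatabaseInformation
{
    public string Name { get; set; }           //unique name of this entry
    public string DatabaseName { get; set; }   //database on the server
    public string ConnectionName { get; set; } //links to a connection string
    public string DatabaseType { get; set; }   //e.g. "SqlServer"
}
```

System.Text.Json can serialize / deserialize this class with no extra attributes, which is all the Configuration binding needs.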

2. Make sure the database json file isn’t overwritten

A normal appsettings.json file gets overwritten when an application is deployed. But because we want to use the json file as a database, you don’t want the file to be overwritten. I do two things to make sure the database json file isn’t overwritten.

First, I use a filename which includes the environment name, e.g. Debug, Staging, Production, so in my implementation the filename is $"shardingsettings.{EnvironmentName}.json". This means the filename used when developing the application in Debug mode can’t overwrite your Production database json file.

But the most important thing to do (and easy to forget) is to set the file’s “Copy to Output Directory” property to “Do not copy”. This stops the database json file being copied in your deployment. You can manually set this via the file’s properties, but I prefer to add an ItemGroup to the ASP.NET Core .csproj file, as shown below.

<ItemGroup>
	<Content Update="shardingsettings.Production.json">
		<CopyToOutputDirectory>Never</CopyToOutputDirectory>
	</Content>
</ItemGroup>

3. Register the database json file on startup

There are two parts to registering the database json file on startup. They are:

  1. Register the database json file to the ASP.NET Core’s Configuration
  2. Register your IOptions access via the Configure<T> service

3.1 Register the database json file to the ASP.NET Core’s Configuration

To register your database json file to be part of the Configuration you use the AddJsonFile method. The code below goes in the Program class and registers my shardingsettings file.

var shardingFileName = 
    $"shardingsettings.{builder.Environment.EnvironmentName}.json";
builder.Configuration.AddJsonFile(shardingFileName, 
      optional: true, reloadOnChange: true); 

You need to think about what happens when you first deploy using a database json file. In this case I set the optional parameter to true, which means the application can start without the file. If you use IOptionsSnapshot<T> (see the next subsection) it will return null if the database json file isn’t there or doesn’t have any json in it, but once you create the file the application will start tracking it and the IOptionsSnapshot<T> Value will be non-null.

NOTE: The other approach is to set the optional parameter to false and ensure that the json file exists. But if the optional parameter is false and the json file isn’t there, the application will fail on startup. This means you need to create the file on startup if no file exists.
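If you take the optional: false route, a small helper run before AddJsonFile can create a valid empty file. The code below is my own sketch, not library code – the helper name and the empty-array content are my assumptions:

```csharp
using System.IO;

public static class ShardingFileHelper
{
    //An empty array means "no sharding databases defined yet"
    public const string EmptyShardingJson =
        "{ \"ShardingDatabases\": [] }";

    //Create a valid, empty sharding settings file if none exists,
    //so AddJsonFile(..., optional: false, ...) won't fail on startup
    public static void EnsureJsonFileExists(string filePath)
    {
        if (!File.Exists(filePath))
            File.WriteAllText(filePath, EmptyShardingJson);
    }
}
```

In the Program class you would call it just before AddJsonFile, e.g. ShardingFileHelper.EnsureJsonFileExists(Path.Combine(builder.Environment.ContentRootPath, shardingFileName));.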

3.2 Register your IOptions access via the Configure<T> service

You must register a Configure<T> service, where T is the class which defines the json content of the database json file, to allow you to use the IOptions access to the data inside. This is done by registering a class that maps to a specific part of the configuration settings.

In my shardingsettings file I use a collection of data, so my options class looks like this:

public class ShardingSettingsOption
{
    public List<DatabaseInformation> 
          ShardingDatabases { get; set; }
} 

This is configured by the code below, which will look for a json array with the name ShardingDatabases in all the registered json files.

builder.Services.Configure<ShardingSettingsOption>
      (builder.Configuration);

4. Write to the database json file within a distributed lock

To update the data in the database json file you need to read in the current json, apply your change, and write it back out. This read->update->write process is fairly easy to implement – have a look at my AccessDatabaseInformation class for an example of what this looks like.

While the update part of the code is straightforward, we do need to handle simultaneous updates, because one update could overwrite another. This type of simultaneous update is rare, but because it can occur we need to handle it. This means we need to wrap the update process with some code that stops other updates from running until the current update has finished.

If you are only running one instance of your ASP.NET Core application, then you could use a .NET lock. But my library is designed for high-performance applications where multiple instances of the application run at the same time (Azure calls this scale out), so I need a distributed lock. I use an excellent library called DistributedLock.

The DistributedLock library uses a global resource, such as a database, to form a lock across all the running instances. The code below (adapted from the Acquire section of the DistributedLock Readme) shows how this is done.

var myDistributedLock = 
     new SqlDistributedLock(name, connectionString); 
using (myDistributedLock.Acquire())
{
	//Run the read->update->write process within this lock
} // this releases the lock
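Inside that lock, the read->update->write itself can be as simple as the sketch below. This is my own code using System.Text.Json’s JsonNode API, not the library’s – the real AccessDatabaseInformation class works with the typed option classes and adds error checking. Only the ShardingDatabases section name is taken from the json shown in step 1.

```csharp
using System.IO;
using System.Text.Json;
using System.Text.Json.Nodes;

public static class ShardingFileUpdater
{
    //Adds one entry to the "ShardingDatabases" array in the json file.
    //This must be called inside the distributed lock shown above.
    public static void AddDatabaseInfo(
        string jsonFilePath, JsonNode newDatabaseInfo)
    {
        //1. read: load the current json
        var root = JsonNode.Parse(File.ReadAllText(jsonFilePath))
                   ?? new JsonObject();

        //2. update: add the new entry to the array
        var databases = root["ShardingDatabases"]?.AsArray();
        if (databases == null)
            root["ShardingDatabases"] = databases = new JsonArray();
        databases.Add(newDatabaseInfo);

        //3. write: save it back - with reloadOnChange: true the
        //   Configuration will pick up the new content
        File.WriteAllText(jsonFilePath, root.ToJsonString(
            new JsonSerializerOptions { WriteIndented = true }));
    }
}
```

Because the whole read->update->write runs inside the lock, two simultaneous updates can’t overwrite each other’s changes.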

5. Use IOptionsSnapshot<T> to read the database json file

Finally, you can access the information in the database json file via ASP.NET Core’s IOptionsSnapshot<T> method. The code below is a simplified version of AuthP’s ShardingConnections constructor. When the ShardingConnections service is created, it uses the IOptionsSnapshot<T> method to get the data in the database json file, in this case my sharding settings file (see the Microsoft Options Pattern docs for more info).

As I showed in the setting-the-scene section, using the IOptionsSnapshot<T> method in the code below reads in the current sharding settings.

private readonly ShardingSettingsOption _shardingSettings;

public ShardingConnections(
     IOptionsSnapshot<ShardingSettingsOption>
     shardingSettingsAccessor, AuthPermissionsOptions options,
     … other parameters left out)
{
    _shardingSettings = shardingSettingsAccessor.Value
        //If no sharding settings file,
        //then use the default sharding setting
        //(the list property name is simplified here)
        ?? new ShardingSettingsOption
        {
            ShardingDatabases = new List<DatabaseInformation>
            {
                DatabaseInformation.FormDefaultDatabaseInfo(options)
            }
        };
}

Note that if the sharding settings file doesn’t exist, the IOptionsSnapshot<T> Value will be null and you need to decide what to do in that case. You could return the null, but often the best solution is to create an empty collection or similar. In AuthP’s sharding settings shown above, a new deployment always has a single, default DatabaseInformation, which is formed from the information in the multi-tenant setup options.
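For completeness, here is a sketch of how a separate json file can be wired into ASP.NET Core’s Configuration so that IOptionsSnapshot<T> can read it. The file name and the exact options-class shape are illustrative assumptions, not AuthP’s actual setup code.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Add the separate database json file to the Configuration.
// reloadOnChange: true means IOptionsSnapshot<T> picks up file edits
// on the next request's scope
builder.Configuration.AddJsonFile("shardingsettings.json",
    optional: true, reloadOnChange: true);

// Bind the file's content to the options class that
// IOptionsSnapshot<ShardingSettingsOption> will return
builder.Services.Configure<ShardingSettingsOption>(builder.Configuration);
```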

Conclusions

Creating a fast-read / slow-write database using ASP.NET Core’s Configuration / IOptionsSnapshot<T> might not be the first approach you would think of for creating a database, but it works well in situations where you want very fast reads and the data rarely changes. For instance, the AuthP sharding feature is a very good fit for this approach because it needs two read queries (one to get the sharding data and another to get the database server connection string) on every tenant user’s HTTP request, with only rare changes to the sharding data.

I also created a library called Net.DistributedFileStoreCache (shortened to FileStore cache) which implements a distributed cache. This library uses the same approach as ASP.NET Core’s Configuration / IOptionsSnapshot<T>, but uses .NET’s FileSystemWatcher class instead of IOptionsSnapshot<T>. The FileStore cache has a ~25 ns. read time and a write time of over 1 ms. – see the FileStore cache full performance figures here.

I use the FileStore cache to hold database information in the article “How to change / move databases on a live ASP.NET Core web site” because that feature needs multiple reads on every HTTP request. Using the FileStore cache removes the extra ~1 ms. that a database read might take.

If nothing else you have learnt more about ASP.NET Core’s Configuration / IOptionsSnapshot<T>, and you have learnt a new way to store data with a different performance profile from a normal database.

Happy coding.

A .NET distributed cache with a ~25 nanosecond read time!

Last Updated: October 14, 2022 | Created: August 8, 2022

This article describes a .NET distributed cache library called Net.DistributedFileStoreCache (referred to as FileStore cache in this article). The FileStore distributed cache library is very quick to read a cache value (known as a Get); for instance, the FileStore cache takes ~25 ns. to Get a cache value, while a SQL Server distributed cache would take at least 0.1 ms. – that means the FileStore cache is 4,000 times faster on Gets! Typically, you don’t need that speed, but I had a situation where I needed to read lots of cache values in every HTTP request, so I built this FileStore cache.

The other positive of the FileStore cache is it’s very easy to set up and use. The FileStore cache design is based on ASP.NET Core’s appsettings.json files, so it stores the cache entries in a json file which is shared by all the instances of the application. This also means you don’t have to set up, or pay for, a database for your caching 😊.

The downside of the FileStore cache is it’s slower than database caches when adding a new cache value (known as a Set). For instance, a Set to a cache holding 100 existing values takes ~1.5 ms., while a database would normally execute a Set in less than a millisecond. And it gets worse as the cache gets bigger, e.g. a Set to a cache holding 10,000 existing values takes ~9 ms. So, the FileStore cache library is useful when your application needs fast cache reads and only a few cache writes. Also, it’s NOT a good fit for large cached values, like an image, as the bigger the cache gets the slower the update will be.

The other (smaller) downside is the FileStore cache doesn’t implement the IDistributedCache’s SlidingExpiration feature, because that would make the read performance slow. But the FileStore cache does support the two AbsoluteExpiration versions.

Read on to understand how the FileStore cache works so you can decide if it would be useful to your application.

TL;DR; – Summary of this article

  • The FileStore cache library provides a .NET distributed cache with a very fast read – it only takes ~25 ns. to Get one entry in a cache containing 10,000 entries – but it is slow when you use Set / Remove (100 entries = ~1.3 ms., 1,000 entries = ~1.7 ms., 10,000 entries = ~7.9 ms.).
  • The FileStore distributed cache relies on a json file that all the instances of the application can access, just like your ASP.NET Core appsettings.json files (see Azure’s Scale-Out approach). This makes it easy to set up because it just relies on a json file in your application’s directory, so you don’t have to set up, or pay for, a database as you would for a SQL Server cache, Redis cache, or NCache cache.
  • The main FileStore cache implementation has a value of type string, mainly because the values are stored in a json file. This version is known as String.
  • There are three other versions which have different cache value types / interfaces; they convert their cache value to a string and then call the String version:
    • Version=Class: This inherits the String version and adds three method types that serialize a class to a json string. Useful when you want to store complex data in the cache.
    • Version=Bytes: This has a value of type byte[] with extra features over the IDistributedCache interface.
    • Version=IDistributedCache: The Bytes version accessed via the IDistributedCache interface. Useful if you have existing caching code using the IDistributedCache interface.
  • Net.DistributedFileStoreCache is an open-source library under the MIT license and a NuGet package. The documentation can be found in the GitHub wiki.

Useful links to general info on caching

Setting the scene – why and how I built the FileStore cache library.

The articles / documentation about software caches talk about improving performance and scalability. For instance, in my book “Entity Framework Core in Action” I use two different cache approaches to improve the performance of displaying and searching first 100,000 books and then ½ million books (see this article for the details and a link to a video).

But I want to talk about another use of caches, especially distributed caches, where they are used to manage application data that changes and that all the instances of the application need to access.

But recently I used the ASP.NET Core appsettings.json approach as a simple distributed cache when adding sharding to the multi-tenant part of the AuthPermissions.AspNetCore library. In this case I configured a separate json file containing the list of database information and used ASP.NET Core’s IOptionsSnapshot service to read it. This means I can update the json file and then get the latest data when I use IOptionsSnapshot – and IOptionsSnapshot is very quick, which is good as that data is accessed on every HTTP request.

IOptionsSnapshot is great, but it turns out there are some issues to fix if it’s going to properly implement a distributed cache. For instance, if two instances update the json file at the same time, then there would be a problem. The best result would be that one instance throws an IOException, but the worst result is that one of the changes is lost and you aren’t aware of it. This breaks the “coherent (consistent) across requests to multiple servers” rule of a distributed cache.

I tried to fix these problems using IOptionsSnapshot but I found a number of situations that I couldn’t fix. But the idea is good, so I went back to the basic .NET features needed to make a true distributed cache. The two .NET features that I used to make a true distributed cache with fast reads are:

  1. File locking: locking a file during an update means no other process can read or write the file until the updated file has been saved. If another process tries to access a locked file, it throws an exception, which the library catches before retrying the access after a delay. This makes sure that the FileStore cache implements the “coherent (consistent) across requests to multiple servers” rule.
  2. FileSystemWatcher: the FileSystemWatcher class will alert each instance of the application that the cache file has been changed, which means the (hidden) local static cache is out of date. The next time the application code accesses the FileStore cache it will read the file (within a lock) and update the local static cache before executing the application’s request.
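As an illustration of the file-locking step, the sketch below opens the cache file with FileShare.None and retries on IOException. The retry count and delay are arbitrary assumptions for this example; the library’s actual code differs.

```csharp
using System.IO;
using System.Threading;

public static class LockedFileReader
{
    //Reads the whole cache file while holding an exclusive lock,
    //retrying if another process currently has the file locked
    public static string ReadAllTextWithRetries(string filePath,
        int maxRetries = 20, int delayMs = 10)
    {
        for (var tries = 0; ; tries++)
        {
            try
            {
                //FileShare.None means no other process can read or
                //write the file while this stream is open
                using var stream = new FileStream(filePath,
                    FileMode.Open, FileAccess.Read, FileShare.None);
                using var reader = new StreamReader(stream);
                return reader.ReadToEnd();
            }
            catch (IOException) when (tries < maxRetries)
            {
                //Another process has the file locked – wait and retry
                Thread.Sleep(delayMs);
            }
        }
    }
}
```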

The diagram below shows the four steps that happen when the FileStore cache goes through an update (e.g. Set or Remove).

NOTE: Even if you have only one application instance the four steps are still run, as a single application can be running multiple processes at the same time.

DESIGN NOTES

  • The FileSystemWatcher class is known to create two events when a file is changed, and this can happen in the FileStore cache. For that reason, the library invalidates the local cache and then waits for another access to the FileStore cache: that way it’s less likely that the read of the FileStore cache file will run twice.
  • For performance reasons, when a cache entry’s AbsoluteExpiration has expired it returns null when accessed, but the cache entry is still in the FileStore cache file. Once any cache update (i.e. Set, Remove, or reset) is executed, the expired cache entries still present are removed. This keeps the read performance of the library high.
  • As stated earlier, the FileStore cache doesn’t implement the IDistributedCache’s SlidingExpiration, to keep the performance of the library high. I could have used a similar approach to the AbsoluteExpiration in the last design note, but it would be based only on local accesses.

The end result is that the Net.DistributedFileStoreCache library is a cache that meets the requirements of a distributed cache. It also contains some extra useful features over the default IDistributedCache interface (described later).

The four different FileStore cache interfaces

The primary .NET interface for distributed caches is IDistributedCache, which has only four method types: Set, Get, Remove and Refresh, plus async versions of each, with the key being a string and the value a byte[]. For instance, the sync Get method has the following signature:

public byte[] Get (string key);

But because I am storing the data in a json file it’s much better to have a value of type string, so I created DistributedFileStoreCacheString, which contains the primary code. But to make the library useful to users already using the IDistributedCache interface I added other versions where the value is byte[]. Also, I added a class that adds a few extra features to the primary …String version. The figure below gives you a summary of the four interfaces the FileStore cache library provides.

The table below shows which interface is registered, which is defined by the WhichVersion parameter in the FileStore’s options:

Version name      | Registered Interface             | Value type
String            | IDistributedFileStoreCacheString | string
Class             | IDistributedFileStoreCacheClass  | string + class
Bytes             | IDistributedFileStoreCacheBytes  | byte[]
IDistributedCache | IDistributedCache                | byte[]

Registering the distributed FileStore Cache

The AddDistributedFileStoreCache extension method within the FileStore cache library allows you to register the FileStore cache version you want to use as a service. It also sets up / finds the FileStore cache file name and location using your environment information, which is simpler (and cheaper) than using a distributed cache that relies on a database. The code below shows how you would do this in an ASP.NET Core Program startup.

builder.Services.AddDistributedFileStoreCache(options =>
    {
        options.WhichVersion = FileStoreCacheVersions.String;
    }, builder.Environment);

The registration code above registers a service with the interface IDistributedFileStoreCacheString. It also uses the ASP.NET Core environment class to create a different cache filename based on your EnvironmentName; for instance, the file name would be “FileStoreCacheFile.Development.json” in a development environment, and the file is stored in the directory defined by the environment’s ContentRootPath. This makes sure that your cache file is in the right place and that your development cache file doesn’t interfere with your production cache file.

NOTE: Go to the Register the FileStore Cache documentation for more information on the setup and the various options you might need.

The performance of the distributed FileStore Cache

I measured the performance of the FileStore cache String version with the excellent BenchmarkDotNet library. My performance tests cover both reads and writes on a cache that already has 100, 1,000 and 10,000 cached values in it. The performance tests were run on an Intel Core i9-9940X CPU 3.30GHz.

The full performance figures are available in the repo’s README file but here is a summary.

Read performance

  • Reading a single cache value took ~25 ns at all three cache sizes.
  • Getting a Dictionary of ALL the cache key/values took ~80 ns at all three cache sizes.

Write performance

The time taken to add a cache value goes up as the size of the cache grows – see the table below. This makes sense because, unlike a database, you are reading and then writing ALL the cache values into a file. The async versions are slower than the sync versions, but they release a thread while reading and writing.

Cache number / size    | 100 / 4.6 kb | 1,000 / 40.1 kb | 10,000 / 400.0 kb
AddKey (Set)           | 1.3 ms.      | 1.7 ms.         | 7.9 ms.
Add 100 (SetMany)      | 1.4 ms.      | 1.7 ms.         | 8.3 ms.
AddKeyAsync (SetAsync) | 1.7 ms.      | 2.3 ms.         | 9.0 ms.

NOTE: If you want to make sure your use of the FileStore cache is the best it can be, then I recommend you read the Tips on making your cache fast document.

Using the distributed FileStore Cache

If you are familiar with any of .NET’s distributed cache libraries, like the Distributed SQL Server cache, Distributed Redis cache, Distributed NCache cache etc., then the FileStore cache will feel the same, but without the hassle / cost of setting up a database.

The biggest difference, if you use the String version, is that the value is a string instead of a byte[]. But each FileStore version has some other differences, as detailed below:

  • All the versions apart from the IDistributedCache version have two extra method types:
    • GetAllKeyValues/Async, which returns all the key/values as a dictionary. I find this useful if I want to load multiple cache values, as it only takes ~85 ns.
    • ClearAll, which drops all the key/values. Useful for testing, and stackoverflow has many questions on how to clear out all the key/values in a cache.
  • To keep the read performance high, the FileStore cache doesn’t support SlidingExpiration. This has three effects:
    • All four versions will throw an exception if you try to set the SlidingExpiration option via the DistributedCacheEntryOptions parameter in Set/SetAsync.
    • The two FileStore cache versions that use the byte[] type for the value have Refresh / RefreshAsync methods, but calling them will throw an exception. This keeps these two versions compatible with the IDistributedCache interface.
    • The FileStore cache versions that use a string type for the value do not contain the Refresh / RefreshAsync methods.

The code below uses the String version and shows the use of the GetAllKeyValues method and the Set method.

public class HomeController : Controller
{
    private readonly IDistributedFileStoreCacheString _fsCache;

    public HomeController(IDistributedFileStoreCacheString fsCache)
    {
        _fsCache = fsCache;
    }

    public IActionResult Index()
    {
        var logs = new List<string>();
        var allKeyValues = _fsCache.GetAllKeyValues();
        foreach (var key in allKeyValues.Keys)
        {
            logs.Add($"Key = {key}, Value = {allKeyValues[key]}");
        }

        return View(logs);
    }

    public IActionResult AddCache()
    {
        return View();
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult AddCache(string key)
    {
        _fsCache.Set(key, DateTime.Now.ToString("s"), null);
        return RedirectToAction("Index");
    }
}

NOTE: The 1.1.0 version of FileStore cache adds the following extra commands:

  • SetMany / SetManyAsync which adds multiple entries in one go, which is quicker than individual Set / SetAsync calls.
  • The ClearAll method has optional parameters to add multiple entries into the cache after the cache has been cleared.

Looking at the IDistributedFileStoreCacheClass version

The reason for building the FileStore cache library was to improve my AuthPermissions.AspNetCore library and its handling of multi-tenant applications. In this library I have some data in the appsettings.json files which would be better in a distributed cache, but they have multiple parameters.

I saw this article which suggested some extension methods to turn a class into a json string before saving that json string in a cache, and I thought that would be really useful for me. Rather than creating extension methods I built a small class called DistributedFileStoreCacheClass that inherits the String FileStore cache version and adds the following method types (the first three have async versions):

NOTE: The <T> part has the following where clause: where T : class, new()

  • SetClass<T>(string key, T yourClass, …) – this serializes “yourClass” into a json string and saves the string to the FileStore cache.
  • SetManyClass<T>(List<KeyValuePair<string, T>> manyEntries, …) – this adds multiple entries in one go, which is quicker than individual SetClass calls.
  • T? GetClass<T>(string key) – this reads the string from the String FileStore cache and deserializes the string back to a class.
  • T? GetClassFromString<T>(string? jsonString) – this is useful if you want to obtain a class from a cache entry obtained by the GetAllKeyValues/Async method.

NOTE: Be aware that the cache does not hold the class type in the cache. It is up to you to use the same <T> on the GetClass as you used in the SetClass method.

Here is an example from my unit tests to show how it works:

[Fact]
public void DistributedFileStoreCacheSetClass_JsonClass_Example()
{
    //SETUP
    _distributedCache.ClearAll();

    //ATTEMPT
    _distributedCache.SetClass("test", new JsonClass2 { MyInt = 3, 
        MyClass1 = new JsonClass1 { MyInt = 1, MyString = "Hello" } }, null);

    //VERIFY
    var jsonClass2 = _distributedCache.GetClass<JsonClass2>("test");
    jsonClass2.ShouldBeType<JsonClass2>();
    jsonClass2.ShouldNotBeNull();
    //… other tests left out
}

The string below shows how your class is turned into a string to save in the cache.

"{\"MyClass1\":{\"MyInt\":1,\"MyString\":\"Hello\"},\"MyInt\":3}"

The byte[] value versions of the distributed FileStore cache

Because many existing usages of a distributed cache use byte[] for the value, I created two versions that support byte[] for the value; these convert the byte[] value into a string and then call the String version to access the FileStore cache.
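A plausible way to hold a byte[] value in a json-backed string cache is a Base64 conversion, sketched below. This illustrates the idea of converting between the two value types; it is not necessarily the exact encoding the library uses.

```csharp
using System;

public static class ByteValueConverter
{
    //Convert a byte[] cache value to a string so it can be
    //stored in the json-backed String version of the cache
    public static string ToCacheString(byte[] value)
        => Convert.ToBase64String(value);

    //Convert the stored string back to the original byte[]
    public static byte[] FromCacheString(string stored)
        => Convert.FromBase64String(stored);
}
```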

There is one DistributedFileStoreCacheBytes class, but it can be registered against either the IDistributedCache interface or the IDistributedFileStoreCacheBytes interface, which contains the extra methods GetAllKeyValues/Async and ClearAll.

Conclusions

I needed a distributed cache with a very fast read because I wanted to read lots of cache values on every HTTP request. A database-based distributed cache could have taken at least a millisecond or more, but I remembered how fast ASP.NET Core’s IOptionsSnapshot<T> was when it returned data from the appsettings.json file.

After quite a bit of work, I found I couldn’t build a distributed cache using ASP.NET Core’s IOptionsSnapshot<T>. But it did guide me to a design that a) implements a true distributed cache, b) is blisteringly fast on reads, and c) is very easy to set up. From my point of view the slower update of the cache (e.g. Set, Remove) is a pity, but in my situation adding a new cache value happens very infrequently.

In the end I put in extra features, like the byte[] value versions, because there must be users already using the IDistributedCache interface. To my mind the FileStore cache’s simple setup and lack of extra costs (for Redis, SQL Server, etc.) may be more attractive than its super-fast read! Good luck, and do let me know how you get on with this library if you use it.

Happy coding.

Three ways to securely add new users to an application using the AuthP library

Last Updated: October 20, 2022 | Created: June 17, 2022

The AuthPermissions.AspNetCore library (shortened to AuthP in this article) provides various features that can help you build certain ASP.NET Core applications. The main features are better Roles authorization and back-end code for creating multi-tenant applications (see this series). This article introduces new services in version 3.3.0 of the AuthP library that make it easier for you to add new users and tenants to your application.

This article is part of the series about the AuthP library. The list below provides links to the other articles in this series:

  1. Finally, a library that improves role authorization in ASP.NET Core
  2. The database: Using a DataKey to only show data for users in their tenant
  3. Administration: different ways to add and control tenants and users
  4. Versioning your app: Creating different versions to maximise your profits
  5. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  6. Advanced techniques around ASP.NET Core Users and their claims
  7. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core
  8. VIDEO: Introduction to multi-tenant applications (ASP.NET Community standup)
  9. Three ways to securely add new users to an application using the AuthP library (this article)
  10. How to take an ASP.NET Core web site “Down for maintenance”
  11. Three ways to refresh the claims of a logged-in user

TL;DR; – Summary of this article

  • The AuthP library adds extra data (e.g. Roles, Tenants) to an ASP.NET Core user’s claims. To do this we need to link an ASP.NET Core user to the AuthP user which contains the extra data.
  • This article covers three ways to add a new AuthP user, with its extra data, linked to an ASP.NET Core user. The three approaches are:
    • Syncing users: compare all the ASP.NET Core users against the AuthP users and show the differences to an admin user, who decides what to do.
    • Invite a user: send an invited user a url containing encrypted data holding the user’s email and AuthP data. When clicked, the new user is added to the application.
    • Sign up / versioning: allow a user to create a new tenant for their company, with the option of selecting the version that fits the company’s needs.
  • AuthAdapter: the “invite a user” and “sign up / versioning” features need a way to add a new user to the application. These two features rely on a generic AuthAdapter that links the ASP.NET Core user to a new AuthP user. There are two implementations of the AuthAdapter interface to choose from, or you can create your own version of the same interface.

Setting the scene – how the AuthP makes it simpler to manage an application

From my experience of building a large multi-tenant application for a client, I know how much administering of tenants and users there is. As I worked on the AuthP library I found more and more ways to move admin tasks away from the application’s support team (referred to as app admin) and out to the users of the application, especially with multi-tenant applications.

One early feature was to allow a user within a tenant (known as a tenant admin user) to manage the other users in the same tenant. This needed careful design to isolate a tenant admin to just their tenant’s users and limit what features they can change. But it was worth the work, as tenant users get a quicker/better response from their tenant admin user and it also reduces the load on the app admin team.

After the basics of the tenant admin user were implemented, I started looking for more self-service features, where the tenant user or tenant admin user could handle admin services on their tenant. In multi-tenant article 2 I showed a way to invite a user to join their tenant, and in multi-tenant article 3 I showed a way to allow a new user to “sign up” to get a new tenant, with the extra feature of having different versions of a tenant (for versions think Visual Studio with its Community, Pro, and Enterprise versions).

The “invite a user” and “sign up / versioning” features significantly reduce the workload on the app admin team by allowing a tenant user to invite new users and automating the setup of a new tenant. But the downside is that the original versions of these features were hand-coded in Example3 in the AuthP repo and would take some work to change for a different type of multi-tenant or authentication provider. Therefore, the focus of the version 3.3.0 release of the AuthP library is to:

  • Provide a generic version of the “invite a user” feature which works with any type of application.
  • Provide a generic version of the “sign up / versioning” feature which works with any tenant type, i.e. single-level, hierarchical, sharding / not sharding.

With that background, the rest of this article looks at how you can securely add new users to an application using the AuthP library.

Looking at the three different ways to add a new user using the AuthP library

The AuthP library is designed to help applications where users have to log in before they can use certain features. For instance, in a multi-tenant application users must log in to be able to access the data in their tenant. And when a new ASP.NET Core user is added we also need to link the ASP.NET Core user to the extra data held by AuthP.

This means we need ways to handle new ASP.NET Core users being added, and the three main ways are shown in the figure below, with a brief list of pros/cons.

NOTE: The Green rectangles are ASP.NET Core authentication code, the brown rectangles are AuthP code and the mixed brown / green rectangles represent the AuthAdapter which is a mixture of ASP.NET Core authentication handler and AuthP code.

The following sections describe each approach with their pros and cons. Each section also provides links to the documentation for each of the approaches.

1. Sync Users

The “Sync users” approach allows an app admin to compare the list of users in the ASP.NET Core authentication provider against the list of AuthP users, and it will show any new, changed or deleted users – see the screenshot below for an example of what this would look like.

NOTE: See the documentation on the sync users feature for a fuller explanation on how to set it up and the ways you can add this to your application.

The Pros of the “Sync users” approach

The “Sync users” approach was the simplest for me to build, so it came out in the first version. It’s good when you have a set of users that doesn’t change much. However, with the new generic “invite a user” and “sign up / versioning” features there are other ways to achieve this.

The Cons of the “Sync users” approach

The biggest con from my perspective is the extra work for the app admin. Not only does the app admin have to add each user, they also have to set the AuthP Roles / tenant for the user, and the sync user process can’t provide that information. In a big multi-tenant application that would be very difficult to manage with this approach.

The other limiting part of this approach is that it can only work with authentication providers where you can access a list of their valid users. This means this approach won’t work with social-based authentication providers, e.g. Google, Twitter.

2. Invite a user

The new “invite user” service allows an app admin or a tenant admin to create a secure invite for a user with a given email or username. While creating the invite, the admin user can (and must) define the AuthP Roles and Tenant that the new user should have once they log in. All of this data is encrypted and added to the url, which goes to the AcceptInvite page.

The generated url should be sent to the invited user, and when they click the url they are asked for their email (or username), which is checked against the email in the encrypted data. If everything is OK, the new user is registered as a valid user on the application with the Roles/Tenant settings found in the encrypted data provided by the inviter.

The “invite a user” can be used to add a new ASP.NET Core user, but if a user with the same email / username is already in your chosen authentication provider, then it will just create the AuthP User linked to the existing ASP.NET Core user.

The screenshot shows the url created by the person inviting the user. As you can see, the invite is for the user me@gmail.com, who will get access to the “4U Inc.” tenant. You can try this yourself by cloning the AuthP repo and running the Example3 ASP.NET Core project, then logging in as a tenant admin (e.g. admin@4uInc.com) and clicking the “invite user” nav item.

NOTE: The SupportCode -> Invite new user service documentation describes how to setup and use “invite user” service in detail.

The Pros of the “invite a user” approach

This is better than “sync users” for two reasons. Firstly, the ASP.NET Core user data and the AuthP data are set up at the same time – that stops the “sync users” problem where the admin user has to refer to other information to properly set up the AuthP data.

The second, and more powerful, reason is that a tenant admin user can safely create an invite for a user to join their tenant. This means you can delegate the adding of a new user to either an app admin or a tenant admin.

NOTE: The version 3.3.0 “invite a user” service contains lots of checks to make sure that a tenant admin user can only set Roles and a Tenant that the tenant admin user can access.

The Cons of the “invite a user” approach

There aren’t really any downsides to the “invite a user” approach now that a generic service is available. The AuthP version 3.3.0 service works with normal applications and all multi-tenant versions.

3. Multi-tenant: “sign up / versioning” to create a new tenant

This feature consists of two parts:

  • The “versioning” part provides a way to create different levels of features (and prices) for the user to choose from. The versioning part is optional, and you can have the same set of features for all tenants if you want.
  • The “sign up” part allows a new user to create a new tenant and links the new user to the newly created tenant. Once the tenant is created with the user’s chosen version features, the new user is created as a valid user linked to the new tenant.

Here is a screenshot showing the page where the “sign up” user can pick the version that fits their company / organisation. You can try this yourself by cloning the AuthP repo and running the Example3 ASP.NET Core project.

NOTE: The SupportCode -> “sign up / versioning” documentation describes how to setup and use “sign up / versioning” service in detail.

If your application is using AuthP’s sharding feature, then the “sign up” part needs a way to select the correct database and/or database server for the new tenant. There are lots of things to consider when selecting a database (sharding / hybrid, geographically located database servers, etc.), so the AuthP library can’t give you a generic service. Instead it provides an interface: you need to write a class that provides this service and register it on startup. The docs for the IGetDatabaseForNewTenant explain what you have to do and include a demo service to look at.

The Pros of the “sign up / versioning” approach

The “sign up / versioning” feature contains two parts, both of which are very useful.

The “sign up” part makes it easy for a company / organization to sign up to the multi-tenant application. This type of self-service provisioning is used by all the big multi-tenant / SaaS sites because it reduces the barrier to trying out their offerings. It also reduces the load on the app admin team.

The “versioning” part is also used by all the big multi-tenant / SaaS sites because it increases the number of companies / organizations that sign up. That’s because potential tenants can sign up to the lower-cost versions, which are more affordable. At the same time, people who want the extra features will pay extra, thus increasing the profits of the multi-tenant application.

NOTE: You should be aware that AuthP makes it easy to change a tenant’s version because the tenant’s features are mainly controlled by AuthP’s Tenant Roles. The only complex change is to switch the tenant’s data between a shared database and its own database (Sharding). AuthP provides methods that manage the database type, but you have to write code to move the tenant data between databases, which is fairly complex.

The Cons of the “sign up / versioning” approach

The only downside is that the “sign up / versioning” is specific to multi-tenant applications.

Linking these two features to the ASP.NET Core’s authentication handlers

Like the whole of the AuthP library, the “Invite new user” and the “sign up / versioning” features rely on ASP.NET Core’s authentication handlers. Both services add a new user as part of the process. While I could have written the code to work with one type of authentication handler, that wouldn’t make these services generic.

My solution was to create an interface called IAddNewUserManager (shown as an AuthAdapter in the earlier diagram) which provides a common add-user service that can (potentially) work with any ASP.NET Core authentication handler. Both the “Invite new user” and the “sign up / versioning” code use this interface.

In version 3.3.0 of the AuthP library there are only two implementations of the IAddNewUserManager interface that the “Invite new user” and “sign up / versioning” features can use.

NOTE: The SupportCode -> Add New User adapter documentation describes in more detail how this adapter over the various ASP.NET Core authentication handlers provides a common interface to use within applications that use the AuthP library.

Conclusions

The “sync user” approach to linking the ASP.NET Core’s user to the AuthP User is the easiest to build, but it’s not that good in a multi-tenant application as it creates a LOT of work for the app admin team. And it’s the job of the developer to think about things like the (over)load of the app admin team and come up with solutions.

Version 2.0.0 of the AuthP library brought in tenant roles, which allowed tenants to have different features, and in version 2.3.0 examples of the features called “invite a user” and “sign up / versioning” were added to the Example 3 application. These features are a great help, but the downside of the examples is that they didn’t cover all the possible options.

Version 3.3.0 of AuthP rewrote the two examples and turned them into generic services which cover all possible situations. It also adds the “invite a user” and “sign up / versioning” features to the AuthP library itself, making it easier for developers to use them. This meant a few other services had to be created to support these features, with the AuthAdapter being the main one.

Happy coding.

Part6: Using sharding to build multi-tenant apps using ASP.NET Core and EF Core

Last Updated: October 20, 2022 | Created: April 5, 2022

This article describes how to use EF Core and ASP.NET Core to create a multi-tenant application where each different group (known as a tenant) has its own database – this is known as sharding. A second section describes how to build a hybrid multi-tenant design which supports both multiple tenants in one database and tenants having a database just for their data, i.e. sharding.

NOTE: Multi-tenant applications are also referred to as SaaS (Software as a Service). In these articles I use the term multi-tenant as SaaS can also cover having one application + database for each company / tenant.

This article is part of the series that covers .NET multi-tenant applications in general. The designs shown in this article come from the AuthPermissions.AspNetCore library (shortened to AuthP in these articles), which provides pre-built (and tested) code to help you build multi-tenant apps using ASP.NET Core and EF Core. The other articles in the “Building ASP.NET Core and EF Core multi-tenant apps” series are:

  1. The database: Using a DataKey to only show data for users in their tenant
  2. Administration: different ways to add and control tenants and users
  3. Versioning your app: Creating different versions to maximise your profits
  4. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  5. Advanced techniques around ASP.NET Core Users and their claims
  6. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core (this article)
  7. Three ways to securely add new users to an application using the AuthP library
  8. How to take an ASP.NET Core web site “Down for maintenance”
  9. Three ways to refresh the claims of a logged-in user

TL;DR; – Summary of this article

  • Multi-tenant applications provide a service to many tenants. Each tenant has their own set of data that is private to them.
  • Sharding is the name given to multi-tenant applications where each different tenant has its own database. The other approach is to put all the tenants in one database where each tenant has a unique key which makes sure only data marked with the tenant’s key is returned (known as shared-database).
  • The good parts of the sharding approach are that it’s faster and the data is more secure than the shared-database approach, but sharding comes with the price of having many databases.
  • There is a third, hybrid approach which supports both shared-database and sharding at the same time – this allows you to manage the cost / performance by putting a group of tenants with small data / usage into one database while tenants with high data / usage can have their own database.
  • I detail 7 steps to create a sharding multi-tenant application using EF Core and ASP.NET Core. The steps are a mixture of ASP.NET Core code and EF Core’s code.
  • Then I detail 8 steps (3 of which are the same as the sharding approach) that implement the hybrid approach. This implementation has been added in version 3 of the AuthP library and there is an example called Example6.SingleLevelSharding which you can run.

Setting the scene – what is sharding and why is it useful?

Wikipedia describes database sharding like this: “A database shard, or simply a shard, is a horizontal partition of data in a database or search engine. Each shard is held on a separate database server instance, to spread load”. I emphasized the last sentence because that’s the key part – a multi-tenant / SaaS application will have a database for each separate tenant. The alternative to sharding is to store all the data in one database, with each tenant’s data differentiated by a unique key for that tenant – I will refer to this as the shared-database approach, while sharding is the dedicated-database approach (or just sharding, which is shorter).

There are a number of pros / cons to each approach, but the biggest is the cost versus performance issue. A sharding approach should be quicker than the shared-database approach, but sharding’s performance comes from having lots of databases, which costs more money. The other pro of the sharding approach is that each tenant’s data is more isolated, as each tenant has its own database.

NOTE: This Microsoft document describes some other differences between sharding and shared-database, plus a comparison of three ways to provide a service to many tenants.

There is a third, hybrid approach that allows you to balance the cost / performance. This design uses sharding for tenants that have a lot of data / demand, while tenants with less data / demand use the shared-database approach. The benefit of this approach is you can offer smaller tenants a lower price by putting them in a shared database, while tenants that have higher demands will pay for a dedicated database.

Some years ago, I was asked to design and build a multi-tenant application with thousands of tenants, ranging from tenants with less than a hundred users to a few large tenants with thousands of users. My client wanted a hybrid approach to cover this wide range of tenant types, which is why I have added both sharding and the hybrid approach to my AuthP library.

Here is a diagram to show all three approaches with a summary of their pros and cons.

Finally, I should add that Azure has a useful feature called SQL Server Elastic Pools which can help with the cost / performance by providing an overall level of database performance which is shared across all the databases in the pool. I will talk more about that in the next article.

How I implemented sharding in version 3 of my AuthP library

In one of my tweets about building multi-tenant applications a number of people said they used sharding. AuthP version 2 only supported the shared-database approach, but this feedback made me focus the version 3 release of the library on implementing sharding for multi-tenant applications.

In addition, the AuthP sharding feature is designed to support the hybrid approach as well, which means you can use the shared-database approach and / or dedicated-database (sharding) approach. As I have already explained, this allows you to balance the cost / performance for each tenant if you want to.

I have split the description of the EF Core / ASP.NET Core code into two parts:

  1. Implement a sharding-only multi-tenant application.
  2. Implement a hybrid multi-tenant application.

1. Implement a sharding-only multi-tenant application

The figure below shows what the sharding-only design would look like, with a database containing information about the users and tenants (top left) and a database for each tenant (bottom).

Here are the steps to implement sharding for a multi-tenant application:

  1. Decide on how to manage databases, especially in production
  2. Hold information of the tenant and its users in admin database
  3. When a user logs in, then add a ConnectionName claim to their claims
  4. Provide a service to convert the user’s ConnectionName claim to a database connection
  5. Provide the connection string to the application’s DbContext
  6. Use EF Core’s SetConnectionString method to set the database connection
  7. Migrate the tenant database if not used before

1. Decide on how to manage databases, especially in production

UPDATE: This section has been updated and matches AuthP version 3.2.0

When I looked at the issue of deploying a multi-tenant application that uses multiple databases (and possibly geographically distributed database servers) I came up with a way that ensures the private data, e.g. the username / password for the database server, is hidden, while allowing an admin user to add new databases. This approach splits the problem into two parts:

  • How to define the database servers
  • How to define each database on a server

1a. How to define the database servers

It’s the database servers that hold the private data, and as such they need to be managed carefully. Thankfully ASP.NET Core and Azure have excellent ways (app secrets or Azure app configuration) to keep the connection strings private.

With this in mind, I store the connection strings to the servers I want to use without the database name (plus the DefaultConnection where AuthP stores its data) – see below:

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=…, username/password, Database=XXX.",
    "WestCoastServer": "Server=… username/password (no database name)",
    "CentralServer": "Server=… username/password (no database name)",
    "EastCoastServer": "Server=… username/password (no database name)",
    …etc.
  },
//… other parts left out
}

You can either use ASP.NET Core’s secrets or, if you are using Azure, I recommend the Azure App Service configuration, which can be set up via Visual Studio’s Publish feature. Either of these approaches ensures that the private parts of the connection string are kept secret.

NOTE: Do NOT use Azure’s Key Vault as it has a limit of 200 requests / second and in a sharding design the connection string is accessed on every HTTP request that accesses a database.

1b. How to define the databases

I decided that each database would be defined by four properties, known collectively as the database information.

  • Name: This name is used as the reference to the database information.
  • ConnectionName: This contains the name of the connection string in the “ConnectionStrings” section that holds the information for a database server.
  • DatabaseName: This holds the name of the database.
  • DatabaseType: This holds the database type, e.g. SqlServer, Postgres.

The database information for each database is then stored in a file called shardingsettings.json, which is registered with ASP.NET Core Configuration – see an example file below.

{
  "ShardingDatabases": [
    {
      "Name": "DatabaseWest1",
      "DatabaseName": "asp.net-Example6.Sharding_West1",
      "ConnectionName": "WestCoastServer",
      "DatabaseType": "SqlServer"
    },
    {
      "Name": "DatabaseCentral1",
      "DatabaseName": "asp.net-Example6.Sharding_Central1",
      "ConnectionName": "CentralServer",
      "DatabaseType": "SqlServer"
    },
    // other entries left out
  ]
}
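A settings file like this can be bound to a simple POCO via the Options pattern. A minimal sketch (the class name is illustrative, but the property names match the JSON above):

```csharp
// Illustrative POCO for one entry in the "ShardingDatabases" array above.
// Bind a List<DatabaseInformation> to the "ShardingDatabases" section and
// read it via IOptionsSnapshot<T> so file edits are picked up at runtime.
public class DatabaseInformation
{
    public string Name { get; set; }           // reference name for this entry
    public string DatabaseName { get; set; }   // name of the database itself
    public string ConnectionName { get; set; } // key into "ConnectionStrings"
    public string DatabaseType { get; set; }   // e.g. "SqlServer", "Postgres"
}
```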

The tenant holds the database information’s Name, and from this the code does the following:

  1. Read the database information from the shardingsettings.json configuration.
  2. Read the connection string from the appsettings file using the ConnectionName from the database information loaded in step 1.
  3. The ShardingConnections service then adds the DatabaseName from the database information into the connection string to provide the full connection string to pass to the tenant application’s DbContext.

NOTE: I use the IOptionsSnapshot<T> service when accessing both the appsettings file and the shardingsettings.json file. This means that it gets the latest information from both configuration sources.
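The combining in step 3 can be sketched with the BCL’s DbConnectionStringBuilder (the class and method names below are illustrative, not AuthP code): it takes the server-only connection string looked up via the ConnectionName and adds the DatabaseName to it.

```csharp
using System.Data.Common;

// Illustrative sketch of step 3: combine a server-only connection string
// with the DatabaseName from the database information.
public static class TenantConnectionBuilder
{
    public static string BuildFullConnectionString(
        string serverConnectionString, string databaseName)
    {
        var builder = new DbConnectionStringBuilder
        {
            ConnectionString = serverConnectionString // parses the server-only string
        };
        builder["Database"] = databaseName; // add the database name
        return builder.ConnectionString;    // full connection string for the DbContext
    }
}
```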

2. Hold information of the tenant and its users in admin database

In a multi-tenant application there are tenants, with many users linked to each tenant. Typically, the tenant will have a unique key, often a primary key provided by the admin database, and a name, e.g. “Company XYZ”; in this design it also contains the database information’s Name.

ASP.NET Core handles the authentication of a user and provides a unique id, often a string, for each user. You need to add extra data to link a user’s id to a tenant – one simple way would be to add a collection of user ids using a one-to-many relationship.

NOTE: The AuthP library has built-in AuthUser and Tenant classes, with admin code to manage these and link them to the ASP.NET Core authentication handler. See the AuthP documentation page called “Multi-tenant explained” for how that works.
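As a minimal illustration of that data model (the class and property names are invented for this sketch; they are not AuthP’s actual classes):

```csharp
using System.Collections.Generic;

// Illustrative admin-database entities: a tenant that records which
// database information entry it uses, and users linked one-to-many.
public class TenantSketch
{
    public int TenantId { get; set; }            // primary key in the admin database
    public string TenantName { get; set; }       // e.g. "Company XYZ"
    public string DatabaseInfoName { get; set; } // Name of the database information entry
    public List<UserLinkSketch> Users { get; set; } = new List<UserLinkSketch>();
}

public class UserLinkSketch
{
    public string UserId { get; set; } // id supplied by the authentication handler
    public int TenantId { get; set; }  // foreign key back to the tenant
}
```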

3. When a user logs in, then add a ConnectionName claim to their claims

When a user logs in, you need to detect if they are linked to a tenant, and if so add a claim containing the connection string name. This requires you to intercept the login process and use the user’s id to obtain the connection string name held in the tenant admin class.

Intercepting the login process depends on the ASP.NET Core authentication handler you are using – the article called “Advanced techniques around ASP.NET Core Users and their claims” provides information on the main authentication handlers.

NOTE: The AuthP library automatically adds the ConnectionName claim if sharding is turned on.
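A minimal sketch of the claim-adding step, using only System.Security.Claims (the claim type string and the class / method names are assumptions for illustration, not AuthP’s code):

```csharp
using System.Security.Claims;

// Illustrative: wrap the logged-in user's identity with an extra
// ConnectionName claim looked up from the tenant admin data.
public static class ConnectionClaimSketch
{
    public const string ConnectionNameType = "ConnectionName"; // assumed claim type

    public static ClaimsPrincipal AddConnectionNameClaim(
        ClaimsPrincipal user, string connectionName)
    {
        var identity = new ClaimsIdentity(user.Claims, "SketchAuth");
        identity.AddClaim(new Claim(ConnectionNameType, connectionName));
        return new ClaimsPrincipal(identity);
    }
}
```

In a real application this would run inside the authentication handler’s claims-building step, e.g. a claims transformation or a custom claims principal factory.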

4. Provide a service to convert the user’s ConnectionName claim to a database connection

Having decided to use the connection string name in the claim, you need a way to access the “ConnectionStrings” object in the appsettings file. At the same time, I want to be able to add new connection strings while the application is running. This means I can’t use the normal IOptions<T> service, but have to use the IOptionsSnapshot<T> option, which reads the current data in the appsettings file.

I built a service called ShardingConnections which uses IOptionsSnapshot<T> to get the latest “ConnectionStrings”. The code below shows the specific parts of this service that get the connection string from a connection name (thanks to ferarias’ answer to this stack overflow question). This service should be set up as a scoped service.

public class ConnectionStringsOption : Dictionary<string, string> { }
public class ShardingConnections : IShardingConnections
{
    private readonly ConnectionStringsOption _connectionDict;

    public ShardingConnections(
        IOptionsSnapshot<ConnectionStringsOption> optionsAccessor)
    {
        _connectionDict = optionsAccessor.Value;
    }

    public string GetNamedConnectionString(string connectionName)
    {
        return _connectionDict.ContainsKey(connectionName) 
        ? _connectionDict[connectionName]
        : null;
    }
    //Other methods not shown
}

NOTE: The full service contains extra methods useful for the admin when assigning a connection name to a tenant.
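For completeness, here is a sketch of the startup wiring this service needs (assuming the ConnectionStringsOption and ShardingConnections classes shown above; the exact registrations in your app may differ):

```csharp
// Sketch of the Program.cs registrations (ASP.NET Core), assuming the
// ConnectionStringsOption and ShardingConnections classes shown above.
builder.Services.Configure<ConnectionStringsOption>(
    builder.Configuration.GetSection("ConnectionStrings"));
builder.Services.AddScoped<IShardingConnections, ShardingConnections>();
builder.Services.AddHttpContextAccessor(); // needed later to read the user's claims
```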

5. Provide the connection string to the application’s DbContext

You need to inject a service into the tenant application’s DbContext which contains the connection string for the current user. To do that we need two parts:

  • Get the ConnectionName claim from the current user
  • Use the ShardingConnections service to get the connection string

The code below shows a scoped service that uses the IHttpContextAccessor to get the logged-in user (if present) with its claims. From this it can obtain the ConnectionName claim and pass the connection string name to the ShardingConnections’ GetNamedConnectionString method. This returns the connection string that the DbContext needs.

public class GetShardingData : IGetShardingDataFromUser
{
    public GetShardingData(IHttpContextAccessor accessor,
        IShardingConnections connectionService)
    {
        var connectionStringName = accessor.HttpContext?
             .User?.Claims.SingleOrDefault(x =>
                   x.Type == PermissionConstants.ConnectionNameType)?.Value;
        if (connectionStringName != null)
            ConnectionString = connectionService
                .GetNamedConnectionString(connectionStringName);
    }

    public string ConnectionString { get; }
}

6. Use EF Core’s SetConnectionString method to set the database connection

The tenant application’s DbContext needs to link to the database that has been provided by the GetShardingData service in the last section. EF Core’s SetConnectionString method (added in EF Core 5) sets the connection string to be used for this instance of the DbContext. The code below shows the constructor on the DbContext that handles the tenant’s data.

public class ShardingSingleDbContext : DbContext
{
    public ShardingSingleDbContext(
        DbContextOptions<ShardingSingleDbContext> options,
        IGetShardingDataFromUser shardingData)
        : base(options)
    {
        Database.SetConnectionString
           (shardingData.ConnectionString);
    }
    //… other parts left out
}

NOTE: The EF Core team says using SetConnectionString doesn’t have much of an overhead, so sharding shouldn’t be slowed down by changing databases. You may also be interested in how you would use DbContext pooling when building multi-tenant applications.

7. Migrate a database if not used before

At some point you need to migrate a database that hasn’t been used before. In the AuthP library the creation of a new tenant causes a call to a method, in a service written by the developer, that follows the ITenantChangeService interface. When working with sharding I added the following code to check that the database exists and migrate it if it has no tables in it. It returns an error string if the database isn’t found, or null if it finished successfully.

NOTE: This stack overflow question has lots of useful ways to detect if a database exists and so on.

private static async Task<string> CheckDatabaseExistsAndMigrateIfNew(
     ShardingSingleDbContext context, Tenant tenant,
     bool migrateEvenIfNoDb)
{
    if (!await context.Database.CanConnectAsync())
    {
        //The database doesn't exist
        if (migrateEvenIfNoDb)
            await context.Database.MigrateAsync();
        else
        {
            return $"The database defined by the connection string " +
                $"'{tenant.ConnectionName}' doesn't exist.";
        }
    }
    else if (!await context.Database
        .GetService<IRelationalDatabaseCreator>()
        .HasTablesAsync())
        //The database exists but needs migrating
        await context.Database.MigrateAsync();

    return null;
}

The migrateEvenIfNoDb parameter is there because EF Core’s Migrate can create a database if you have the authority, e.g. when in development mode and using a local SQL Server. But if you don’t have the authority, e.g. when in production and using Azure SQL Server, then the code must return an error if there isn’t a database.

That’s the end of the code needed to implement the sharding-only approach to multi-tenant applications. The next section shows the extra steps to support the hybrid approach that allows you to use shared-database approach and / or dedicated-database (sharding) approach at the same time.

2. Implement a hybrid multi-tenant application.

The figure below shows the hybrid design for multi-tenants, where a database can either contain many tenants (see the left and right databases) or just one tenant (see the middle two databases).

NOTE: You don’t have to use both the shared-database approach and the dedicated-database (sharding) approach. You can just use sharding if you want.

There is a runnable example of a hybrid multi-tenant application in the AuthP repo. Simply clone the repo and set the Example6.MvcWebApp.Sharding project as the startup project. The example assumes you have a localdb SQL Server and seeds the DefaultConnection database with three non-sharding tenants. The home page shows you how you can move one of the tenants to another database and make it a sharding tenant.

Here is a screenshot just after a non-sharding tenant called “Pets Ltd.” has been moved into its own database and the tenant is now using sharding.

Here are the steps for the hybrid approach, with changes from the sharding approach shown in bold.

  1. Decide on how to manage databases, especially in production
  2. Hold extra information of the tenant and its users in admin database
  3. When a user logs in, then add a ConnectionName and DataKey claim to their claims
  4. Provide a service to convert the user’s ConnectionName claim to a database connection
  5. Provide the connection string and DataKey to the application’s DbContext
  6. Use EF Core’s SetConnectionString method to set the database connection and DataKey
    1. “Turn off” the query filter
    2. Stop setting the DataKey on entites
  7. Migrate the tenant database if not used before
  8. Extra features available in a hybrid design.

1. Decide on how to manage databases, especially in production

Same as sharding-only approach – see this section.

2. Hold extra information of the tenant and its users in admin database

The hybrid approach needs an additional way to handle shared-database tenants, as they need some form of filter when accessing one tenant out of all the tenants in that database. In the AuthP library the Tenant creates a unique string for each tenant called the DataKey. This DataKey is injected into the tenant application’s DbContext and used in EF Core’s global query filter to only return the data linked to the Tenant. This is explained in the article called “Building ASP.NET Core and EF Core multi-tenant apps – Part1: the database”.

In addition, the AuthP Tenant contains a HasOwnDb boolean property, which is true if the tenant is using sharding. This HasOwnDb property is used in a few ways, for instance to remove the query filter on sharding tenants and to return an error if someone tries to add another tenant into a database that has already got a sharding tenant in it – see section 6.1 later for more on the query filter.

3. When a user logs in, then add a ConnectionName and DataKey claim to their claims

The hybrid approach needs both the ConnectionName claim and DataKey claim to handle the two types of database arrangement: the ConnectionName is used by every tenant to get the correct database and the DataKey is needed for the shared-database tenants.

However, you have one tenant DbContext to handle both shared-database tenants and sharding tenants, and you can’t change the query filter per tenant. This means you would be running a query filter on a sharding tenant which doesn’t need it – see section 6.1 for how I “turn off” the query filter on sharding tenants.

4. Provide a service to convert the user’s ConnectionName claim to a database connection

Same as sharding-only approach – see this section.

5. Provide the connection string and DataKey to the application’s DbContext

For the hybrid approach we need both the connection string and the DataKey, so a DataKey property is added to the scoped service that uses the IHttpContextAccessor to get the logged-in user (if present) with its claims. The updated service (see the GetShardingDataUserNormal class) provides both the ConnectionString and the DataKey to the tenant application’s DbContext.

Obtaining the DataKey is much easier than the connection string because the DataKey was already calculated when the claim was added, so it is just a matter of copying the DataKey claim’s Value into a DataKey property in the service.
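That copy can be sketched in a single expression (the claim type string and class name here are assumptions for illustration):

```csharp
using System.Linq;
using System.Security.Claims;

// Illustrative: pull the pre-calculated DataKey straight out of the
// logged-in user's claims; returns null if there is no such claim.
public static class DataKeyClaimSketch
{
    public const string DataKeyType = "DataKey"; // assumed claim type name

    public static string GetDataKey(ClaimsPrincipal user) =>
        user?.Claims.SingleOrDefault(c => c.Type == DataKeyType)?.Value;
}
```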

6. Use EF Core’s SetConnectionString method to set the database connection and DataKey

In a hybrid approach a tenant can use the shared-database approach or the sharding approach. Therefore, you have to add extra code to every tenant (including sharding ones) to handle the shared-database DataKey. This extra code includes adding a DataKey property / column to the tenant data classes, and the tenant DbContext must have a global query filter configured on all of the tenant data classes – this is covered in detail in the Part1 article in sections 6 and 7.

However, we don’t want a sharding tenant to take a performance hit because of the (unnecessary) global query filter, so how do we handle that? The solution is to use the Tenant’s HasOwnDb property to alter the DataKey.
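That alteration can be sketched as follows (the class and method names are illustrative; the real logic lives inside the AuthP library):

```csharp
// Illustrative version of the DataKey decision: single-level tenants with
// their own database get the special "NoQueryFilter" string instead of
// their normal DataKey, which later turns the query filter off.
public static class DataKeySketch
{
    public const string NoQueryFilter = "NoQueryFilter";

    public static string GetTenantDataKey(
        bool hasOwnDb, bool isSingleLevel, string normalDataKey)
    {
        return hasOwnDb && isSingleLevel
            ? NoQueryFilter   // sharding tenant: query filter will be bypassed
            : normalDataKey;  // shared-database (or hierarchical) tenant
    }
}
```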

In the AuthP library if the Tenant’s HasOwnDb property is true (and the tenant type is single-level), then the GetTenantDataKey method doesn’t return the normal DataKey, but returns the “NoQueryFilter” string. This allows two things to happen:

6.1. The query filter is “turned off”

AuthP contains an extension method called SetupSingleTenantShardingQueryFilter which adds a global query filter whose query can be forced to true if the DataKey is the special string “NoQueryFilter”. The code below shows a manual setup of what the extension method does (NOTE that the recommended automatic approach uses EF Core’s metadata methods).

modelBuilder.Entity<Invoice>().HasQueryFilter(
    x => DataKey == "NoQueryFilter" ||
         x.DataKey == DataKey);
modelBuilder.Entity<Invoice>().HasIndex(x => x.DataKey);
modelBuilder.Entity<Invoice>().Property(x => x.DataKey).IsUnicode(false);
modelBuilder.Entity<Invoice>().Property(x => x.DataKey)
    .HasMaxLength("NoQueryFilter".Length);

The important part is the first condition, where the DataKey from the claim is compared with the “NoQueryFilter” string. If that part of the query filter is true, then there is no need to filter on a DataKey. SQL Server’s execution planner will see that the WHERE clause is always true and will remove the WHERE clause from the execution.

NOTE: The AuthP library also supports a hierarchical multi-tenant type (see this article about hierarchical multi-tenants), and in that case you still need a DataKey to access the various levels in the hierarchical data. Therefore, AuthP won’t turn off the query filter for a hierarchical multi-tenant even if you give a tenant its own database.

6.2. It stops the setting of the DataKey

The other part of using a DataKey is setting the DataKey on any newly created entity. In a hybrid design, if the DataKey is “NoQueryFilter” the method returns immediately, thus removing the compute time needed to detect and update the entities that need a DataKey. See the code below for the updated MarkWithDataKeyIfNeeded method.

public static void MarkWithDataKeyIfNeeded(this DbContext context, string accessKey)
{
    if (accessKey == MultiTenantExtensions.DataKeyNoQueryFilter)
        //Not using the query filter, so don't take the time to update the entities
        return;

    foreach (var entityEntry in context.ChangeTracker.Entries()
                 .Where(e => e.State == EntityState.Added))
    {
        var hasDataKey = entityEntry.Entity as IDataKeyFilterReadWrite;
        if (hasDataKey != null && hasDataKey.DataKey == null)
            hasDataKey.DataKey = accessKey;
    }
}

This change doesn’t save much compute time, but I still think it’s worth doing.

7. Migrate the tenant database if not used before

Same as sharding-only approach – see this section.

8. Extra features available in a hybrid design.

To make this work in a real application you need some extra code, for instance to add a new connection string to the appsettings file while the application is running. There are also maintenance issues, such as converting a tenant from a shared-database to a dedicated-database tenant (or the other way around) while the application is running.

I will cover these issues and using Azure SQL elastic pools for sharding in a future article.

Conclusion

Multi-tenant applications allow you to serve many tenants from one software source. Having different levels of your service is also a good idea, as it allows your tenants to choose what level of service they want to pay for. And that payment needs to cover your costs for all the cloud service and databases you need to provide your multi-tenant application.

One feature of your multi-tenant application is its performance, that is its speed (how quickly a query runs) and scalability (Wikipedia defines scalability as the property of a system to handle a growing amount of work by adding resources to the system). The more tenants you have, the more work it takes to provide good performance.

Providing good performance requires lots of different parts: running multiple instances of the ASP.NET Core application, upgrading your web server / database server, caching and so on. Sharding is a good approach for handling tenants’ data, but like the other performance-improving options it increases the costs.

The client I talked about wanted something working as soon as possible (no surprise there!) so I built the shared-database tenant approach first. But they knew their big tenants would want their own database, not just for the performance but for the security. The hybrid approach handles that, and that’s why the AuthP library supports it.

So, if you are building a multi-tenant application for a company you might consider using my open source AuthP library to help you. The library contains all the setup and admin code, which is a lot to look through, but that’s because building a multi-tenant application isn’t simple to do. There is a lot of documentation, articles and a few videos, and if something isn’t clear then raise an issue and I will try to update the documentation.