How to update a NuGet library once the author isn’t available

Last Updated: May 28, 2024 | Created: May 27, 2024

I’m writing this article because I have an illness called dementia, which over time degrades a person’s ability to remember, think, and make decisions. This means at some point I won’t be able to build or update my open-source NuGet libraries, so this article shows you how to update my libraries when I’m not available.

I detected that something had changed because it affected my programming, in a bad way! This gave me early detection of my dementia, and ways to overcome some of the loss of programming skills that dementia had taken away. (See the ENDNOTE dementia and programming section at the end of this article.)

But my time is limited, so I have focused on making sure that users can still use my libraries before my dementia stops me from updating them myself. Dementia progresses differently for each person, so I can’t know whether I will be able to update my libraries for every new .NET. I’m pretty sure I will be able to update my libraries to .NET 9, but I can’t say for sure about later years.

The rest of this article tells you which of my libraries need to be updated when Microsoft creates a new .NET version, e.g. .NET 9. While I have 20 NuGets on nuget.org, only seven of the NuGets use a specific .NET version – the other NuGets use .NET Standard, which doesn’t change.

NOTE: The approaches I describe in this article for updating my libraries also work on any NuGet libraries that aren’t being updated by their author(s).

TL;DR; – Summary of this article

  • In this article I use the following names:
    • Library: I refer to the source code of a library with the name “library”.
    • NuGet: I refer to a library that has been turned into an easy-to-use file referred to as a NuGet. https://www.nuget.org is one place which holds NuGets. NuGets are normally managed by your development app, e.g. Visual Studio.
    • .nupkg file: When creating a new NuGet you might have to work with the file ending with .nupkg. Typically you will have to manually move / push a .nupkg file to whatever handles your NuGets.
  • Many of my libraries use .NET Standard, which works with any supported .NET version. This means that you don’t have to update these libraries. This section covers this.
  • The pros and cons of building libraries that need a specific .NET version are covered in this section.
  • I have seven libraries that use a specific .NET version, e.g. going from .NET 8 to .NET 9 (I call these “NET-specific libraries”). These libraries need to be updated when a new .NET version comes out, and at some point I won’t be there to update them. There are two things I have done to make updating easier:
    • I have made NET-specific libraries easier to update when a new .NET version comes out. This section covers this.
    • Then I detail how you can clone the library’s code, update the library to a newer .NET version, and create a NuGet version. This section covers this.
  • At the end of this article I have a section called ENDNOTE dementia and programming covering my experience of dementia and programming, both good and bad.

NOTE: I use Visual Studio Community to show you how to update a library because that’s what I use, but I expect other development tools can do this too.

My libraries that don’t need an update when a new .NET version comes out

Libraries that only use .NET Standard don’t have to be updated when a new .NET comes out, e.g. .NET 9. .NET Standard defines a set of fundamental APIs (commonly referred to as the base class library, or BCL) that all .NET implementations must implement.
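For example, a .NET Standard library declares its target in its .csproj file; because .NET Standard is implemented by every supported .NET version, this line never needs changing when a new .NET comes out. The file below is a minimal, hypothetical sketch, not one of my actual project files:

```xml
<!-- Minimal sketch of a .NET Standard library project file.
     netstandard2.0 is implemented by all supported .NET versions,
     so this never needs updating for a new .NET release. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```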

Eight of my libraries use .NET Standard and they typically provide a specific basic feature that can be used anywhere. For instance, the NetCore.AutoRegisterDi library can automatically register your services into Microsoft .NET’s dependency injection provider – this library has the most downloads at 3.1 million and is the smallest library with ~200 lines of code.

The table below contains the eight libraries using NET Standard with links to the NuGet, documentation and an article which gives you an overview of each library.

NuGet Name | Docs link | Article link
NetCore.AutoRegisterDi | ReadMe | Article
GenericServices.StatusGeneric | ReadMe | Article
EfCore.GenericBizRunner | Wiki | Article
Net.RunMethodsSequentially | ReadMe | Article
EfCore.GenericServices.AspNetCore | ReadMe | Article
Net.DistributedFileStoreCache | Wiki | Article
EfCore.GenericEventRunner and …DomainParts | Wiki | Article

The rest of this article covers the libraries that work with a specific .NET version, e.g. .NET 9. These libraries are focused on EF Core and ASP.NET Core applications.

The pros and cons of using full .NET version NuGets

Microsoft releases a new .NET version, e.g. .NET 9, every year. This has allowed Microsoft to add new features and improve the performance of .NET applications. These yearly releases have kept .NET up to date and very fast. But the downside is that if you want to update your application to a new .NET version, then you need to upgrade all the NuGets that use a NET-specific version, e.g. .NET 9.

So, if you can’t get every NuGet for the .NET version you are moving to, then your application might not work!

I have six libraries that use a NET-specific version and need updating every year. These libraries are listed below, ordered with the most downloaded coming first.

NuGet Name | Docs | Article
EfCore.TestSupport | Wiki | Article
EfCore.SchemaCompare | ReadMe | Article
EfCore.GenericServices | Wiki | Article
EfCore.SoftDeleteServices | Wiki | Article
AuthPermissions.AspNetCore and MultiProjPack | Wiki | Article
Net.LocalizeMessagesAndErrors | Wiki | Article

Sometimes an older NET-specific NuGet version, e.g. .NET 8, will work in a new application using a higher .NET release, e.g. .NET 9. It depends on whether the new .NET version changes some part of the code that the older NuGet uses. I think most of my .NET 8 libraries will work with .NET 9 because my libraries typically use the basic features, but you can’t be sure it will work! If you have a lot of tests to check everything works with the older NuGets, then this route is MUCH quicker.

I assume that most people will want a new NuGet that supports the new .NET version, and the rest of this article shows you how to update a library yourself. But first I talk about how I have changed my libraries so that it’s much easier to update them.

Making NET-specific libraries easier to update

When I created NET-specific libraries I usually created a NuGet which supported multiple .NET versions, for instance EfCore.TestSupport version 6.0.2 supports .NET 6, 7 and 8. The upside of supporting multiple .NETs is that if I add a new feature or fix a bug, then I can release one NuGet with the features/bug fixes that covers multiple .NETs.

But the downside is that it’s very hard to update to a new .NET release for three reasons:

  1. You can’t use Visual Studio’s “NuGet Package Manager” features to update a library. Instead you must manually edit all the .csproj files in the library code. I have been doing this for years and it’s a pain to do.
  2. It’s harder to find / update NuGets with a vulnerability. I found this when .NET 8 had a vulnerability in System.Data.SqlClient. I had to go through my libraries to select the non-vulnerable replacement, including the affected NuGets. You also can’t use GitHub’s useful dependabot PRs to fix vulnerabilities; you must manually edit the .csproj files.
  3. I found it’s harder to find deprecated NuGets, because sometimes a NuGet is valid in an older .NET version but is deprecated in the latest .NET version.

To make it easier for me, and you, I changed all my NET-specific libraries to only support one .NET version, fixed any vulnerabilities, and removed deprecated NuGets. I applied this to all of my NET-specific libraries and released new versions that only support .NET 8.

NOTE: If you are still using a .NET version below .NET 8, then the older versions of my libraries are still there for you.

Now that my NET-specific libraries are simpler to update and cleaner, the next part shows you how to create a new version of these NuGets if the author (e.g. me!) can’t update the library when another .NET is released.

How to upgrade a NET-specific library to a new .NET version

This (long) section shows how you update a library whose author(s) haven’t updated it to the new .NET version. I only assume you have access to the library’s code, e.g. on GitHub, and you have a development application that can edit, compile and test the library’s code, e.g. Visual Studio.

The steps are:

1a. Get the NuGet’s code into your development app

1b. Update the .NET TargetFramework of the library

1c. Update the NuGets

1d. Compile the changed code

1e. Run the unit tests

1f. Update the NuGets information

1g. Create a local .nupkg file

1h. Add a local NuGet source to your application

1a. Get the NuGet’s code into your development app

Nowadays most Microsoft open-source libraries can be found on GitHub, and that’s where my libraries are situated. You can usually find the NuGet’s source code by looking up the NuGet on www.nuget.org and clicking the “Source repository” link found on the RHS. Then you need to “Clone” the source code into your Visual Studio app.

Once you have cloned the library, I suggest you compile it and run any unit tests before you change anything. This will let you know if something isn’t working, e.g. the unit tests need the database connection string changed to match your setup, and it gives you a set of unit test results to compare against after you have updated to the new .NET version. See section ??LINK?? if your tests use databases.

1b. Update the .NET TargetFramework of the library

Each C# project contains a file ending in .csproj which holds the .NET version or versions that the project can work with. Because I have tidied up my libraries to only have one version, e.g. .NET 8, it’s very easy to change – you just have to update the <TargetFramework> line from the old .NET to the new .NET. This is simple to do via Visual Studio’s “Find and Replace > Replace in Files” feature; the screenshot below shows how to change a library using .NET 8 to .NET 9.
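As a sketch, the single line to change in each .csproj file looks like this (your library’s project files will contain other settings too):

```xml
<!-- Before the update: -->
<TargetFramework>net8.0</TargetFramework>

<!-- After the Replace in Files update: -->
<TargetFramework>net9.0</TargetFramework>
```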

Once you have clicked the “Replace All” button then each project will be updated to the new .NET version. At this point Visual Studio will show an error (see screenshot below) because Visual Studio can’t handle this change automatically. Clicking “Reload projects” normally fixes this, but in some cases I had to close Visual Studio and reopen the library to remove the errors.

1c. Update the NuGets

After the library has been updated to the new .NET, you need to update all the NuGets in every project in the library. The simplest way to update all the projects’ NuGets is to use Visual Studio’s “Manage NuGet Packages” feature, which is found by right-clicking the top “Solution” in the Solution Explorer window. The screenshot below shows Manage NuGet Packages in Update mode (Note: the screenshot was taken before .NET 9 was released so I turned on “prerelease”, but normally you would have “prerelease” turned off).

The obvious way to update all the NuGets in the library is to select the “Updates” button and tick “Select all packages” to update all the NuGets to the latest version. This is quick and it works, but it’s worth checking that you are using the lowest valid versions of the NuGets in the library – typically the lowest valid version is the one that ends with “.0.0”, e.g. 9.0.0. Having higher versions, e.g. 9.0.1, can cause problems if your application has the same NuGet at a lower version (i.e. 9.0.0) than the library has. In that case it will show an error saying that NuGet SomeName needs >= 9.0.1, but your app is using SomeName 9.0.0.
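For instance, a PackageReference pinned to the lowest valid version looks like this in the .csproj file (the package shown is just an example):

```xml
<ItemGroup>
  <!-- Pin to the lowest valid version (9.0.0), not the latest patch
       (e.g. 9.0.1), so an application using 9.0.0 of the same package
       won't hit a "needs >= 9.0.1" version conflict -->
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="9.0.0" />
</ItemGroup>
```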

There are a couple of my libraries that have specific versions for some of its NuGets:

  • EfCore.TestSupport: When updating my EfCore.TestSupport library you don’t want the highest versions of the xunit.core and xunit.assert NuGets. That’s because when Visual Studio creates a xUnit Test Project it doesn’t use the latest xunit versions. I suggest you create a xUnit Test Project via Visual Studio and find the xunit version it uses, then use the same version in EfCore.TestSupport’s xunit.core and xunit.assert NuGets.
  • AuthPermissions.AspNetCore: the AuthP library uses Microsoft.Graph version 4, not version 5. Therefore you should select Microsoft.Graph 4.54.0, and NOT the latest 5.?.? versions.

1d. Compile the changed code

Once you have changed the library’s .NET version (step 1b) and updated the NuGets (step 1c), you are ready to compile the code. I recommend you use Build > Rebuild Solution to compile the code, because changing the version and the NuGets has a lot of effects.

Normally the code compiles OK, but in some very rare cases the compile fails. If so, it’s likely that the new .NET version has changed, moved (e.g. changed a method’s name) or removed some code features. In this case you need to see what the problems are and decide what to do about them.

WARNING: In my EfCore.TestSupport library I added a feature called EnsureClean which uses code that is not supported by Microsoft (see this section about EnsureClean and why it might fail). If this occurs, I suggest you remove the SqlServer EnsureClean code and use the normal EnsureDeleted / EnsureCreated approach of creating an empty database for a test. (I like EnsureClean because it makes my database tests run quicker than EnsureDeleted / EnsureCreated).

1e. Run the unit tests

All of my NET-specific libraries have a Test project which uses my EfCore.TestSupport library to test the library. You are looking for every test to pass, but in some cases I have a failing test that marks a feature that doesn’t work (EfCore.SchemaCompare has one of those). That’s why I suggested you run the unit tests on the original library in step 1a: it shows you what a good run looks like.

NOTE: My EfCore.TestSupport library allows you to set up a SQLite in-memory database, and provides a way to set SqlServer and PostgreSQL connection strings – see this documentation on how to do this.

1f. Update the NuGets information

To define a NuGet there are lots of values you need to provide to create a valid NuGet. For most of my libraries there is one project file (.csproj) in the code that contains the settings to create a NuGet file. For instance, the EfCore.TestSupport NuGet has the NuGet information in the TestSupport.csproj file. So before you compile the library you need to update three values to create a new version, as shown below.

Name | Example values | Notes
PackageVersion and Version | 9.0.0 | Must be unique on your computer
PackageReleaseNotes | Updated to .NET 9 |
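Put together, the three values sit in a PropertyGroup roughly like this (a simplified sketch – the real TestSupport.csproj contains many more settings than these):

```xml
<PropertyGroup>
  <!-- Update these three values for each new release -->
  <PackageVersion>9.0.0</PackageVersion>
  <Version>9.0.0</Version>
  <PackageReleaseNotes>Updated to .NET 9</PackageReleaseNotes>
</PropertyGroup>
```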

In the case of the AuthP library, which has multiple projects going into one NuGet, I had to create a dotnet tool called JonPSmith.MultiProjPack, found on nuget.org. This uses a file called MultiProjPack.xml in the AuthP code, and the values you need to change are:

Name | Example values | Notes
version | 9.0.0 | Must be unique on your computer
releaseNotes | Updated to .NET 9 |

1g. Create a local .nupkg file

All my libraries, apart from AuthP, are designed to create a NuGet .nupkg file on every compile. You want to be in “Release” mode when compiling to create a NuGet because it will create a smaller and faster NuGet file (“Debug” NuGets are useful if you want to see debug information from the NuGet).

On compile in “Release” mode the NuGet file will be created in:

…<SolutionName>\<PrimaryProjectName>\bin\Release\<NuGetName>.nupkg

And here is a real example of my EfCore.TestSupport NuGet:

…\EfCore.TestSupport\TestSupport\bin\Release\EfCore.TestSupport.8.0.1.nupkg

In the case of the AuthP library, which has multiple projects to create the NuGet, I created a dotnet tool called JonPSmith.MultiProjPack to create the .nupkg file. The “How to create an AuthPermissions.AspNetCore NuGet package” section in the AuthP’s ReadMe file shows how to install and run this dotnet tool.

NOTE: You can read about why I create the JonPSmith.MultiProjPack dotnet tool in the ReadMe of the code.

TIP: I recommend using the NuGet Package Explorer app to check that the NuGet package you just created has the settings / information that you were expecting.

1h. Add a local NuGet .nupkg file source to your application

The previous step created the NuGet .nupkg file, but to use this file you need to set up Visual Studio’s NuGet Package Manager to handle local .nupkg files. Typically you would get NuGets via https://www.nuget.org, but Visual Studio’s NuGet Package Manager also has a way to find NuGets in a directory on your development computer.

To use this “local NuGets” feature you need to:

  1. Create a directory to hold the local NuGets. My local NuGet directory is called “LocalNuGet” in my user account, i.e. C:\Users\JonPSmith\LocalNuGet.
  2. Manually copy the new .nupkg file from the Project > bin > Release directory to the local NuGet directory you set up in the last step.
  3. Go into Options > NuGet Package Manager > Package Sources and add a new source whose source is a directory on your development computer – see the screenshot below.
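If you prefer editing files over the Visual Studio options dialog, the same package sources can be declared in a nuget.config file (the local path below is an example):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- the normal nuget.org source -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- a local directory holding your .nupkg files -->
    <add key="Local NuGet" value="C:\Users\JonPSmith\LocalNuGet" />
  </packageSources>
</configuration>
```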

After that you can click on “Package source” and select the “Local NuGet” source to access the local NuGet(s) you created. That allows you to update your application’s NuGets using both the normal NuGets from https://www.nuget.org and the local NuGets you created on your computer.

NOTE: In the case of the AuthP library, the MultiProjPack dotnet tool automatically copies the new NuGet to the “{USERPROFILE}\LocalNuGet” directory.

Conclusion

It would be great if all the NET-specific NuGets you use were always updated when a new .NET version comes out, but sometimes they aren’t. That usually means the author(s) should update the NET-specific NuGet and then push it to https://www.nuget.org. But in my case, I know that at some point I won’t be able to update my libraries, which would be a pity for the people who use them.

I hope I’ll be able to keep programming for years, but with dementia you just don’t know. That’s why I created this article now, while I’m doing well, so that you’re covered if I’m not around. The other reason for me to keep programming is that it helps me to counter dementia’s degrading of my brain – see the ENDNOTE dementia and programming after this conclusion.

One thing I would say is please don’t try to help me by sending me random Pull Requests. Even before I had dementia, I found some Pull Requests came with no information, and typically no test results, and it takes me time to work out what the PR does and whether it is useful. The really good improvements come from someone opening an issue or PR and having a conversation with me to work out the best way to do it – see this PR as an example.

ENDNOTE dementia and programming

It was my programming that alerted me that something had changed. Normally I can hold all the parts of the code in my head, and I would instinctively know where to go and what to do, but in 2022 I found I couldn’t hold the code in my head, which was devastating! It took over a year to get the diagnosis of dementia in Alzheimer’s disease, which I got in early 2024. NOTE: Currently there are no known cures for Alzheimer’s.

Once I had the diagnosis there was no one to tell me what this means for me, so I googled “Alzheimer’s disease” and what I found wasn’t nice, or useful. Thankfully a friend who is a dementia nurse came over and talked about what is happening to me. They didn’t sugar-coat the future, but they said what will happen and what I could do to slow down my decline. Thankfully the change in my programming meant that I caught the dementia early (most people are diagnosed in the middle stage of dementia), so I had time to work on things.

From my perspective the best suggestion my nurse friend gave me was Cognitive Stimulation Therapy (CST). The core of CST is that you tackle jobs that are hard, but ones you can complete. The typical suggestions are to do crossword puzzles, Sudoku, a Rubik’s Cube, etc. I do some of these, but for me the best CST work is programming! I had to change my approach to programming to counter dementia’s symptoms, for instance:

  • Because dementia affects my memory I have to use lists of the things to do and tick them off when I have done them (I rarely used worklists before).
  • Because dementia affects problem-solving I must accept that I will be slower when programming. I was very fast before (so my clients said) but now I’m five times slower or more.
  • I also have to handle the frustration when I can’t do something I could before, or it takes longer than before. I have found that trying to do something while I’m frustrated will usually fail and I will feel bad. I have to accept my situation and enjoy the things I can do.

A concrete example of how this works can be seen in this article and the many months of thinking, coding, and writing behind it. I started my cleaning with my biggest library, the AuthP library, which has 20 projects including seven example applications. During the cleaning of the AuthP library I felt I regained some of the programming skills that I had before dementia; for instance, a similar feeling that I could hold “the code in my head” started to come back. That was a WONDERFUL feeling, to see that some of the damage from dementia could be rolled back.

But on the other hand I now do need a worklist to help me manage the whole cleanup and update process, which covers many months. And I had to accept that I’m slower at programming, but the good news is that I CAN still code.

While I focused on dementia and programming because my blog is about programming, I also do lots of other things outside programming, like talking to people to keep my speech up and using apps to make sure I remember things and dates. In fact writing this article is another of CST’s “hard, but possible” jobs, especially as one of dementia’s symptoms is not remembering words.

The (long) journey to a better sharding multi-tenant application

Last Updated: August 18, 2023 | Created: August 18, 2023

This article focuses on the sharding approach to building a .NET multi-tenant application using the AuthPermissions.AspNetCore library (shortened to AuthP). After working exclusively on a sharding-only multi-tenant application I found various issues that make building and using such an application difficult. This article describes the issues I found and the changes I made in the AuthP version 6 release to improve sharding multi-tenant applications.

This article is part of the series that covers AuthP’s multi-tenant applications in general. The other articles in “Building ASP.NET Core and EF Core multi-tenant apps” series are:

  1. The database: Using a DataKey to only show data for users in their tenant
  2. Administration: different ways to add and control tenants and users
  3. Versioning your app: Creating different versions to maximise your profits
  4. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  5. Advanced techniques around ASP.NET Core Users and their claims
  6. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core
  7. Three ways to securely add new users to an application using the AuthP library
  8. How to take an ASP.NET Core web site “Down for maintenance”
  9. Three ways to refresh the claims of a logged-in user
  10. Use custom databases with the AuthP library – Part1: normal apps
  11. Making .NET sharding multi-tenant apps easier to use with AuthP 5.1 (this article)
  12. Use custom databases with the AuthP library – Part2: sharding / hybrid apps (coming soon)

TL;DR; – Summary of this article

  • This article is about a sharding multi-tenant application. AuthP has two types of sharding:
    • Hybrid, which supports both tenants that share a database with other tenants and tenants that have their own database.
    • Sharding-only, where every tenant has their own database.
  • Part 1 covers two types of multi-tenant applications: one where the admin is done by the app-maker and one where specific user(s) in a tenant handle the admin of their tenant.
  • Part 2 describes the changes to the AuthP’s sharding-only multi-tenant support, which:
    • Allow certain features to work, e.g. the “Sign up now” feature.
    • Improve the administration of a sharding-only multi-tenant application.
    • Add a service that makes creating / removing sharding-only tenants easier in code and human time.
  • Part 3 covers how you can use the new features in AuthP version 6. They are:
    • Where is each tenant’s data stored when using AuthP’s sharding?
    • A service that makes it simpler to create / delete a sharding-only tenant.
    • How the AuthP 6 “Sign up now” feature handles sharding-only tenants.
  • At the end of this article I provide links to relevant documentation, the file containing the steps to update a sharding multi-tenant app built with an older version, and a new example app.

Part 1: Two types of multi-tenant applications

Recently I have been working on the multi-tenant features in the AuthP library, and I started to see that there are different types of multi-tenant applications. This insight came while building a new sharding-only app and it helped me to see areas where I needed to improve the AuthP’s sharding features. The two main types of multi-tenant app are based on who administers the users and tenants – see the descriptions of the two types below.

1. Multi-tenant app controlled by the app-maker

This type of multi-tenant app is controlled by the organisation that owns the app. Normally there is some form of admin team (which I refer to as the app admin) in the organisation who manages the users and their data. Some examples of this type of multi-tenant app are:

  • A bank app allowing its users to manage their bank accounts.
  • An in-house app that controls the inventory in the organisation.

The pro of this style is that the organisation has good control over who can access the app, but the downside is you need an admin team to manage the users, tenants etc., e.g. adding / removing users, changing what they can do, etc.

2. Multi-tenant app controlled by tenant user

This type of multi-tenant app moves much of the admin work out to a tenant user. AuthP has the concept of a tenant admin, allowing a user to create their own tenant for their organisation (normally with a payment) and then manage the users in their tenant, such as adding new users, controlling what they can do, etc. Some examples of this type of multi-tenant app are:

  • GitHub, which allows users to create their own space and manage it.
  • Facebook / Meta, where you post comments, share photos and have friends.

The pros of this style are that the tenant user can quickly make changes and it reduces the size of the app admin team you need to support your multi-tenant application. The downside is the multi-tenant administration code is more complex, but AuthP has built-in solutions to most of these admin features.

NOTE: I was asked to design a large multi-tenant application for a client, and I personally know that there is a LOT of admin code. Also, the app admin team was very busy when a large organisation joined their multi-tenant application. This is why I put a lot of work into the tenant admin features, to reduce the amount of work the app admin team must deal with.

The AuthP library supports both multi-tenant types, but you will read how I needed to improve the AuthP’s features for sharding-only databases.

Part 2: The problems I found and what I did to improve sharding features

The AuthP’s implementation of sharding uses a “sharding entry” approach that links a tenant to a database. The diagram below shows the implementation for the two types: “controlled by the app-maker” and “controlled by tenant users”.

I found that both types could be improved, but the “controlled by tenant users” type had one feature that didn’t work as it should, and its admin code wasn’t that easy to use, especially around creating / deleting sharding-only tenants. The sections below cover the problems and how I solved them.


The list below details the various problems and what I did about them:

  1. The “Sign up for a new tenant” feature doesn’t handle sharding tenants
  2. Solved a limitation in ASP.Net Core’s IOptions feature (breaking changes)
  3. Add a service to make creating / removing sharding-only tenants easier.

2.1. The “Sign up for a new tenant” feature doesn’t handle sharding-only tenants

The AuthP’s “Sign up for a new tenant” feature allows a user to create a new tenant, which can significantly reduce the amount of work for your admin people. However, version 5.0.0 of the “Sign up” feature doesn’t contain code to set up a new sharding entry for a sharding-only tenant.

But it was then that I hit a limitation of ASP.NET Core’s IOptions feature – see the next section for what the problem is and how I got around it.

2.2. Solved a limitation in ASP.Net Core’s IOptions feature (breaking changes)

The current sharding entries are held in a json file and accessed via ASP.NET Core’s IOptions feature (similar to how ASP.NET Core uses an appsettings.json file). What I found was that when I added a new sharding entry to the json file, IOptionsMonitor wouldn’t see the change until a new HTTP request started – I assume it looks for updates at the end of every HTTP request. This caused me a problem, as there are cases where a new sharding entry is added and another service needs to retrieve that sharding entry within the same HTTP request.

There are a few ways to get around this, but in the end I changed to using my Net.DistributedFileStoreCache library (shortened to FileStoreCache), which uses a FileSystemWatcher so that updates are seen immediately. The upside of using the FileStoreCache is that it has a similar, very fast (~25 ns) read time to the IOptions approach. The downside is this change creates a breaking change for applications built with older versions of AuthP.

NOTE: I have created a detailed list of the changes in the UpdateToVersion6 file and a console application to help with updating your app’s sharding entries to the AuthP 6 format.

2.3. Add a service to make creating / removing sharding-only tenants

When you are adding a new sharding-only tenant there are four steps:

  1. Add a sharding entry to define the new database that the tenant will use.
  2. Create the tenant using AuthP’s AuthTenantAdminService which will…
  3. Run your ITenantChangeService code to set up the database.

…And if you are using a cloud database service, then you also need to create a new database.

NOTE: If you need to create / delete a cloud database, then you can do that within your implementation of the ITenantChangeService – create your cloud database in the create tenant method and delete your cloud database in the delete tenant method.

You could do this yourself with AuthP version 5 via manual steps, but in AuthP version 6 there is a new service called IShardingOnlyTenantAddRemove which runs the steps for creating and removing a tenant. This service makes it easier, and clearer, for an admin person to add or remove sharding-only tenants.

Part 3 – How you can use these new features

So, having told you what has changed in AuthP version 6, let’s see how you would use these updated features. I will cover the following topics, pointing out the things that have changed:

  1. Where is each tenant’s data stored when using AuthP’s sharding?
  2. A service that makes it simpler to create / delete a sharding-only tenant
  3. How the AuthP’s 6 “Sign up now” feature handles sharding-only tenants

3.1 Where is each tenant’s data stored when using AuthP’s sharding?

A multi-tenant application provides a service that an organization / person finds useful, but instead of deploying a single web app / database for each tenant there is one web app which segregates each tenant’s data. AuthP provides three ways to segregate the tenant’s data, as shown in the table below.

Type | Description | How is only the tenant’s data filtered?
1. One database | All the tenants’ data is in one database | Filter data by unique key
2. Hybrid | Some tenants have their own database, and some tenants share a database | Select the correct database, then filter data by unique key
3. Sharding-Only | Each tenant has their own database for its data | Select the correct database
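The “filter data by unique key” approach used by options 1 and 2 is typically implemented with EF Core’s global query filters. Below is a minimal sketch of that idea (my own illustration requiring the Microsoft.EntityFrameworkCore package, not AuthP’s actual code; the Invoice entity, the DataKey property, and the TenantDbContext name are all assumptions):

```csharp
using Microsoft.EntityFrameworkCore;

// Illustrative sketch only - not AuthP's actual code. The Invoice entity,
// the DataKey property, and the TenantDbContext name are all assumptions.
public class Invoice
{
    public int Id { get; set; }
    public string DataKey { get; set; }   // unique key identifying the tenant
    public string Description { get; set; }
}

public class TenantDbContext : DbContext
{
    private readonly string _dataKey;     // normally taken from the user's claims

    public TenantDbContext(DbContextOptions<TenantDbContext> options,
        string dataKey) : base(options)
    {
        _dataKey = dataKey;
    }

    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Global query filter: every query on Invoices automatically
        // returns only the rows for the current tenant's DataKey.
        modelBuilder.Entity<Invoice>()
            .HasQueryFilter(i => i.DataKey == _dataKey);
    }
}
```

The sharding-only option (3) doesn’t need the query filter at all, because selecting the correct database already isolates the tenant’s data.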

AuthP has supported all these ways to separate each tenant’s data since version 3, but when I ran a Twitter poll on which approach developers wanted, the “Tenants have their own db” option (option 4 in the diagram above) won by a large margin. That is what this article (and the new release of the AuthP library) is all about.

AuthP version 3 defined a way that a tenant can be linked to a database. That database may have data from many tenants in it (shared) or only one tenant’s data (sharding-only). In this article we are mainly looking at sharding, and in particular at the “Select the correct database” part, which applies to both the hybrid and sharding-only approaches. The diagram shows the four stages that set up the tenant’s data DbContext to the specific database for the tenant that the logged-in user is linked to.

AuthP version 6 still has these four stages, but the service that implements stage 2 had to change due to the limitation in the ASP.NET Core IOptionsMonitor feature. This created **breaking changes** in the version 6 release, so I created the UpdateToVersion6.md file which shows how to update an existing sharding multi-tenant application built using a previous release of the AuthP library.

NOTE: When fixing the ASP.NET Core IOptionsMonitor limitation I also found the existing code rather hard to understand, partly because the code was complex and partly because the names of some of the classes and methods didn’t really say what they were doing. Therefore I wanted to make the code easier to understand, but this created more breaking changes. Thankfully these changes are obvious, i.e. there will be a compile error because the names have changed.

3.2. A service that makes it simpler to create / delete a sharding-only tenant

Section 1.3 talks about a new service called IShardingOnlyTenantAddRemove that combines the creation of the tenant and its sharding entry. This service means that the admin user only has one page to fill in and the code sets up the rest. The service does the following:

  • Gathers the information needed to create the tenant and sharding entry:
    • The tenant name – from the admin user
    • The connection string holding the database server (code or human – see diagram below)
    • The database provider used to create the tenant database (e.g. SqlServer) – set by your code
  • Then, after various checks, it creates the sharding entry and the tenant in one go.

The diagram below shows the two web pages that the app admin user (think “controlled by app-maker” type) uses to create a new tenant in one go. The two pages show the two types of app: on the left is an application with a single database server, while on the right is an application needing multiple database servers.

NOTE: The Example7’s TenantController in the AuthP repo has an example of the multiple database servers option.

NOTE: AuthP’s sharding multi-tenant feature uses connection strings to define the database server, authentication, and other options, but AuthP will add the name of the database. That’s why the multiple database servers option above uses the name “Servers” in the connection string names.

The service has a companion class called ShardingOnlyTenantAddDto which makes setting up the data very simple. In fact the code below will work for both create tenant options, as it detects if you only have one valid connection string and sets the ConnectionStringName automatically.

public IActionResult Create([FromServices] 
    IGetSetShardingEntries shardingService)
{
    var dto = new ShardingOnlyTenantAddDto
    {
        DbProviderShortName = AuthPDatabaseTypes.SqlServer.ToString()
    };
    dto.SetConnectionStringNames(
        shardingService.GetConnectionStringNames());
    return View(dto);
}

NOTE: The name of the database and the sharding entry are created by the ShardingOnlyTenantAddDto’s FormDatabaseInformation method, which uses a timestamp as the database name, e.g. 20230808101149-892. If you want to use the tenant name instead, you can override the FormDatabaseInformation method.
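To illustrate the timestamp naming scheme, a name like 20230808101149-892 can be built as below. This is a sketch of the naming idea only, not the library’s actual FormDatabaseInformation code; the trailing digits are assumed here to be a uniqueness suffix.

```csharp
using System;

public static class DbNameSketch
{
    // Builds a timestamp-style database name such as "20230808101149-892".
    // A sketch of the naming idea only, not AuthP's actual code; the
    // three-digit suffix is assumed to be a uniqueness value.
    public static string TimestampName(DateTime utcNow, int suffix)
        => $"{utcNow:yyyyMMddHHmmss}-{suffix:D3}";
}
```

For example, `DbNameSketch.TimestampName(new DateTime(2023, 8, 8, 10, 11, 49), 892)` produces "20230808101149-892".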

3.3 How AuthP 6’s “Sign up now” feature handles sharding-only tenants

Earlier I said that the IOptionsMonitor limitation made it impossible to use the “Sign up now” feature to create a new sharding-only tenant (think “controlled by tenant user” type). Once the IOptionsMonitor limitation was overcome it wasn’t that hard to create a service that handles the tenant and its sharding entry.

The AuthP repo contains a DemoShardOnlyGetDatabaseForNewTenant service which handles creating a sharding-only tenant and the required sharding entry. The Example7 app in the repo has the “Sign up now” feature and uses the DemoShardOnlyGetDatabaseForNewTenant. The screenshot below shows this working – note that it allows the user to decide where they want their data located.

Lots of new / updated documentation for AuthP version 6

When I came back to programming after a 21-year break (I went to the dark side, e.g. I was a manager) I picked Microsoft because their documentation was so good. That’s why I know that documentation is (nearly) as important as the code.

So, with the release of version 6 of AuthP I have added / updated the documentation about sharding. Here is a list.

Document | What it’s about
Multi-tenant explained | Start here if new to AuthP
Sharding explained | Describes the two sharding modes (new)
How AuthP handles sharding | Describes the internal code (new)
Configuring sharding | How to set up a sharding app (new)
Extra help for sharding-only apps | Docs on the new features (new)
Managing sharding databases | Showing / changing tenants (updated)
UpdateToVersion6 | How to update your app to AuthP 6

The documents give you an overview of what to do, but I also took the time to add comments to all the code. That’s because sometimes it is easier to read the code to really understand what it does.

Also, the AuthP repo now has a new example called Example7 which implements a sharding-only multi-tenant application. The Example7 app uses all the new / updated features included in the new AuthP version 6 release.

Conclusion

When I started to look at improving AuthP’s sharding-only tenants I kept finding more and more issues, and I wasn’t sure it was worth doing the update, so I ran a Twitter poll to find out what the users of AuthP wanted. That poll clearly showed that developers preferred the sharding-only approach nearly twice as much as a shared database approach. This feedback made me spend more time to make the sharding-only feature work really well.

I’m very happy with the AuthP version 6 update because it fixed the limitations in sharding-only multi-tenant applications and improved the admin of creating / deleting tenants. The only problem is that AuthP version 6 contains breaking changes, which means you have to alter your sharding multi-tenant application when updating to AuthP version 6 – see UpdateToVersion6 for the details.

There were always going to be some breaking changes, but I found my previous sharding code was hard to understand so I tidied up that code at the same time. This adds extra breaking changes, but it turned out to be fairly simple. For instance, the update of Example6 (hybrid multi-tenant) from version 5 to version 6 wasn’t hard – usually it was just changing the names of the services and methods, with the complex changes only in the internal code.

I hope AuthP’s users will find version 6 helps them, and that anyone building a sharding-only multi-tenant application can get some ideas for their project. Do give me feedback on version 6; I am happy to answer your questions via the AuthP GitHub issues pages.

Happy coding.

Using custom databases with the AuthP library – Part1: normal apps

Last Updated: May 31, 2023 | Created: May 31, 2023

The AuthPermissions.AspNetCore library, referred to as the AuthP library, provides enhanced Roles authentication and multi-tenant services to a .NET application. The AuthP library needs to store information in a database; the previous version (4.2.0) supported SQL Server and PostgreSQL, but with the release of AuthP version 5.0.0 you can use the main database providers that EF Core supports. This feature is called “custom databases” and allows you to use database providers other than the built-in SqlServer or PostgreSQL ones.

This article explains the steps you need to take to use a different database provider with a normal (i.e. not sharding / hybrid) multi-tenant application. A second article called “Use custom databases with the AuthP library – Part2: sharding / hybrid apps” will cover using the “custom databases” feature with a sharding / hybrid multi-tenant application.

This article is part of a series that covers AuthP’s multi-tenant applications in general. The other articles in the “Building ASP.NET Core and EF Core multi-tenant apps” series are:

  1. The database: Using a DataKey to only show data for users in their tenant
  2. Administration: different ways to add and control tenants and users
  3. Versioning your app: Creating different versions to maximise your profits
  4. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  5. Advanced techniques around ASP.NET Core Users and their claims
  6. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core
  7. Three ways to securely add new users to an application using the AuthP library
  8. How to take an ASP.NET Core web site “Down for maintenance”
  9. Three ways to refresh the claims of a logged-in user
  10. Use custom databases with the AuthP library – Part1: normal apps (this article)
  11. Use custom databases with the AuthP library – Part2: sharding / hybrid apps (coming soon)

TL;DR; – Summary of this article

  • The AuthP library is designed to make building .NET multi-tenant applications easier by providing the backend design and admin features to get your application built more quickly.
  • The new AuthP version 5.0.0 contains the “custom databases” feature (plus other features – see next section), where you can now use any of the main EF Core database providers with AuthP. This article focuses on normal (i.e. not sharding / hybrid) multi-tenant applications.
  • To use the “custom databases” feature with a normal multi-tenant application you need to go through three stages:
    • Create an EF Core migration for your selected database provider
    • Create an extension method to register your custom database
    • Make various minor changes to your tenant data code to work with your custom database
  • There is a working example of a normal multi-tenant application using Sqlite as the custom database. You can find this in the AuthP.CustomDatabaseExamples repo – look at projects with a name starting with “CustomDatabase1.”.
  • This article compares a multi-tenant application using a built-in database provider with the same multi-tenant application using Sqlite as the custom database.

Summary of the new features in AuthP version 5.0.0

Before getting to the details of the new “custom databases” feature in AuthP version 5.0.0, here is an overall list of all the new features in this release:

  • BREAKING CHANGE in AuthP’s sharding / hybrid multi-tenant feature: If you are using AuthP’s sharding / hybrid features, then look at the UpdateToVersion5.md file for what you need to do.
  • This new release makes it easier to use, and change, the sharding / hybrid multi-tenant feature. The two items below cover the ease of use / change:
    • Easier to use: there is a new extension method called SetupMultiTenantSharding that sets up the sharding / hybrid multi-tenant feature. This makes it easier to set up this feature.
    • Easier to change: You can change the internal services / settings of the sharding / hybrid feature; e.g. one developer wanted to store the sharding data in a database, instead of in a json file, which this release allows. This is done by creating a new extension method containing the code in the SetupMultiTenantSharding extension method with your changes.
  • New feature: You can now use a range of database providers with the AuthP library. The table below shows the database providers AuthP 5.0.0 supports:
Supported database providers in V5.0.0 | Comments
Microsoft.EntityFrameworkCore.SqlServer | Built-in – already in AuthP library
Npgsql.EntityFrameworkCore.PostgreSQL | Built-in – already in AuthP library
Microsoft.EntityFrameworkCore.Sqlite |
Microsoft.EntityFrameworkCore.Cosmos |
Pomelo.EntityFrameworkCore.MySql | Pomelo Foundation Project
MySql.EntityFrameworkCore | MySQL project (Oracle)
Oracle.EntityFrameworkCore | Oracle

NOTE: The AuthP library uses Giorgi Dalakishvili’s EntityFramework.Exceptions library to detect concurrency and unique errors, and Giorgi only supports the main EF Core database providers, i.e. SqlServer, PostgreSQL, SQLite, MySQL, MySQL.Pomelo, and Oracle.

The rest of this article covers using a “custom database” with a single tenant database, and the Part2 article will show how to use “custom databases” with tenants in many databases (i.e. the sharding / hybrid approach).

Setting the scene – how AuthP’s multi-tenant feature uses databases

The AuthP multi-tenant feature provides nearly all the backend services / admin needed to build a multi-tenant application. I started building the AuthP library using a one / two database approach, which works for small / medium sized multi-tenant apps. Then in version 3.0.0 I added a “one db per tenant” approach (known as sharding) plus a hybrid design, which gives you more flexibility to handle both small and large tenants in the same multi-tenant app. The diagram below shows the four ways AuthP can handle databases to provide the right cost / performance for your multi-tenant project.

NOTE: If you aren’t creating a multi-tenant project, then you follow the first, “All in one Db” approach, with the AuthP data and any project data in one database.

Up to this point AuthP only supported the SQL Server and Postgres database types, but a few developers asked if I could support other database providers. So, when I found time, I created a new version of AuthP with the “custom database” feature that allows any database provider that EF Core supports.

Introducing the AuthP.CustomDatabaseExamples repo

To help you, and to make sure the custom database feature works, I created a repo called AuthP.CustomDatabaseExamples which contains two examples; this article covers the multi-tenant application that keeps all the tenant data in one database (see 1 and 2 in the Four Ways diagram).

To use a custom database you must change various code and migrations, which this article will explain. I chose Sqlite for my custom database type because the Individual User Accounts authentication supports Sqlite.

NOTE: Using Sqlite for the custom database examples requires extra code over other database providers. That’s because the Sqlite’s connection string doesn’t have a Server / host part but has a filepath in the data source part. Other databases, such as MySql, should be slightly easier as the connection string fully defines the server and database name.
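To illustrate the difference the NOTE describes, compare a typical SQL Server connection string, which names a server and a database, with a Sqlite one, which only has a file path (the values below are illustrative, not taken from the example code):

```
// SQL Server: a Server / host part plus a database name
Server=myserver.example.com;Database=MyAppDb;User Id=...;Password=...

// Sqlite: just a file path in the Data Source part
Data Source=MyAppDb.sqlite
```

This is why the Sqlite example needs extra code: AuthP can’t simply append a database name to the connection string as it can for server-based providers.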

The example works with AuthP’s “all tenant data in one database” approach and has three projects, all starting with CustomDatabase1.

Steps in building a normal AuthP application with a custom database

We start with an application using AuthP which uses one / two databases, i.e. 1 and 2 in the previous “Four Ways” diagram. The steps to use a custom database are listed below, and are detailed later.

  1. Create a migration of the AuthPermissionsDbContext for your custom database.
  2. Create an extension method to register your custom database.
  3. Other, non-AuthP things you need to think about

These stages are explained below.

1. Create a migration of the AuthPermissionsDbContext for your custom database

To create or update the AuthPermissionsDbContext’s database you need to create an EF Core migration, and each database provider needs its own migration based on your chosen custom database type.

If you are using AuthP’s built-in database providers, then the AuthP NuGet package contains migrations for SQL Server and Postgres. But when using the custom database feature you need to create the migration for your custom database type. There are many ways to create an EF migration, but personally I use an IDesignTimeDbContextFactory<TContext>. The code below comes from the CustomDatabase1’s AuthPermissionsDbContextSqlite class.

public class AuthPermissionsDbContextSqlite :
     IDesignTimeDbContextFactory<AuthPermissionsDbContext>
{
    // The connection string must be valid, but the  
    // connection string isn’t used when adding a migration.
    private const string connectionString = 
        "Data source=PrimaryDatabase.sqlite";

    public AuthPermissionsDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder =
            new DbContextOptionsBuilder<AuthPermissionsDbContext>();
        optionsBuilder.UseSqlite(connectionString);

        return new AuthPermissionsDbContext(optionsBuilder.Options, 
            null, new SqliteDbConfig());
    }
}

The following parts need to change for your custom database provider:

  • The connection string: it must be in the correct format for your custom database type, but when creating a migration the database won’t actually be accessed.
  • The UseSqlite call: you need to use the correct “Use???” method for your custom database provider.
  • The third parameter to the AuthPermissionsDbContext constructor: this allows you to add a provider-specific set of EF Core model commands to be run at the start of the AuthPermissionsDbContext’s OnModelCreating method. The main use is to set up concurrency tokens to capture concurrent changes to the same table. See the SqliteDbConfig class for the Sqlite concurrent change commands.

NOTE: Setting up the Sqlite concurrent change detection in the EF Core migration is a bit harder than for other database types. That’s because you need to add extra trigger code – see this article on what Sqlite needs and see the …Version5.cs file in the SqliteCustomParts Migrations folder.

2. Extension method to register your custom database

You need to create an extension method to register your custom database. You do this by copying one of the existing extension methods already in the AuthP code, such as UsingEfCoreSqlServer, and altering six parts:

  1. Set the AuthPDatabaseType to the enum AuthPDatabaseTypes.CustomDatabase.
  2. Change the AuthPermissionsDbContext to use your custom database provider.
  3. Link to the assembly containing your AuthPermissionsDbContext migration.
  4. Update the EntityFramework.Exceptions to your custom database provider.
  5. Add new code to register your custom database configuration code.
  6. Optional: Update the RunMethodsSequentially code to provide a global lock.

The diagram below shows how you would take the UsingEfCoreSqlServer extension method and alter it to become your custom database extension method (NOTE click to get a bigger version of the diagram).

The AuthP.CustomDatabaseExamples repo has a UsingEfCoreSqlite extension method in the SqliteSetupExtensions class which sets up a Sqlite database as the custom database. This has gone through the six steps shown in the diagram above.

3. Other, non-AuthP things you need to think about

There are a few other things that use your custom database outside the AuthP library. The main one is the tenant part, which provides the actual application that users will use. In the AuthP repo there is an example that mimics an application where you can enter invoices; another example is managing a shop’s sales / stock. If you want to use your custom database for your tenant data, then you need to set that up too.

The first two options in the “Four Ways” diagram show the two ways to handle the tenant data outside of sharding:

  1. “All in one Db”: your tenant data is within the same database, so it must use the same custom database.
  2. “Separate AuthP / tenant data”: In this case your tenant data doesn’t have to use the same custom database that AuthP uses. 

Comparing two AuthP multi-tenant examples to see the changes

I created an example multi-tenant application using Sqlite as the custom database by copying an existing multi-tenant application that used the built-in SqlServer provider – see Example3 in the AuthP repo. This allows me to compare the two versions to show what has changed.

I created a new repo called AuthP.CustomDatabaseExamples and copied / updated an example multi-tenant application, using three projects whose names all start with “CustomDatabase1.”. The three projects are:

Project | What it contains
CustomDatabase1.InvoiceCode | Code for the tenant data / features
CustomDatabase1.SqliteCustomParts | Contains the Sqlite code / migration
CustomDatabase1.WebApp | ASP.NET Core project providing a runnable web app

3a. Example3.InvoiceCode -> CustomDatabase1.InvoiceCode

The main changes are about using Sqlite for the invoice code. Here are the changes:

  • …AppStart.RegisterInvoiceServices – changed to use Sqlite
  • InvoicesDesignTimeContextFactory (an IDesignTimeDbContextFactory) to create a migration of the Invoice DbContext for Sqlite (the AuthP Example3 Invoice DbContext used the built-in SqlServer). See my comments at the end of this class, which provide one way to create the migration.
  • You need to create a Sqlite migration for the Invoice DbContext using the InvoicesDesignTimeContextFactory detailed above.

3b. New CustomDatabase1.SqliteCustomParts project

This project contains the Sqlite extras that you need to use Sqlite with AuthP.

3.c. Example3.MvcWebApp.IndividualAccounts -> CustomDatabase1.WebApp

The big change in the ASP.NET Core project is in the Program class. In the Program class I have added #regions to show what has changed.

The other change is in the appsettings.json file where you need to provide the Sqlite connection string, which is quite different from other database connection strings.

Conclusion

I haven’t had many requests for the “custom database” feature for AuthP, but like the support for multiple languages in AuthP, I expect many will find it useful once it’s there.

Creating the “custom database” feature for normal (non-sharding) applications was fairly quick, but doing the same for sharding / hybrid applications turned out to be much more complex. I decided to release AuthP version 5.0.0 without the sharding / hybrid “custom database” feature because this release contains other improvements that people have asked for.

Watch this space for Part2 of the “custom databases” article to see how to use the “custom database” feature when building sharding / hybrid multi-tenant applications. Happy coding.

A pattern / library for methods that return a status, including localization

Last Updated: January 26, 2023 | Created: January 26, 2023

This article is about a pattern / library that allows you to create methods that return a status, that is a success / fail result plus any error messages – I call this “returning a status”. It first covers a library called GenericServices.StatusGeneric (shortened to StatusGeneric) that implements the “return a status” pattern. Then it covers a different library called Net.LocalizeMessagesAndErrors (shortened to Localize-ME) that contains an implementation of the “return a status” pattern where the error messages can be in multiple languages (known as localization).

This article is part of a series about handling localization. The full list of articles in this series is:

  1. Improving the support of multiple languages in .NET applications
  2. Applying the improved multiple languages support to .NET applications
  3. A pattern / library for methods that return a status, including localization (this article)

TL;DR; – Summary of this article

  • The “return a status” pattern is useful wherever a method could return an error.
  • The StatusGeneric library provides a simple, but powerful implementation of the “return a status” pattern and this article provides information on how to use the StatusGeneric library.
  • A second library, Localize-ME, contains an implementation of the “return a status” pattern where the error messages can be returned in different languages. The localization part uses Localize-ME’s extra features on top of the .NET localization services.

1. Introduction to the StatusGeneric library

In 2014 I created a library using Entity Framework (known as EF6) which contains the “return a status” pattern inside it. Then in 2019, when .NET and EF Core were stable, I built a number of libraries that used the “return a status” pattern, so I created the standalone StatusGeneric library so that I could use it in lots of places / libraries. So far this library has been downloaded more than 200,000 times, so obviously others find it useful too.

The following subsections will introduce you to the StatusGeneric library and how you can use it, starting with what the returned status contains.

The returned status: IStatusGeneric and IStatusGeneric<out T>

The key part of the “return a status” pattern is the IStatusGeneric interface, which defines what the returned status contains. This interface, which I refer to as a status, is returned to the calling method and has the following properties:

  • An IReadOnlyList Errors property where the errors are stored. Each error is contained in a ValidationResult class, which contains the error message and optionally the name of the member the error is linked to (ASP.NET Core uses this to show the error next to the actual entry that has the error).
  • A boolean IsValid property, which is true if there are no errors.
  • A boolean HasErrors property, which is true if there are errors.
  • A string Message property, which can contain a success message if there aren’t any errors, or contains “Failed with nn errors” if there are errors in the Errors property.

I also created the IStatusGeneric<out T> interface, which adds a Result property to IStatusGeneric for methods that want to return a value as part of the status. The Result property is set to default if there are errors, to ensure that a failed status cannot return a usable result.
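Put together, the two interfaces described above can be sketched like this (an illustrative outline based on the property descriptions, not the library’s exact source code):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// An illustrative outline of the two status interfaces described above -
// not the StatusGeneric library's exact source code.
public interface IStatusGeneric
{
    // The errors; each ValidationResult holds the error message and,
    // optionally, the name of the member the error is linked to.
    IReadOnlyList<ValidationResult> Errors { get; }

    // True if there are no errors.
    bool IsValid { get; }

    // True if there are errors.
    bool HasErrors { get; }

    // A success message, or "Failed with nn errors" if there are errors.
    string Message { get; set; }
}

public interface IStatusGeneric<out T> : IStatusGeneric
{
    // The value to return; set to default if the status has errors.
    T Result { get; }
}
```

The `out T` makes the Result covariant, so a status holding a derived type can be used where a status of a base type is expected.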

How to use the StatusGeneric library in your code

The first step is to add the StatusGeneric NuGet package to the project where you want to use the StatusGeneric library. Then you can create a method that returns a status, as shown below. Below the code I describe the various parts:

public IStatusGeneric CheckNull(string? month)
{
    var status = new StatusGenericHandler();
    status.Message = "All went well";

    if (month == null)
        return status.AddError("input must not be null", 
             nameof(month));

    return status;
}

The list below describes the use of the StatusGeneric library in the method above:

  • The method returns a class that matches the IStatusGeneric interface.
  • You create a status using the StatusGenericHandler constructor. This sets up an empty status, i.e. it has no errors.
  • Optionally you can set a success Message. Note that if there are errors, the Message contains the string “Failed with nn errors”.
  • The AddError call adds an error to the status and then returns the status.
    • Optionally you can add the name of the member that has the error.
  • The final return sends back the status. In this case there are no errors, so it is valid.

NOTE: The name of the member that has the error must be the actual name of the property in a class, which in the case above would be Month, not month. I use a method called CamelToPascal (see this extension class in the Localize-ME library) that changes the first character to a capital.
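The conversion the NOTE describes is simple to write yourself; here is a minimal sketch of the idea (my own illustration, not the Localize-ME library’s actual extension method):

```csharp
using System;

public static class StringExtensionsSketch
{
    // Converts a camelCase name (e.g. from nameof(month)) to PascalCase
    // (e.g. "Month") by upper-casing the first character.
    // My own illustration, not Localize-ME's actual code.
    public static string CamelToPascal(this string name)
    {
        if (string.IsNullOrEmpty(name) || char.IsUpper(name[0]))
            return name;
        return char.ToUpperInvariant(name[0]) + name.Substring(1);
    }
}
```

With this, `nameof(month).CamelToPascal()` gives "Month", matching the property name that ASP.NET Core’s ModelState expects.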

And the code below provides an example of how to use the CheckNull method in an ASP.NET Core application, with a list of the various parts after the code:

NOTE: You need to add the EfCore.GenericServices.AspNetCore NuGet package to get access to the CopyErrorsToModelState method, which copies any errors to the Razor page’s ModelState.

[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult CheckNull(string? month)
{
    var status = _exampleMethods.CheckNull(month);
    if (status.IsValid)
        return RedirectToAction(nameof(Index), 
             new { message = status.Message });

    status.CopyErrorsToModelState(ModelState);
    return View();
}

The list below describes how the CheckNull method could be used in an ASP.NET Core application:

  • You call the CheckNull method and get back a status, which is of type IStatusGeneric.
  • If the status is valid, then you redirect back to the Index page, passing the success message from the CheckNull method to show to the user.
  • Otherwise, the status has errors, so we want to show the error(s).
  • Using the CopyErrorsToModelState method from the EfCore.GenericServices.AspNetCore library, the errors (which are stored using the ValidationResult class) are added to ASP.NET Core’s ModelState.
  • The final return goes back to the Get action and the error(s) in the ModelState are shown to the user.

Different ways to add errors to the status

The last section provided a simple example of using the StatusGeneric library, but in real life the checking for errors can be much more complex. Here are some of the situations you might come across and how the StatusGeneric library handles them:

  1. Using multiple checks to return all the errors to the user
  2. Combining errors from multiple statuses

1. Using multiple checks to return all the errors to the user

The example below shows how you might apply many checks so that the user gets all the errors in one go. This pattern is good for users, as all the errors are returned at the same time.

public IStatusGeneric CheckPassword(string password)
{
    var status = new StatusGenericHandler();

    //series of tests and then return all the errors together
    //Good because the user gets all the errors at once
    if (password.Length < 10)
        status.AddError("A password must be 10 or more in length",
            nameof(password));
    if (!password.Any(char.IsUpper))
        status.AddError("A password must contain an upper case character",
            nameof(password));
    if (!password.Any(char.IsLower))
        status.AddError("A password must contain a lower case character",
            nameof(password));
    if (!password.Any(char.IsDigit))
        status.AddError("A password must contain a number",
            nameof(password));
    
    return status;
}

2. Combining errors from multiple statuses

Sometimes the testing for errors is best coded by calls to other “return a status” methods, so the StatusGeneric library has a CombineStatuses method. This method copies the errors from the called “return a status” method’s status into the caller’s status.

The code below shows an example of logging in, with tests on the email, the password and the actual login part; if successful, it returns the userId.

public IStatusGeneric<string> Login
    (string email, string password)
{
    var status = new StatusGenericHandler<string>();

    status.CombineStatuses(
        CheckValidEmail(email));

    if (status.HasErrors)
        return status;

    if (status.CombineStatuses(
            CheckPassword(password)).HasErrors)
        return status;

    var loginStatus = LoginUser(email, password);
    status.CombineStatuses(loginStatus);

    status.SetResult(loginStatus.Result);

    return status;
}

The list below contains a description of how the Login method works:

  • Line 1: The Login method will return the logged-in user’s userId.
  • Lines 6,7: The Login method calls a CheckValidEmail method that returns a status, which is copied into the caller’s status via the CombineStatuses method.
  • Lines 9,10: This returns the combined status if the status has errors.
  • Lines 12 to 14: This shows a shorter way to return on a combined status that has errors.
  • Line 16: The LoginUser method returns a status that contains (if there are no errors) the logged-in user’s userId.
  • Line 17: Its status is combined to pick up any errors.
  • Line 19: This sets the string Result to send back with the method’s status. NOTE: if there are errors the Result is set to default, which for a string is null.
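A sketch of a caller using the Login method above (the email / password values and the console output are my own; IsValid is the StatusGeneric property that is true when there are no errors):

```csharp
var status = Login("me@example.com", "MySecurePassword1");

if (status.IsValid)
{
    // No errors, so Result holds the userId set by SetResult
    var userId = status.Result;
    Console.WriteLine($"Logged in, userId = {userId}");
}
else
{
    // Result is null here, because SetResult sets it to default on errors
    foreach (var error in status.Errors)
        Console.WriteLine(error.ToString());
}
```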

Real-world examples of using the StatusGeneric library

I built a large library called AuthPermissions.AspNetCore (shortened to AuthP) which allows developers to create multi-tenant applications (and other features). Up to version 4.0.0 the AuthP library uses the StatusGeneric library, and here is a link to the AuthTenantAdminService in the AuthP 4.0.0 version, which shows how it handles errors (tip: start at line 126, because that’s where the error handling starts).

2. How to use the Localize-ME library

At the end of 2022 I created another library, referred to as Localize-ME, that includes a version that supports the StatusGeneric’s interfaces, but handles multiple languages for the error messages (known as localization). I created this library to add localization to my AuthP library, and because the AuthP library already used the StatusGeneric library I added the StatusGenericLocalizer / StatusGenericLocalizer<T> classes to replace the StatusGenericHandler / StatusGenericHandler<T> classes.

The design of the StatusGenericLocalizer (and its IDefaultLocalizer service) is to have a default message / error within your code, in a language that you define when you register the IDefaultLocalizer service. This has two benefits: a) the code is easier to understand, and b) if localization isn’t turned on it can still provide the message / error (useful in NuGet packages).

NOTE: The Localize-ME library was built to add features that overcome certain restrictions in the .NET localization services – see this article for more on this.

The Localize-ME library has comprehensive documentation so this section is really about understanding the differences between the StatusGeneric library and the StatusGenericLocalizer class and its IDefaultLocalizer service in the Localize-ME library.

How is the StatusGenericLocalizer different from the StatusGeneric?

The code below does exactly what the StatusGeneric example shown in the “How to use the StatusGeneric library in your code” section ?LINK? does, but it’s a lot longer than the StatusGeneric version. That’s because we need to provide the service and keys to display the Message and Errors in different languages. The list after the code explains the parts that differ from the original StatusGeneric version ?LINK?.

public IStatusGeneric CheckNull(string? month)
{
    var status = new StatusGenericLocalizer(
         _defaultLocalizer);
    status.SetMessageString(
        "Success".ClassMethodLocalizeKey(this, false), 
        "All went well");

    if (month == null)
        return status.AddErrorString(
            "NullParam".JustThisLocalizeKey(this), 
            "The input must not be null.", 
            nameof(month).CamelToPascal());

    return status;
}

The list below contains a description of the use of the StatusGenericLocalizer class in the method above:

  • Lines 3,4: The StatusGenericLocalizer constructor needs an IDefaultLocalizer service. This requires you to set up localization in your application and register the IDefaultLocalizer service on startup.
  • Lines 5 to 7: This adds a default success Message with two parts:
    • Line 6:  This creates a unique key (see this document) to lookup the message/error in the localization resource files.
    • Line 7: This is the message to use if the culture (language) of the user / app matches the default culture. Note that if there are errors, then the Message contains the string “Failed with nn errors”.
  • Lines 10 to 13: This adds an error to the status and then returns the status
    • Line 10: This is a method that adds an error in the form of a constant string. Other methods allow FormattableStrings, which can contain dynamic data in the error.
    • Line 11: Creates a key to lookup the error message in the localization resource files.
    • Line 12: The error string in the default culture.
    • Line 13: Optionally you can add the name of the member that has the error. In this case the Localize-ME library contains an implementation of the CamelToPascal method, which turns the camelCase parameter name into the PascalCase name used in a class.

Real-world examples of using the Localize-ME library

As I said, I have a library called AuthP, and I released version 4.1.0 with localization via the Localize-ME library. In the “Real-world examples of using the StatusGeneric library” section ?LINK? I gave you a link to the AuthTenantAdminService in AuthP 4.0.0, before the localization.

This link to the AuthTenantAdminService comes from the main branch, which does the same as the AuthP 4.0.0 version, but it supports localization (tip: start at line 128, because that’s where the error handling starts).

Conclusions

The “returning a status” method pattern is a simple, but powerful, pattern. I have used this pattern so many times that I built “return a status” code in 2014. Then in 2019, when .NET was stable, I released the StatusGeneric library with a version improved over the 2014 implementation, based on feedback from using the first version for many years.

What I didn’t do well on the StatusGeneric library was documentation, and this article is one way to fix that issue, although I have also improved the StatusGeneric’s README file a few times to provide better documentation.

The Localize-ME library DOES have good documentation right from the start, with this page for the StatusGenericLocalizer classes. It needed the documentation because localization is MUCH more complex than the “return a status” pattern, but if you want to support multiple languages then you need it.

Happy coding.

Applying an improved multiple languages library to .NET applications

Last Updated: January 16, 2023 | Created: January 16, 2023

The last article covered why I added extra features to the .NET support of multiple languages (known as localization in .NET) via the Net.LocalizeMessagesAndErrors library (shortened to the Localize-ME library). This article provides the details of how you would use the Localize-ME library to add localization to your .NET application. The full list of the articles in this series is:

  1. Improving the support of multiple languages in .NET applications
  2. Applying the improved multiple languages support to .NET applications (this article)
  3. A library / pattern for methods that return a status, including localization (coming soon)

TL;DR; – Summary of this article

  • This article assumes you know the terms / concepts of localization. If you don’t, then go to the “A brief view of how .NET localization works” section, which introduces the localization concepts.
  • The Localize-ME library adds extra localization features focused on a) making it easier to localize an existing application, and b) stopping localization from making your code harder to understand.
  • The Localize-ME library provides two services
    • SimpleLocalizer service is good in small / simple applications and is simple to use.
    • DefaultLocalizer service is good for large applications with many localizations because it uses a more formal pattern that makes it easier to find / set up the localized messages.
  • This article provides eight steps to localize an ASP.NET Core application using either, or both, of the Localize-ME services. Each step provides a summary of what you need to do, with links to the Localize-ME documentation for the definitive details.

Super-quick explanation of what the Localize-ME library provides

The Localize-ME library adds extra localization features to improve localizing your application. The big difference from the .NET localization is that you keep your existing messages, error strings, etc. in your code (known as the default messages) while .NET localization would replace your messages with a key (referred to as localize key).

Keeping your existing messages in your application has lots of benefits, but the biggest are a) it’s easier to add localization to an existing application, and b) your code is easier to understand and test.

To make this work you provide the language (referred to as culture) that your default messages are written in when the Localize-ME library is registered. This allows the Localize-ME services to return the message in your code when the user’s / app’s culture matches the default messages culture. If the user’s / app’s culture isn’t the same as the default messages culture, then it uses the .NET localization services to look up the message in the correct resource file for the required culture.

The diagram below shows this for French (LHS), which isn’t the default messages culture, and English (RHS), which is the default messages culture. The blue italic words in the diagram explain the two different routes for the two cultures.

Setting the scene – the best places to use the Localize-ME library

The Localize-ME library adds two localize services on top of the .NET localization services which provide new (better) ways to localize your application. From my experience I created two services, SimpleLocalizer and DefaultLocalizer, that both localize a message, but each works better in different cases. The list below provides my take on where each is best used.

1. The SimpleLocalizer service is good for using in Razor pages etc.

The SimpleLocalizer service provides the simplest approach to obtaining a localized message. Its features that make it simpler are:

  • Auto localize key: the SimpleLocalizer service uses your message as the lookup key (which I call the localise key), which is unique, while the IStringLocalizer needs you to add a unique string as the key for your message.
  • Simpler injection: the SimpleLocalizer’s TResource part (section 3 later describes the TResource part) is set on startup, so you only need ISimpleLocalizer to get an instance, while the IStringLocalizer approach needs IStringLocalizer<TResource>.

2. The DefaultLocalizer service is better on large localizations with backend projects

When I started adding localization to a medium size application, I found string constants for the localize key were hard to remember and hence error prone. The DefaultLocalizer service uses extension methods to create the localize key, which has the following benefits:

  • Formal localize key design: The localize key has a {class}, {method} and {localKey}, which tells you what class and method the message came from in your code.
  • Auto fill in of {class} & {method}: the extension methods will fill in the {class} & {method} parts of the localize key for you.
  • Short, but unique localize key: The library has various ways to make short, but unique localize keys. See this section from the Localize-ME documentation on how that works.

The common features of both the SimpleLocalizer and DefaultLocalizer are:

  • Your code is easier to understand: the .NET localization services replace your messages with a localise key. The SimpleLocalizer / DefaultLocalizer services keep your messages.
  • Better missing entry handling: If the .NET localization services can’t find the localized message, then they return the localize key, which isn’t that useful to the user. The SimpleLocalizer / DefaultLocalizer services return the default message, which might be in the wrong language but can be translated by the user (they also log a warning with very detailed information on what was missing – see this section in the Localize-ME documentation).

Things that the Localize-ME library doesn’t provide

The .NET localization services contain features that I don’t try to provide in the Localize-ME library.

Using the Localize-ME library in an ASP.NET Core application

The list below contains the eight steps for adding localization to a .NET application, with examples from an ASP.NET Core application. Each step provides a summary and examples, with links to the Localize-ME documentation, which contains the more detailed information.

  1. Setting up the .NET localization
  2. Setting up the Localize-ME services
  3. Adding your resource files and TResource classes
  4. Getting an instance of the SimpleLocalizer
  5. Getting an instance of the DefaultLocalizer
  6. Using the SimpleLocalizer service
  7. Using the DefaultLocalizer service
  8. How to unit test your localized code

1. Setting up the .NET localization

The Localize-ME library relies on the .NET localization services so we start with this (the Localize-ME adds its extra features, which are described later).

On startup you need to register the .NET localization, its resource files (see section 3 later about resource files) and set up how to obtain the user’s / app’s language (known as culture in .NET). The Localize-ME documentation covers how to set this up within an ASP.NET Core application in some detail, with lots of links to useful information and three example applications, so go to that document for the full information.

I did want to point out how the .NET localization obtains the user’s / app’s culture, which is needed to return the messages in the correct language. By default the parameter called RequestCultureProviders within the .NET localization options has three ways to try to obtain the culture, which are used in turn until one provides a culture – see the list below:

  1. QueryStringRequestCultureProvider – uses culture data in the query string, which allows you to create a URL that will set the culture.
  2. CookieRequestCultureProvider – looks for a culture cookie, which is useful if you want users to select the language they want.
  3. AcceptLanguageHeaderRequestCultureProvider – this picks up data from the browser to set the culture.

These are the main approaches to get the user / app cultures, and they are described here.
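To make step 1 concrete, here is a typical registration sketch using the standard ASP.NET Core localization APIs (the culture list and the ResourcesPath value are example values of mine, not something the Localize-ME library mandates):

```csharp
// In Program.cs – standard .NET localization setup (not Localize-ME specific)
builder.Services.AddLocalization(options =>
    options.ResourcesPath = "Resources");

var supportedCultures = new[] { "en", "fr" }; // example cultures
builder.Services.Configure<RequestLocalizationOptions>(options =>
{
    options.SetDefaultCulture("en")
        .AddSupportedCultures(supportedCultures)
        .AddSupportedUICultures(supportedCultures);
    // options.RequestCultureProviders already holds the three providers
    // listed above, tried in that order
});

var app = builder.Build();
app.UseRequestLocalization(); // add before the endpoints that need cultures
```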

2. Setting up the Localize-ME services

I have made the setting up of the Localize-ME library as simple as possible. Here are the two registrations you need to add to your setup code in the Program class.

The code below registers the DefaultLocalizer – click here for information on the two parameters.

builder.Services.RegisterDefaultLocalizer(
    "en", supportedCultures);

The code below registers the SimpleLocalizer – click here for information on the optional parameter

builder.Services.RegisterSimpleLocalizer
    <HomeController>();

NOTE: The DefaultLocalizer service must be registered for the SimpleLocalizer service to work.

3. Adding your resource files and TResource classes

Resource files hold the messages, in a specific language, in the Value column, while the localise key is held in the Name column. The .NET localization services use the culture to pick the correct resource file and then use the localize key to obtain the correct message to show to the user.

You link a resource file to your localization service via a class, known as a TResource class, in the ASP.NET Core project. The resource file uses part of the TResource class’s FullName as the start of its filename. For instance, if you used the HomeController class as a TResource class and the language is USA English, then the resource file name would be as below (but see this link for the other file formats):

Controllers.HomeController.en-US.resx

Resource files aren’t the easiest part of the localization service, so I suggest you read the Localize-ME All about resource files documentation. This explains how to register them, and the different ways you can organise the resource files.

The other problem is finding all the localise keys and the appropriate localized messages and then adding them into the resource files. I give two approaches that I created when I was applying this library to my AuthP library.

4. Getting an instance of the SimpleLocalizer

Obtaining an instance of the SimpleLocalizer service is easy, as you have already defined the TResource class on startup.

  • If you are in C# code, then you use the interface ISimpleLocalizer via dependency injection to get the SimpleLocalizer service.
  • If you are in a Razor page (cshtml), you use “@inject ISimpleLocalizer SimpleLocalizer”.

NOTE: See this Localize-ME document for example code and more on the ISimpleLocalizerFactory service.
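For the C# case, here is a constructor-injection sketch (the controller and the message are my own example; LocalizeString is the SimpleLocalizer method shown in section 6):

```csharp
public class HomeController : Controller
{
    private readonly ISimpleLocalizer _simpleLocalizer;

    public HomeController(ISimpleLocalizer simpleLocalizer)
    {
        _simpleLocalizer = simpleLocalizer;
    }

    public IActionResult Index()
    {
        // With SimpleLocalizer the message itself acts as the localize key
        ViewData["Welcome"] = _simpleLocalizer.LocalizeString(
            "Welcome to my app", this);
        return View();
    }
}
```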

5. Getting an instance of the DefaultLocalizer

The way to get an instance of the DefaultLocalizer service is the same as getting the SimpleLocalizer service, but you need the TResource class to define which resource file group this service should use to look up the localized message.

The simplest approach is to use a dependency injection with the IDefaultLocalizer<TResource> interface, e.g. IDefaultLocalizer<HomeController>.

But if you have backend code in other projects you can’t do that, because the TResource class must be in the ASP.NET Core project and your backend projects can’t link to these TResource classes, because that would create a circular reference. In this case you need the IDefaultLocalizerFactory service and some options.

To use the IDefaultLocalizerFactory service you need to register a singleton class (shown as MyOptions in the code below) which contains the Type of the TResource class(es) you need in your backend code, and then use the IDefaultLocalizerFactory service within your backend code, as shown below.

public class MyBackendCode 
{
    private readonly IDefaultLocalizer _defaultLocalizer;

    /// <summary>
    /// ctor
    /// </summary>
    public MyBackendCode(MyOptions options,
        IDefaultLocalizerFactory factory)
    {
        _defaultLocalizer = factory.Create(
            options.BackendResourceType);
    }
    //… rest of class left out
}

NOTE: See the Getting an instance of the IDefaultLocalizer service documentation for more details.

6. Using the SimpleLocalizer service

The SimpleLocalizer service is simple to use! It has only two methods, and the localise key is derived from the message (see this section in the Using SimpleLocalizer documentation on how that works). The first method is shown below and handles a string message.

<label for="month">
    @(SimpleLocalizer.LocalizeString(
        "Provide a string (can be null)", this))
</label>

The other method shown below handles FormattableStrings, where you can provide dynamic data into the localized message.

<h1>
    @(SimpleLocalizer.LocalizeFormatted(
       $"List of {Model.NumBooks} books", this))
</h1>

NOTE: These two examples come from Razor pages, using the “@inject ISimpleLocalizer SimpleLocalizer” approach, but you can also use dependency injection within an ASP.NET Core Controller or Page.

7. Using the DefaultLocalizer service

The DefaultLocalizer service has only two methods, one taking a string and one taking a FormattableString.

Both methods take in two parts:

  1. The localize key that uses a formal design that can contain the {callingClass}, {method} and {localKey}.
  2. The default message, either in a string or a FormattableString.

The localize key is created by one of the localize key extension methods which contain various combinations of the {callingClass}, {method} and {localKey} – this link shows you the various methods / combinations and what situation each one is used for.

This makes calling a DefaultLocalizer method a bit more work than the SimpleLocalizer method calls, but the extra effort provides you with a localise key that is easy to understand and easily tracked back to where you made the localize call.

NOTE: There is an interesting section about DefaultLocalizer localize key creation called the balance between readable localize key and being unique, which provides two ways to create both short and unique localize keys.
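Putting the two parts together, a typical call might look like the sketch below, which uses the LocalizeStringMessage method and the MethodLocalizeKey extension from the Localize-ME library (the key and default message are my own examples):

```csharp
// _defaultLocalizer is an injected IDefaultLocalizer<TResource> service
var message = _defaultLocalizer.LocalizeStringMessage(
    "RoleExists".MethodLocalizeKey(this),        // localize key: {method}_{localKey}
    "There is already a Role with that name.");  // default message
```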

8. How to unit test your localized code

Once you change your code to use the Localize-ME library, you will need to provide an ISimpleLocalizer or IDefaultLocalizer service to run your tests. I recommend using a stubbing-out approach (see this Microsoft document about stubbing) in your unit tests, because the stub returns the default message in your code, which makes it easier to update your unit tests, and the tests are easier to understand.

The Localize-ME library contains stubs for the ISimpleLocalizer and IDefaultLocalizer services, called StubSimpleLocalizer and StubDefaultLocalizer respectively. The code below shows the use of the StubDefaultLocalizer (the StubSimpleLocalizer works the same way).

[Fact]
public void TestStubDefaultLocalizer()
{
    //SETUP
    var defaultLoc = new StubDefaultLocalizer();

    //ATTEMPT
    var message = defaultLoc.LocalizeStringMessage(
        "MyLocalizeKey".MethodLocalizeKey(this),
        "My message");

    //VERIFY
    message.ShouldEqual("My message");
    defaultLoc.LastKeyData.LocalizeKey.ShouldEqual(
        "TestStubDefaultLocalizer_MyLocalizeKey");
}

NOTE: The unit testing your localized code document gives more information on the StubSimpleLocalizer and StubDefaultLocalizer classes.

8a. Logging Localize-ME localization during unit testing

I also created a more powerful IDefaultLocalizer stub called StubDefaultLocalizerWithLogging, which returns the default message and optionally logs the full information of the localization data to a database. This provides a quick way to look at the localized messages, and it can find certain problems.

This stub is much more complex to use, but it does provide a very useful list of the localised messages. This helps in checking the localize keys and also speeds up the process of building the resource files – see this section from the first article where I explain why I found it so useful, but here is a screenshot of a section of the logged localization data. Note the PossibleErrors column, which has found an existing entry in the database with the same localize key, but where the message format is different. NOTE: Click the screenshot to get a bigger version.

NOTE: There is detailed documentation about how to setup and use the StubDefaultLocalizerWithLogging class.

Conclusions

I had a requirement from a developer to add localization to my AuthP library, so I started looking at the .NET localization services. The .NET localization didn’t have the features to provide an optional localization feature (i.e. your code still works without resource files) for my AuthP library as it was. So, I started to create the Localize-ME library, which makes localization optional in my AuthP library.

Once I knew I had to create a new library, I could reimagine how I would like to apply localization in a .NET application. For instance, I made it possible to keep your messages in your non-localized application, which means that your code is easier to update and understand. The .NET localization approach, which would move the messages to the resource files and replace them with a localize key, makes the code (a bit) harder to understand.

Unfortunately, I didn’t come up with a way to remove the manual / tedious job of building resource files, but you might like to look at section 8a about capturing the localize data while running your unit tests. Personally, I found this very useful in providing the data to help in building resource files.

Other improvements came as I started to use the Localize-ME library in different ways: from a test application, updating the AuthP library, and creating another test example in the AuthP library. Each usage was different, which highlighted different issues, and each issue often provided new approaches or features. This means it took way longer than I thought it would to create the library, but I’m pleased with the final result. I hope the Localize-ME library will help you too.

Extra note: I ran a Twitter poll on whether users of the AuthP library would use the new localization feature, and the votes were 10 to 1 in favour of needing the localization feature. This makes sense, as the AuthP library’s features help developers to create multi-tenant applications, which could be used in any country or countries.

Happy coding.

Improving the support of multiple languages in .NET applications

Last Updated: January 16, 2023 | Created: January 2, 2023

One of the users of my AuthPermissions.AspNetCore library (shortened to AuthP in this article) asked for support for multiple languages – known in .NET as localization. I looked at the .NET localization solution and it lacked some features I needed for my situation. I took some time to work out how to use the .NET localization code, and in the end I built a small library called Net.LocalizeMessagesAndErrors which wraps the .NET version with an interface that adds some extra features.

This article explains how this new library makes it easier to add / manage multiple languages in your applications, with further articles showing how to use this new localization library. The full list of the articles in this series is:

  1. Improving the support of multiple languages in .NET applications (this article)
  2. Applying an improved multiple languages library to .NET applications.
  3. A library / pattern for methods that return a status, including localization (coming soon)

TL;DR; – Summary of this article

  • This article provides a super-quick introduction to the .NET localization feature, as some of the concepts weren’t obvious to me at the start.
  • My problem was that if I added the .NET localization to my AuthP library, then no one could use the AuthP library unless they had set up the .NET localization with resource files, which is a lot of work.
  • My solution was to build a service that wraps around the .NET localization service and provides extra features. Specifically, you can build applications without .NET localization, and they will still work. This feature will also help developers who need to add localization to an existing application, as you can apply localization in stages.
  • Once I started looking at localization, I found several ways to either make the code easier to understand, or easier to use. The result is the Net.LocalizeMessagesAndErrors library.
  • After the quick explanation of how the Net.LocalizeMessagesAndErrors’s IDefaultLocalizer service works, I detail six localization challenges and how I got around them, plus a tip on how to set up localization resource files.

A brief view of how .NET localization works

At first, I was confused by how to support multiple languages because I had no idea what the terms mean and how they work together. Therefore, here is a short introduction from me, with links at the end to other articles that I found very useful:

  • Localization means providing different languages, e.g. English, German, Mandarin, in your application (I like the name Multilingual support, as it’s more obvious, but I use localization because that’s what Microsoft calls it).
  • Globalization is about showing dates and numbers in the correct format, with some compare / order string methods.
  • With .NET localization you store the different messages in resource files in your application. Each resource file has a name based on:
    • A name, usually taken from a class’s Fullname, e.g. Controllers.HomeController
    • A name representing the language it contains, e.g. “en-GB”
    • And has extension of .resx.
  • A resource file has entries with a Name (which I call the localize key) and a Value which holds your message in the correct language. The Name / Value entries in the resource file work like a dictionary, with the Name being the key.
  • You add a resource file for each language (known as culture) and in each resource you would add an entry for each message (Value) you want to show, with a unique localize key (Name) to use as the lookup.
  • You also need to set up the localization services – see this link for how to set this up in an ASP.NET Core application, plus the other links below.
  • You would get a localize service, like IStringLocalizer<TResource>, to obtain the localized message. There are three parts to get the localised message:
    • The start of the resource filename is defined by the TResource’s FullName.
    • It then adds the current culture Name from the user, cookie, or other source (depending on what you set up).
    • Your .NET service takes your localize key, e.g. _localizer[“YourKey”], and returns a string containing the entry found in the selected resource file.
    • You can also have formatted messages, such as $”Date is {0}”, which would need extra data, e.g. _localizer[“YourKey”, DateTime.Now].
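The lookup steps above can be condensed into a short sketch of standard .NET IStringLocalizer usage (the controller class and the keys are my own examples):

```csharp
public class HomeController : Controller
{
    private readonly IStringLocalizer<HomeController> _localizer;

    public HomeController(IStringLocalizer<HomeController> localizer)
    {
        _localizer = localizer;
    }

    public IActionResult Index()
    {
        // Looks up "YourKey" in e.g. Controllers.HomeController.fr.resx
        // when the current culture is French
        string simple = _localizer["YourKey"];

        // A formatted entry, e.g. "Date is {0}" in the resource file
        string formatted = _localizer["DateKey", DateTime.Now];

        return View();
    }
}
```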

Once I understood the names / concepts, Microsoft’s documentation of .NET localization made much more sense to me. Here are links to articles about .NET localization that I found useful:

Super-quick explanation of how the IDefaultLocalizer works

This article is about why I implemented the Net.LocalizeMessagesAndErrors library and what new features it contains, but here is an overview of the IDefaultLocalizer service to help you understand the extra features this service provides.

In a nutshell, the IDefaultLocalizer service lets you put strings like “Hello!” or FormattableStrings like $"The date is {DateTime.Now}" in your code (I use the term message for these two types), which makes your code easier to understand. Have a look at the diagram below and read the blue italic words, which explain how the IDefaultLocalizer service works.

This article doesn’t tell you how to use the Net.LocalizeMessagesAndErrors library, but it highlights the main change – there is a message in each call. If you want more information on how to use the library, then see the “How to add multiple languages to a ASP.NET Core using the Net.LocalizeMessagesAndErrors library” article (coming soon) or the Net.LocalizeMessagesAndErrors documentation.

The localize challenges I found and how I fixed them in my library

I spent a lot of time trying to come up with ways to use the .NET localization to work with my AuthP library, but it just didn’t work for me. Some of the problems were around adding localization to a NuGet package, but the biggest issue was the massive changes I would have to make to the AuthP library if I changed over to .NET localization.

The list below gives the localize challenges I found and how I overcame them. The list is in order with the biggest challenges first. They are:

  1. I didn’t want to turn all the messages / errors into just a localize key.
  2. I wanted a NuGet that works without having to setup the .NET localization.
  3. I didn’t like .NET’s localization’s handling of missing resource files / entries.
  4. I wanted a better format for the localise key to help create unique strings.
  5. I wanted to unit test without setting up localization.
  6. I wanted to check the localized messages with their localize key.
  7. Tip: Use Excel (or another app) to set up the data for the resource files.

1. I didn’t want to turn all the messages / errors into just a localize key

As of version 4.0.0, the AuthP library has 100+ messages over five projects. Most of these messages are error messages, while the rest are success messages. Here is an example of a success and an error message:

  • "Successfully added the new user."
  • $"There is already a Role with the name of {0}"

If I used .NET localization these messages would be turned into a localize key, which from my view has the following downsides:

  • The messages make great comments, and turning them into just a localize key would make the code harder to understand.
  • It’s a lot of work to move these messages to a resource file, and the messages are much harder to update.

My solution was to leave the current success / error messages where they are and define them as generic English (culture “en”) – I call these the default messages. I already have a common pattern for my methods / services which handles success and error messages, so it was easy to update the code to pass the default messages to my localization wrapper called DefaultLocalizer. The process the DefaultLocalizer follows is:

  1. On registration of the DefaultLocalizer service, I define the culture of the default messages, in this case “en”.
  2. If the user’s / app’s culture starts with the default culture, then the default message is returned without having to use .NET localization.
  3. If the user’s / app’s culture doesn’t start with the default culture, the DefaultLocalizer service uses .NET localization to obtain the message from the resource files.
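
The three steps above can be sketched in code. This is an illustrative sketch only, not the library’s real code – the _defaultCulture and _stringLocalizer fields are my assumptions, standing in for state set up when the service was registered.

```csharp
// Illustrative sketch of the DefaultLocalizer decision logic – an
// assumption based on the steps above, not the real library code.
public string LocalizeSketch(string localizeKey, string defaultMessage)
{
    // Step 2: if the current culture starts with the default culture
    // (e.g. "en-GB" starts with "en"), return the default message
    // without touching .NET localization at all
    if (CultureInfo.CurrentUICulture.Name.StartsWith(_defaultCulture))
        return defaultMessage;

    // Step 3: otherwise use .NET localization (IStringLocalizer) to
    // look up the message in the resource files
    var localized = _stringLocalizer[localizeKey];
    return localized.ResourceNotFound ? defaultMessage : localized.Value;
}
```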

Here is an example of my improved common method to handle localization showing a success message and an error message.

public class ExamplesOfStatusGenericsLoc<TResource>
{
    private readonly IDefaultLocalizer<TResource> 
        _defaultLocalizer;

    public ExamplesOfStatusGenericsLoc(
        IDefaultLocalizer<TResource> defaultLocalizer)
    {
        _defaultLocalizer = defaultLocalizer;
    }

    public IStatusGeneric CheckNull(string? month)
    {
        var status = new StatusGenericLocalizer(_defaultLocalizer);
        status.SetMessageString(
            "Success".ClassMethodLocalizeKey(this, false), 
            "Successful completion.");

        if (month == null)
            return status.AddErrorString(
                "NullParam".JustThisLocalizeKey(this), 
                "The input must not be null.");

        return status;
    }
//...rest of class left out

NOTE: If any errors are added to the status, then the Message is changed to “Failed with {n} errors”. That’s just in case the success Message is incorrectly shown.

2. I wanted a NuGet that works without having to set up .NET localization

If I just applied .NET localization to the AuthP NuGet library, it would mean everyone that used this library would have to set up .NET localization with resources etc. The library is already complex, and the extra work needed to understand / set up .NET localization would put developers off using this library.

The solution I added to the DefaultLocalizer service is to return the default message if .NET localization hasn’t been set up. This means when the localization version of the AuthP library is released:

  • The AuthP library doesn’t get any more complex unless the developer wants to use this new localization feature.
  • Developers that are already using the AuthP library can upgrade to the localization version without needing to change their code.

I hate to think what new and existing users would think if they had to set up .NET localization to use the AuthP library!

NOTE: You might not be creating a NuGet like I am, but if you are adding localization to an existing application, then this approach allows you to add localization in stages. That might be pretty useful.

3. I didn’t like .NET localization’s handling of missing resource files / entries

.NET localization will return the localize key if no entry is found in the resource files. This typically doesn’t provide a good experience for the user. The DefaultLocalizer service can instead provide the default message, which isn’t in the correct language but is easier for the user to translate.

The other issue with missing resource files / entries is reporting. .NET localization does provide a ResourceNotFound parameter, which will be true if the localized message isn’t found, but if you want a log / event then you need to add that to each localization call. On the other hand, the DefaultLocalizer service provides a very detailed log – an example is shown below.

The message with the localizeKey name of 
'MissingResourceEntry_MissingEntry' 
and culture of 'fr' was not found in the 
'LocalizedWebApp.Resources.Controllers.HomeController' 
resource. 
The message came from 
DefaultLocalizerController.MissingResourceEntry, line 38.

This provides everything you need to correct this problem, including the class, method, and line number of where the localization came from.

4. I wanted a better format for the localize key to help create unique strings

The .NET localization service allows you to use any string as the localize key, and it’s up to you to make sure it is unique. You can use string constants, e.g. “HelloMessage”, for the localize key, but when I built (and used!) the DefaultLocalizer service I found string constants were hard work and error prone.

My view is that string constants are fine for small applications, but for larger applications the localize key needs a standard format, plus methods to help the developer create unique localize keys quickly. My solution has a format of “{className}_{methodName}_{localKey}”, with the className and methodName being optional. The table below shows the three main versions that are used, with the first one used in ~90% of cases.

Localize key string – Uniqueness
  • “MyClass_MyMethod_SetByDev” – Unique in the class and method (most used)
  • “MyClass_SetByDev” – Unique in the class (useful for common errors)
  • “SetByDev” – It’s the developer’s job to ensure it is unique

To implement this localize key format I have created a set of extension methods that automatically fill in the “{className}” and “{methodName}” parts for you. This has three advantages:

  • Easier for the developer to create a unique localize key.
  • The developer can work out where the localize key was created.
  • You can cut / paste your localize code, and the class & method parts of the localize key will automatically change to match the new location.

Here are two examples taken from the ExamplesOfStatusGenericsLoc method shown earlier in this article:

  • "Success".ClassMethodLocalizeKey(this, false)
  • "NullParam".JustThisLocalizeKey(this),

You can find the full set of extension methods in the “Creating localize keys” document, which also covers some of the problems, and solutions, around balancing a readable localize key against keeping it unique.
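
To make the mechanics concrete, here is a minimal sketch of how such key-building extension methods might work. The method names and signatures are my illustrative assumptions – the real implementations are in the Net.LocalizeMessagesAndErrors source (see the “Creating localize keys” document).

```csharp
using System.Runtime.CompilerServices;

// Hypothetical sketch only – names / signatures are assumptions,
// not the library's real API.
public static class LocalizeKeySketchExtensions
{
    // "{className}_{methodName}_{localKey}" – unique in class + method.
    // CallerMemberName fills in the method name automatically, so the
    // key updates itself if you cut / paste the code to another method.
    public static string SketchClassMethodKey(this string localKey,
        object callerClass, [CallerMemberName] string methodName = "")
        => $"{callerClass.GetType().Name}_{methodName}_{localKey}";

    // "{className}_{localKey}" – unique in the class.
    public static string SketchClassKey(this string localKey, object callerClass)
        => $"{callerClass.GetType().Name}_{localKey}";
}
```

For instance, calling "SetByDev".SketchClassMethodKey(this) inside MyClass.MyMethod would produce “MyClass_MyMethod_SetByDev”.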

5. I wanted to unit test without setting up localization.

As I said, the AuthP library has five projects containing code, and I have nearly 400 unit tests, of which a third check error or success messages. If I used .NET localization on its own, then I could easily stub out (see this Microsoft document about stubbing) the .NET localization methods, but I would still have to change many of the unit tests to use the localize key instead of the actual error / success messages. It’s more work, and it makes the unit tests less easy to understand because the actual error / success strings are gone.

Because the DefaultLocalizer can return the default messages, it’s easy to create a DefaultLocalizer stub that returns the actual error / success strings. The Net.LocalizeMessagesAndErrors repo contains several stubs, but in this case you need the StubDefaultLocalizer class.

The StubDefaultLocalizer class has the same methods as the DefaultLocalizer class, but it a) returns the default message, and b) holds the localize key data of the last localization. This allows the unit tests to continue in the same way, but if I want to I can check the localize key. See the code below, which shows how the StubDefaultLocalizer class works.

[Fact]
public void TestStubDefaultLocalizer()
{
    //SETUP
    var defaultLoc = new StubDefaultLocalizer();

    //ATTEMPT
    var message = defaultLoc.LocalizeStringMessage(
        "MyLocalizeKey".MethodLocalizeKey(this),
        "My message");

    //VERIFY
    message.ShouldEqual("My message");
    defaultLoc.LastKeyData.LocalizeKey.ShouldEqual(
        "TestStubDefaultLocalizer_MyLocalizeKey");
}

This works fine, but I found another type of stub that solved another issue I came across, as described in the next section.

6. I wanted to check the localized messages with their localize key

Once I started to localize my AuthP library, which has ~110 localizations, I soon found I really needed an overview of all the localizations to check localize key uniqueness, format, duplicates etc. Stepping through the code to find each message is hard work, and it’s easy to miss one.

So, I thought – can I write the localization information to a database when running my unit tests? At that point I created the StubDefaultLocalizerWithLogging class, which returns the default message, but (optionally) logs the full information of the localization data to a database. This provides a quick way to look at the localized messages, and it can find certain problems.

For each DefaultLocalizer usage it logs the localize key, culture, the message and where the localized entry was created (the full list of what is in the log can be found in the LocalizedLog class, which has 9 parameters).

The screenshot below is a section of the logged localization data. Note the PossibleErrors column which has found an existing entry in the database with the same localize key, but the message format is different. NOTE Click the screenshot to get a bigger version.

I have found quite a few localization issues by looking through the sorted data. I also found the logged list very useful when building resource files for other languages, because it gives me the Name (localize key) and the Value (string / format) that need translating. My unit tests only find 75 localized messages when in fact there are ~110 localized messages. For the ~35 localized messages that aren’t logged I had to take three extra steps to set up the entry in the resource file(s):

  1. Find the code that created the localized message.
  2. Work out what the localize key is.
  3. Copy the message format.

These three manual steps are tedious and error prone. It’s enough to make me improve my unit test coverage 😊.

The only downside of logging to a database is that the unit tests are slower – the Net.LocalizeMessagesAndErrors library has ~100 unit tests, which take ~1.5 seconds without logging to the database, but ~2 seconds with logging to the database. In the AuthP library, which has nearly 400 tests, the difference between logging to the database being on / off is a smaller percentage.

Thankfully you can turn the database logging on or off by setting the `SaveLocalizesToDb` to true or false respectively – see the documentation for the StubDefaultLocalizerWithLogging here.

7. Tip: Use Excel (or another app) to set up the data for the resource files.

This isn’t anything to do with using the DefaultLocalizer, but I found that adding entries to a resource file isn’t a nice process in Visual Studio (VS Code, with the ResX Editor extension, is better). In the end I used Excel to enter the resource Name / Value pairs and then turned them into a .csv file. The code below (taken from my AuthP repo) converts the CSV to a resource file.

public void CreateResxFileFromCSV()
{
    var csvFilePath = "filepath to csv file";
    var resxFilePath = "filepath to EMPTY resource file";
    
    //see https://joshclose.github.io/CsvHelper/getting-started/#reading-a-csv-file
    using (var reader = new StreamReader(csvFilePath))
    using (var csv = new CsvReader(reader, 
          CultureInfo.InvariantCulture))
    {
        var records = csv.GetRecords<CsvInputOfResx>();
        //see https://learn.microsoft.com/en-us/dotnet/core/extensions/work-with-resx-files-programmatically#create-a-resx-fil

        using (ResXResourceWriter writer = 
             new ResXResourceWriter(resxFilePath))
        {
            foreach (var entry in records)
            {
                writer.AddResource(entry.Name, entry.Value);
            }
        }
    }
}

I find this especially useful if you need to change / add to your resource file, as it’s much easier to search / change in Excel.
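
Note that the CsvInputOfResx type used by csv.GetRecords<CsvInputOfResx>() isn’t shown above; a minimal version matching the Name / Value columns would look something like the code below (this is my assumption of its shape, not the class from the AuthP repo).

```csharp
// Assumed shape of each CSV row: one Name / Value pair per row
public class CsvInputOfResx
{
    public string Name { get; set; }   // the localize key
    public string Value { get; set; }  // the message / message format
}
```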

Conclusions

The Net.LocalizeMessagesAndErrors library is relatively small (the DefaultLocalizer only has ~100 lines of code), but it took me more than five weeks of work! That’s because when I started to use the library I found a load of improvements – I got to local version 1.0.0-preview034 before I had finished. The result is that the library is much easier to use when updating an existing application to support multiple languages, and hopefully nicer to work with.

The changes I added came from applying the library to a) my AuthP library, b) adding a demo ASP.NET Core app within the library repo (see LocalizedWebApp), c) localizing the Example1 ASP.NET Core app in my AuthP library, and d) writing the Net.LocalizeMessagesAndErrors documentation (writing the docs always shows me any bad interfaces).

In the following articles I will show how to use the Net.LocalizeMessagesAndErrors library to build localized .NET applications. There is also good documentation for this library that contains all the details, if you want to try it out now.

Happy coding.

ASP.NET Core: Three(+1) ways to refresh the claims of a logged-in user

Last Updated: October 25, 2022 | Created: October 18, 2022

An ASP.NET Core application uses claims to hold the logged-in user’s authentication and authorization data. These claims are created on login and stored in a cookie or a JWT Token for quick access. This makes access to the claims very fast, but the downside is that the claims are fixed. Most of the time the “fixed claims” approach works fine, but there are some circumstances where you might need to update the user’s claims.

This article describes three different ways to dynamically change a user’s claims for cookie and/or JWT Token authentication. The article also introduces some of the deeper parts of ASP.NET Core’s authorization and middleware, which might help you in understanding how ASP.NET Core works.

NOTE: This article covers some topics similar to the Advanced techniques around ASP.NET Core Users and their claims article, but includes new approaches provided by a new distributed cache library, Net.DistributedFileStoreCache (referred to as FileStore cache in this article). The first example has a small improvement, while the other two approaches are new and work with JWT Token authentication.

This article is part of the series that covers .NET multi-tenant applications in general. The other articles in “Building ASP.NET Core and EF Core multi-tenant apps” series are:

TL;DR; – Summary of this article

  • ASP.NET Core creates HttpContext.User on login, which contains user information (like their name and email) in claims. The User, with its claims, is stored in a cookie or a JWT Token for quick access. You can think of the cookie / JWT Token as a kind of cache.
  • If you have data that a) is used in almost every HTTP request, and b) takes some time to calculate, then it’s a good candidate to calculate and add as a claim during login. See this article on how to add your own claims on login.
  • By default, the claims in a logged-in user won’t change until they log out and log back in again. Normally the “fixed claims” approach is fine, but I have various situations where I need to update a user’s claims when managing multi-tenant users.
  • Therefore, I have needed to create ways to refresh the claims of a logged-in user. In this article I describe three approaches:
    • 1. Update user claims via cookie event: This is the standard way to update an already logged-in user’s claims, but it only works for cookie authentication – see this section on this approach.
    • 2. Replace a user claim on a change: This uses middleware to update a claim when the logged-in claim is out of date. This approach works with both cookie and JWT Token authentication – see this section on this approach.
    • 3. On-demand add a new claim: This uses middleware to add a new claim not already in your JWT Token or Cookie. This is useful if you have secret / personal data that you don’t want in a JWT Token because the data isn’t encrypted – see this section on this approach.
    • 4. Extra, IClaimTransformation: After I released this article Mohammed Ahmed Hussien (twitter: @shoogn17) said that the IClaimTransformation interface can also help on change a user’s claims, so I have added something about using IClaimTransformation instead of the middleware approach.
  • All these examples rely on a distributed cache called Net.DistributedFileStoreCache that I created for these types of situations. It gets a cache value in ~25 nanoseconds, but adding a cache value is slow-ish (> 1 millisecond). The very fast cache read means using these approaches won’t slow down your application.

Setting the scene – three types of dynamically updating a logged-in user’s claims

I have created the AuthPermissions.AspNetCore library (shortened to AuthP in this article), which contains a) an improved ASP.NET Core Roles authorization system and b) features to help create an ASP.NET Core multi-tenant database system. The AuthP improved Roles authorization adds a Roles / Permissions claim, and if the multi-tenant feature is activated, then a DataKey claim is also added.

In certain circumstances these AuthP Roles / Permissions and DataKey claims may change, and to handle this I have developed code to dynamically change a user’s claims. Here are some situations I have found:

  • In an AuthP multi-tenant application there is code to move a tenant’s data from one database to another, thus changing the tenant DataKey. This requires updating the DataKey claim of all the users linked to the moved tenant. You can try this out in AuthP’s Example4 hierarchical multi-tenant application and AuthP’s Example6 sharding multi-tenant application.
  • The AuthP library’s version of ASP.NET Core Roles allows a Role to be dynamically changed, which means that an admin user can change what pages / Web APIs are in a Role. If a Role is changed there might be security issues, so the user’s Roles / Permissions claim needs updating. You can try this out in AuthP’s Example2 WebApi application, which uses JWT Token authentication.
  • The third approach is useful if you need a secret or personal value claim when using JWT Token authentication. Such a value shouldn’t be added to the JWT Token because the token data isn’t encrypted. For instance, various personal privacy laws stop you from adding Personal Identifiable Information (PII) values to a JWT Token. You can try this out in AuthP’s Example2 WebApi application, which uses JWT Token authentication.
  • At the end I also cover the IClaimTransformation interface, which allows you to change a user’s claims. The IClaimTransformation service is an alternative to the middleware approach used in examples 2 and 3.

Introducing the three examples of updating a user’s claims

The three examples of updating a user’s claims use different approaches to implementation. This allows you to choose the approach that works for you, but even if you never need these approaches seeing how they use events and ASP.NET Core middleware might help you understand the ASP.NET Core code a bit more.

The three approaches in this article assume you have added extra claims to your users on login. This is described in this section of the article “Advanced techniques around ASP.NET Core Users and their claims”. I recommend you read that article if you aren’t aware of how to add extra claims to a user on login.

All the solutions rely on a distributed cache called Net.DistributedFileStoreCache that I created for these types of situations (you can find out more about the FileStore cache in this article). The FileStore cache’s key feature is that it has a very fast read time, measured in nanoseconds, which is important if you want your application to be fast, because each implementation is called on every HTTP request. It is also a distributed cache, so it will work on web sites using multiple instances.

Here is a list of the three examples with a quick summary, their pros and cons and a comment on performance:

  1. Update user claims via cookie event: Cookie authentication only.
    • Summary: This is the standard way to update an already logged-in user’s claims.
    • Pros: Can handle any type of change because it can change all the claims.
    • Cons: Doesn’t work with JWT Token authentication.
    • Performance: It re-calculates all the extra claims, but only for logged-in users. Very efficient, as the cookie is updated with the new claims.
  2. Replace a user claim on a change: JWT Token and cookie authentication.
    • Summary: This uses middleware to update a claim when the logged-in claim is out of date.
    • Pros: Works with both JWT Token and cookie authentication.
    • Cons: Could need a big cache file if there are lots of users.
    • Performance: It re-calculates one claim for all users, not just logged-in users. If there are a large number of users, then this can be slower than the third example.
  3. On-demand add a new claim: JWT Token and cookie authentication.
    • Summary: This adds a new claim not already in your JWT Token or Cookie.
    • Pros: Hides secret data, e.g. you shouldn’t add a Personal Identifiable Information (PII) claim when using a JWT Token.
    • Cons: None.
    • Performance: Very good, because the claim is only calculated if a user is logged in, but it would be slower than example 2 if a database change altered many users.

NOTE: At the end I talk about using services that follow the IClaimTransformation interface, which could be used in example 2 and 3 instead of adding middleware.

1. Update user claims via cookie event: Cookie authentication only

This example works with cookie authentication only, but it’s fairly simple and fast, i.e. it doesn’t add much extra time to each HTTP request, and it’s also quite easy to adapt to different situations. The downside is you can’t use this with a JWT Token.

The two parts of this implementation are:

  • Detect Change: The code links to EF Core’s StateChanged event and detects a change to the entries that would change the claim. On such an event it writes the current UTC DateTime to the “LastChanged” entry in the cache – see the left side of the diagram below.
  • Apply to User: The code links to the cookie’s OnValidatePrincipal event, and if the user’s “LastChanged” claim DateTime is older than the “LastChanged” DateTime from the cache, then it will update the user’s claims and also create a new authentication cookie – see the right side of the diagram below.

The figure below shows how this example works.

The main code to implement contains three pieces of code:

  1. The event code to detect the database changes that require an update to user’s claims
  2. The GlobalChangeTimeService which sets / gets the “LastChanged” entry
  3. The event code called by cookie’s OnValidatePrincipal to check / update a user’s claims

These are supported by two other services.

Let’s now look at the main code:

1.1 The event code to detect the database changes

I’m using EF Core, which has several events to track what is happening in the database. In this case I used EF Core’s ChangeTracker.StateChanged event (see this useful article about EF Core events). The code can be found in the TenantKeyOrShardChangeService class, which implements the IDatabaseStateChangeEvent interface and must be registered via the ASP.NET Core DI provider. The constructor of the application’s DbContext contains an extra parameter containing an IEnumerable<IDatabaseStateChangeEvent> that holds any registered classes that have the IDatabaseStateChangeEvent interface.

In this example I am looking for two changes to the Tenant entity:

  • The ParentDataKey property being modified – this would change the DataKey claim, which defines the tenant filter key (lines 13 to 14).
  • The DatabaseInfoName property being modified – this is used in sharding and would change the DatabaseInfoName claim, which defines what database to use (lines 15 to 16).

The code below is taken from the TenantKeyOrShardChangeService class and shows the code to register its event listener, and the actual event handler.

public void RegisterEventHandlers(
    AuthPermissionsDbContext context)
{
    context.ChangeTracker.StateChanged += 
        RegisterKeyOrShardChange;
}

private void RegisterKeyOrShardChange(object sender, 
    EntityStateChangedEventArgs e)
{
    if (e.Entry.Entity is Tenant
        && e.NewState == EntityState.Modified
        && (e.Entry.OriginalValues[nameof(Tenant.ParentDataKey)] != 
               e.Entry.CurrentValues[nameof(Tenant.ParentDataKey)] ||
            e.Entry.OriginalValues[nameof(Tenant.DatabaseInfoName)] !=
               e.Entry.CurrentValues[nameof(Tenant.DatabaseInfoName)])
        )
    {
        _globalAccessor.SetGlobalChangeTimeToNowUtc();
    }
}

The SetGlobalChangeTimeToNowUtc method is called if a modification is found. This method comes from the GlobalChangeTimeService class, which is described in the next section.

1.2 The GlobalChangeTimeService class

The GlobalChangeTimeService class provides a thin wrapper around the FileStore cache and has two methods that set and get the “ChangeAtThisTime” entry in the FileStore cache. It also handles the DateTime conversions. The methods are:

  • SetGlobalChangeTimeToNowUtc(), which sets the cache entry with the name “ChangeAtThisTime” with a value of DateTime.UtcNow.DateTimeToTicks()
  • GetGlobalChangeTimeUtc(), which returns a DateTime from the cache entry with the name “ChangeAtThisTime”, or DateTime.MinValue if that entry hasn’t been set.
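
Based on the description above, a sketch of the GlobalChangeTimeService could look like the code below. The cache key name comes from the text, but the cache method names and the ticks conversion are my assumptions – see the AuthP repo for the real class.

```csharp
// Sketch only – an assumption based on the description above
public class GlobalChangeTimeService : IGlobalChangeTimeService
{
    private const string ChangeAtThisTimeKey = "ChangeAtThisTime";
    private readonly IDistributedFileStoreCacheClass _fsCache;

    public GlobalChangeTimeService(IDistributedFileStoreCacheClass fsCache)
        => _fsCache = fsCache;

    public void SetGlobalChangeTimeToNowUtc()
        //store DateTime.UtcNow as ticks (the DateTime conversion part)
        => _fsCache.Set(ChangeAtThisTimeKey, DateTime.UtcNow.Ticks.ToString());

    public DateTime GetGlobalChangeTimeUtc()
    {
        var ticksString = _fsCache.Get(ChangeAtThisTimeKey);
        return ticksString == null
            ? DateTime.MinValue //the entry hasn't been set yet
            : new DateTime(long.Parse(ticksString), DateTimeKind.Utc);
    }
}
```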

1.3 The event code called by cookie’s OnValidatePrincipal

The event code can be found in the SomethingChangedCookieEvent class, and you need to register it in your Program class using the code shown below:

builder.Services.ConfigureApplicationCookie(options =>
{
    options.Events.OnValidatePrincipal = 
      SomethingChangedCookieEvent
          .UpdateClaimsIfSomethingChangesAsync;
});

The code shown below comes from the SomethingChangedCookieEvent class. The steps are:

  1. Lines 4 to 7: Gets the current user’s claims and the latest time a Global Change was found.
  2. Lines 9 to 11: If the user’s claims are older than the Global Change time, then it needs to update the user’s claims.
  3. Lines 19 to 25: This updates the AuthP claims via AuthP’s ClaimsCalculator, which recalculates the extra claims, including the claim that holds the last time the claims were updated.
  4. Lines 27 to 29: This a) creates a new User (of type ClaimsPrincipal) for this HTTP request, and b) in line 30 it says the authentication cookie should be updated with these new claims.

public static async Task UpdateClaimsIfSomethingChangesAsync
    (CookieValidatePrincipalContext context)
{
    var originalClaims = context.Principal.Claims.ToList();
    var globalTimeService = context.HttpContext.RequestServices
       .GetRequiredService<IGlobalChangeTimeService>();
    var lastUpdateUtc = globalTimeService.GetGlobalChangeTimeUtc();

    if (originalClaims.
           GetClaimDateTimeTicksValue(EntityChangeClaimType) 
        < lastUpdateUtc)
    {
        //Need to refresh the user's claims 
        var userId = originalClaims.GetUserIdFromClaims();
        if (userId == null)
            //this shouldn't happen, but best to return
            return;

        var claimsCalculator = context.HttpContext.RequestServices
            .GetRequiredService<IClaimsCalculator>();
        var newClaims = await claimsCalculator
            .GetClaimsForAuthUserAsync(userId);
        //Copy over unchanged claims
        newClaims.AddRange(originalClaims.
            RemoveUpdatedClaimsFromOriginalClaims(newClaims)); 

        var identity = new ClaimsIdentity(newClaims, "Cookie");
        var newPrincipal = new ClaimsPrincipal(identity);
        context.ReplacePrincipal(newPrincipal);
        context.ShouldRenew = true;
    }
}

private static IEnumerable<Claim> 
    RemoveUpdatedClaimsFromOriginalClaims(
        this List<Claim> originalClaims, List<Claim> newClaims)
{
    var newClaimTypes = newClaims.Select(x => x.Type);
    return originalClaims
        .Where(x => !newClaimTypes.Contains(x.Type));
}

2. Replace a user claim on a change: JWT Token and cookie authentication

This approach uses middleware to replace a claim – not by updating the stored claims as in the first approach, but by updating the current HTTP User on every HTTP request. The pro of this approach is that it works for JWT Token and cookie authentication, but the con is that it needs code to run on every HTTP request, which could cause performance problems. I only considered this approach after I had created the FileStore distributed cache, as it has a read time of ~25 nanoseconds.

Like the first example there are two parts to this approach:

  • Detect Change: The code links to EF Core’s StateChanged event and detects a change to the entries that would change the claim. In this case it calculates the new claim value for each affected user and stores each claim value in the cache – see the left side of the diagram below.
  • Apply to User: The extra middleware code runs after the authorization middleware and if a new claim for the current user is found in the cache, then it replaces the out-of-date claim and creates a new ClaimsPrincipal – see the right side of the diagram below.

The figure below shows how this example works.

The main code to implement contains two pieces of code:

  1. The event code to detect the database changes and add replacement claims to the cache.
  2. The middleware which updates the HTTP User if a replacement claim is found in the cache.

It also uses the Net.DistributedFileStoreCache library to provide a cache with a very fast read.

Let’s now look at the main code:

2.1 The event code to detect the database changes

In the first example the database event code just had to detect a change, so it used EF Core’s StateChanged event. In this example we need to detect a change and then calculate the updated claim once the database has been updated, which makes the code more complex. You can find the code in the RoleChangedDetectorService class, but because the code is quite long, I will describe the various parts with pseudo-code.

The code in the AddPermissionOverridesToCache method calculates the claim for each affected user and stores the new claim in the FileStore cache with a key containing the userId of the user that the claim applies to.

The RoleChangedDetectorService implements the IDatabaseStateChangeEvent interface and must be registered via the ASP.NET Core DI provider. The constructor of the application’s DbContext contains an extra parameter containing an IEnumerable<IDatabaseStateChangeEvent> that holds any registered classes that have the IDatabaseStateChangeEvent interface.
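
The pseudo-code below sketches what the AddPermissionOverridesToCache step might look like. The CalcPermissionsForUserAsync helper is a hypothetical name I use for illustration; the per-user key format reuses the FormReplacementPermissionsKey extension that appears in the middleware code shown later.

```csharp
// Pseudo-code sketch of the cache update – names are assumptions
private async Task AddPermissionOverridesToCacheSketch(
    IEnumerable<string> affectedUserIds)
{
    foreach (var userId in affectedUserIds)
    {
        // recalculate the Permissions claim value for this user
        // (CalcPermissionsForUserAsync is a hypothetical helper)
        var newPermissions = await CalcPermissionsForUserAsync(userId);

        // store it under a per-user key so the middleware can pick it
        // up on that user's next HTTP request
        await _fsCache.SetAsync(
            userId.FormReplacementPermissionsKey(), newPermissions);
    }
}
```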

2.2. The middleware which updates the HTTP User outdated claim

The middleware code can be found in the UpdateRoleClaimMiddleware class, which provides both the extension method to register the middleware and the code that will be called on every HTTP request. Here is the code you need in your Program class to add this into the middleware pipeline (see highlighted line) – note that the UsePermissionsChange method must come after UseAuthorization.

//other code left out
app.UseAuthentication();
app.UseAuthorization();
app.UsePermissionsChange();
//other code left out

The actual method in the UpdateRoleClaimMiddleware class that updates a claim, if that claim has been updated, is shown below, with this list explaining what each part does:

  1. Lines 4 to 5: It only looks at logged-in users; requests without a logged-in user are ignored.
  2. Lines 12 to 13: This looks for a replacement value for the Permissions claim. If it’s null, then there is no replacement and the current User is used.
  3. Lines 17 to 23: This gets the current User’s claims and replaces the Permissions claim with a new claim whose value is taken from the found cache entry.
  4. Lines 25 to 28:  This creates a new ClaimsIdentity containing the updated claim. This new user sent back to the outer code which assigns it to the HTTPContext’s  context.User property.
public static async Task<ClaimsPrincipal> ReplacePermissionsMiddleware(
    IServiceProvider serviceProvider, ClaimsPrincipal user)
{
    var userId = user.GetUserIdFromUser();
    if (userId != null)
    {
        //There is a logged-in user, find if the
        //FileStore cache contains a new Permissions claim
        var fsCache = serviceProvider.GetRequiredService
            <IDistributedFileStoreCacheClass>();

        var replacementPermissions = await fsCache.GetAsync(
            userId.FormReplacementPermissionsKey());
        if (replacementPermissions != null)
        {
            //Replacement permissions claim found, so update the User

            var updateClaims = user.Claims.ToList();
            var found = updateClaims.FirstOrDefault(c =>
                c.Type == PermissionConstants.PackedPermissionClaimType);
            updateClaims.Remove(found); 
            updateClaims.Add(new Claim(
                PermissionConstants.PackedPermissionClaimType, 
                replacementPermissions));

            var appIdentity = new ClaimsIdentity(
                updateClaims, 
                user.Identity!.AuthenticationType);
            return new ClaimsPrincipal(appIdentity);
        }
    }
    
    return null; //no change to the current user
}

The ReplacePermissionsMiddleware method is called from the code that registers the middleware. If the method returns null, then the current HTTP User is unchanged as there was no update. If there is an update to the User’s claims the method returns a new User (ClaimsPrincipal), which is then assigned to the HttpContext.User property.
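As a rough sketch of that registration side (the extension method name comes from the article, but the plumbing shown here is an assumption, not the exact AuthP code), the middleware could be registered like this:

```csharp
using Microsoft.AspNetCore.Builder;

//An assumed sketch; the real UpdateRoleClaimMiddleware class differs in detail
public static class UpdateRoleClaimMiddlewareExtensions
{
    public static void UsePermissionsChange(this IApplicationBuilder app)
    {
        app.Use(async (context, next) =>
        {
            var newUser = await UpdateRoleClaimMiddleware
                .ReplacePermissionsMiddleware(context.RequestServices, context.User);
            if (newUser != null)
                context.User = newUser; //swap in the User with the updated claim

            await next(context);
        });
    }
}
```

Because the swap happens before `next` is called, every later part of the pipeline (and your controllers) sees the updated claims.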

3. On-demand add a new claim in middleware: JWT Token and cookie

The final example is similar to the second example in that it uses middleware, but here it’s the middleware that calculates the claim rather than the database event code. This approach is good when you have secret / sensitive claims that you don’t want to put in a JWT Token, because a JWT Token isn’t encrypted. For instance, various privacy rules say that Personally Identifiable Information (PII), e.g. a user’s email address, shouldn’t be held in a JWT Token.

The two parts to this approach are:

  • On-demand Claim: The middleware code will look for the claim value in the cache. If the cache value is empty, then it will access the database to get the latest value and store that in the cache. Finally, the code will add the claim to the HTTP User.
  • Detect Change: If a database change alters the cache value, then it removes any existing cache value, thus causing the middleware to recalculate the claim value for the user.

The figure below shows how this example works.

The implementation contains two main pieces of code:

  1. The middleware which adds a new claim to the HTTP User from the cache.
  2. The event code to detect the database changes and removes the cache value.

It also uses the Net.DistributedFileStoreCache library to provide a cache with a very fast read.

Let’s now look at the main code:

3.1 The middleware which adds a new claim

The middleware code can be found in the AddEmailClaimMiddleware class, which provides both the extension method to register the middleware and the code that will be called on every HTTP request. Here is the code you need to add to your Program class to place this code into the middleware pipeline (see highlighted line) – note that the UseAddEmailClaimToUsers method must come after the UseAuthorization method.

//other code left out
app.UseAuthentication();
app.UseAuthorization();
app.UseAddEmailClaimToUsers();
//other code left out

The actual method in the AddEmailClaimMiddleware class that adds the new claim is shown below, with the list below explaining what each part does and where that code is found:

  1. Lines 4 to 5: It only looks at logged-in users; requests without a logged-in user are ignored.
  2. Lines 11 to 13: This looks for a cached value for the Email claim. If it is null, then it needs to access the database to get the latest email of the current HTTP User.
  3. Lines 16 to 26: This gets the current user’s email from the database and adds a cache entry so that the next time it doesn’t have to query the database.
  4. Lines 30 to 35: This creates a new ClaimsIdentity containing the added email claim. This new user is sent back to the outer code, which assigns it to the HttpContext’s User property.
public static async Task<ClaimsPrincipal> AddEmailClaimToCurrentUser(
    IServiceProvider serviceProvider, ClaimsPrincipal user)
{
    var userId = user.GetUserIdFromUser();
    if (userId != null)
    {
        //There is a logged-in user, so see if the cache contains their email
        var fsCache = serviceProvider.GetRequiredService
            <IDistributedFileStoreCacheClass>();

        var usersEmail = await fsCache.GetAsync(
            userId.FormAddedEmailClaimKey());
        if (usersEmail == null)
        {
            //Not set up yet, get the user's email 
            var context = serviceProvider.GetRequiredService
                 <AuthPermissionsDbContext>();
            usersEmail = context.AuthUsers
                 .Where(x => x.UserId == userId)
                 .Select(x => x.Email).FirstOrDefault();
            
            if (usersEmail == null)
                return null; //shouldn't happen, but could in certain updates

            //Add to cache so next time it will be quicker
            await fsCache.SetAsync(userId.FormAddedEmailClaimKey(), usersEmail);
        }

        //We need to add the Email from the cache
        var updateClaims = user.Claims.ToList();
        updateClaims.Add(new Claim(ClaimTypes.Email, usersEmail));

        var appIdentity = new ClaimsIdentity(updateClaims,
            user.Identity!.AuthenticationType);
        return new ClaimsPrincipal(appIdentity);
    }
    
    return null; //no change to the current user
}

The AddEmailClaimToCurrentUser method is called from the code that registers the middleware. If the user is logged in, the method will return a new User (ClaimsPrincipal) with the email claim added, which is then assigned to the HttpContext.User property. For users that aren’t logged in, the method returns null, which means the current User should be used.

3.2. The event code to detect the change of a user’s email

The database event code is very simple – if a user’s email has changed, then make sure the cache entry linked to the user’s email is removed. That means the next time that user accesses the web app the AddEmailClaimMiddleware will recalculate the user’s email (and add the cache entry).

The code below comes from the EmailChangeDetectorService class.

public void RegisterEventHandlers(AuthPermissionsDbContext context)
{
    context.ChangeTracker.StateChanged += 
         delegate(object sender, EntityStateChangedEventArgs e)
    {
        if (e.Entry.Entity is AuthUser user
            && e.NewState == EntityState.Modified
            && e.Entry.OriginalValues[nameof(AuthUser.Email)] 
               != e.Entry.CurrentValues[nameof(AuthUser.Email)]
            )
            //Email has changed, so we remove the current cache value
            _fsCache.Remove(user.UserId.FormAddedEmailClaimKey());
    };
}

The EmailChangeDetectorService implements the IDatabaseStateChangeEvent interface and must be registered via the ASP.NET Core DI provider. The constructor of the application’s DbContext contains an extra parameter containing an IEnumerable<IDatabaseStateChangeEvent> that holds any registered classes that implement the IDatabaseStateChangeEvent interface.

4. Extra: Using IClaimsTransformation to update the user’s claims

I wanted to add this extra part to talk about services that implement the IClaimsTransformation interface to update the user’s claims. Like the middleware approach, IClaimsTransformation services run on every HTTP request, once the user has been authenticated.

I haven’t used IClaimsTransformation services, but when Mohammed Ahmed Hussien (twitter: @shoogn17) pointed them out I could see that they might be easier for people to use because:

  • You can use the normal constructor dependency injection to inject other services you need in your code.
  • Unlike the middleware approach, an IClaimsTransformation service only runs if there is a logged-in user, which means you don’t have to add code to ignore non-logged-in users.

This section from the Microsoft docs gives you a simple example of how to create and register an IClaimsTransformation service.
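To show the shape of such a service, here is a minimal sketch following the Microsoft docs pattern. The class name and the hard-coded claim value are illustrative assumptions; a real version would read the email from the FileStore cache / database as in example 3.

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;

//A minimal IClaimsTransformation sketch; the claim value is a placeholder
public class AddEmailClaimsTransformation : IClaimsTransformation
{
    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        //TransformAsync can be called multiple times, so make it idempotent
        if (principal.HasClaim(c => c.Type == ClaimTypes.Email))
            return Task.FromResult(principal);

        var clone = principal.Clone();
        var identity = (ClaimsIdentity)clone.Identity!;
        identity.AddClaim(new Claim(ClaimTypes.Email, "user@example.com"));
        return Task.FromResult(clone);
    }
}

//Registered in Program.cs with:
//builder.Services.AddScoped<IClaimsTransformation, AddEmailClaimsTransformation>();
```

Note that the service clones the incoming ClaimsPrincipal rather than mutating it, which is the pattern the Microsoft docs recommend.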

I haven’t compared the middleware and IClaimsTransformation approaches for performance, but I would expect both of them to be fast, assuming your code doesn’t access a database on every HTTP request. (I found a couple of articles about IClaimsTransformation which have database accesses on every HTTP request. I don’t recommend that for performance reasons, which is why I use the FileStore cache to keep the claim update fast.)

Conclusions

This article gives a detailed look at three ways to update the claims of a logged-in user in an ASP.NET Core application. The three examples provide a wide range of approaches that cover most claim update situations. And each example can be tweaked to make it perform better for specific changes: for instance, the first example updates all logged-in users, but it could be changed to be more selective about which users are updated.

I start with the cookie-only approach, which I used many years ago on my first ASP.NET Core application for a client. It’s pretty simple and performs very well when changes are infrequent. I have used this for many years, including in clients’ applications, and it is well tried and tested. The only change I have added in this article is the use of the FileStore distributed cache, which makes it a bit faster.

For many years I didn’t have a viable solution for changing the claims when using JWT Token authentication, which is a problem as many frontend frameworks work better with a JWT Token. It wasn’t until I built the FileStore distributed cache, which has a very fast read time, e.g. ~25 nanoseconds (see the FileStore distributed cache benchmark for full data), that I found an approach with good per-HTTP-request performance.

Examples 2 and 3 offer two approaches that work with both cookie and JWT Token authentication, with their implementations almost the reverse of each other: example 2 calculates the updated claims within the database event, while example 3 calculates the claim in the middleware code. These different approaches also provide different performance characteristics – see the performance notes in the list of the three approaches in the introduction section for more on this.

NOTE: These three examples are in the AuthP repo’s example applications and can be tried by running the various ASP.NET Core applications. The “update user claims via cookie event” version can be found in AuthP’s Example4 hierarchical multi-tenant application and AuthP’s Example6 sharding multi-tenant application. The last two examples work with a JWT Token, so both middleware versions are in AuthP’s Example2 WebApi application that uses JWT Token authentication.

Happy coding.

How to take an ASP.NET Core web site “Down for maintenance”

Last Updated: October 20, 2022 | Created: September 20, 2022

If you have an e-commerce or business web app used by lots of users, then you really don’t want that app to be “down” (e.g. “site offline” or “site not found”) because it’s bad for business. But at the same time some database changes are just too complex to allow users to access the database while the data is being changed. This article describes a way to momentarily divert users while a database is changed, which means that the database change code has exclusive access, and any change has the smallest effect on your logged-in users.

I designed this approach for multi-tenant applications, especially when using sharding. In these sorts of applications a single tenant might need changing or moving, and the code to do that needs exclusive access – see this Microsoft article which describes the split and merge processes, which are two examples of changes that need exclusive access.

This article is part of the series that covers .NET multi-tenant applications in general. The other articles in “Building ASP.NET Core and EF Core multi-tenant apps” series are:

  1. The database: Using a DataKey to only show data for users in their tenant
  2. Administration: different ways to add and control tenants and users
  3. Versioning your app: Creating different versions to maximise your profits
  4. Hierarchical multi-tenant: Handling tenants that have sub-tenants
  5. Advanced techniques around ASP.NET Core Users and their claims
  6. Using sharding to build multi-tenant apps using EF Core and ASP.NET Core
  7. Three ways to securely add new users to an application using the AuthP library
  8. How to take an ASP.NET Core web site “Down for maintenance” (This article)
  9. Three ways to refresh the claims of a logged-in user

TL;DR; – Summary of this article

  • The feature described solves a problem that can arise in multi-tenant applications: it can temporarily stop users from accessing a tenant’s data while a complex change is applied to the tenant data. A “complex change” might be moving a tenant’s data to another database.
  • The solution uses ASP.NET Core’s middleware to intercept every HTTP request and check that the data that the user might access isn’t “down”, i.e. that the data is being changed and mustn’t be accessed. If the data the user uses is “down” they are diverted to a “down for maintenance – back soon” page.
  • Because the middleware is called on every HTTP request, I have used the FileStore distributed cache, which has a read time of ~25 ns, which means this feature doesn’t slow down the application.
  • I have implemented the code in version 3.4.0 of my open-source AuthPermissions.AspNetCore library – see the “Down for Maintenance” documentation. But the design and code of this feature can be copied to any ASP.NET Core application.

Setting the scene – why did I need this feature

I have a library called AuthPermissions.AspNetCore (referred to as AuthP in this article) that helps developers to build complex multi-tenant applications, and it includes sharding, that is, each tenant has their own database. One of the best ways to manage lots of databases is Azure SQL Server elastic pools, but the suggested elastic pool support library is not supported any more. So, if I wanted to use SQL Server elastic pools, then I needed to build code that implements the split-merge process.

I had built most of the features needed, like defining a tenant and the keys for each tenant and sharding, in version 3.0.0 of the AuthP library, but the last missing feature was the ability to stop users from accessing a tenant while it is changed / moved (I use the term move for both split and merge). That’s because if a user is accessing the tenant data at the same time as a change / move, then the user might get the wrong data or, more crucially, it can cause data loss during a move.

The diagram below shows the process I need to build if I want to successfully change / move a tenant’s data while the application is still running. Note that only tenant users linked to “tenant 123” are diverted, while users not linked to “tenant 123” work normally.

NOTE: In the AuthP library the keys to a tenant’s data are held in the user’s claims, which means that after a change / move the user’s tenant claim(s) need updating. The AuthP library has a feature called “update claims on tenant change” – click the link to go to the documentation.

It turns out that the way to implement this “down” process is to use ASP.NET Core’s middleware. By adding an extra middleware in the correct place you can intercept a user and divert them to another page / url while a move / change is in action. I call a divert a “down” because the tenant is “down for maintenance” while the change / move is being executed.

The downside of adding the extra middleware is that the code is called on every HTTP request. This means the middleware needs to be fast, otherwise you will slow down your whole application for a few, infrequent change / move diverts. I solved this by creating the FileStore distributed cache, which has a very fast read time (e.g. ~25 ns).

Read on to see how this works and how you could use it.

Design aims: what database changes do I want to cover?

The main “down” feature is temporarily diverting users accessing a tenant database while a change / move is being applied, but I also added some extra diverts, which are listed below:

  • Manual, application “down”: Allows an admin user to manually “down” the whole application. Every user apart from the admin who took the app “down” will be diverted to a page with an explanation and expected time when the app will be available.
  • Manual, tenant database “down”: Allows an admin user to manually “down” a tenant database, thus diverting all users linked to the tenant database to a page saying, “stopped by admin”. Access to the tenant can be restored by an admin manually removing this “down”.
  • Tenant database Delete: This permanently diverts all users linked to the deleted tenant to a page saying, “the tenant is deleted”. This is a permanent divert, but it can be removed manually. 

Here is a diagram that shows how the “down for maintenance” feature can be implemented in ASP.NET Core.

The rest of the article describes each step in “down for maintenance” feature, with references to the code in my AuthP library. The steps are:

  1. Startup: Registering the services
  2. Adding a StatusController (or an equivalent Web API)
  3. Using the ISetRemoveStatus service to set / remove a “down” state
  4. Understanding the “down for maintenance” middleware
  5. Other things to consider when moving a tenant database

1. Startup: Registering the services

There are two parts to setting up the “down for maintenance” feature:

  • Registering the “down for maintenance” services
  • Adding the “down for maintenance” middleware.

Both parts are applied in the ASP.NET Core Program / Startup code. First is the registering of the FileStore cache, which holds the various “down” statuses, and the SetRemoveStatus class, which provides simple methods to add / remove “down” statuses. The code below is added in the startup section that registers services with the .NET dependency injection provider.

//previous code left out
builder.Services.AddDistributedFileStoreCache(options =>
{
    options.WhichVersion = FileStoreCacheVersions.Class;
}, builder.Environment);

builder.Services.AddTransient
     <ISetRemoveStatus, SetRemoveStatus>(); 

The “down for maintenance” middleware is added in the “app” part of the ASP.NET Core startup code – see the highlighted line that adds the extra middleware.

var app = builder.Build();
//other app code left out

app.UseAuthentication();
app.UseAuthorization();
app.UseDownForMaintenance();

//other code left out

The important thing is that the “down for maintenance” middleware is added AFTER the UseAuthorization method. That’s because the “down for maintenance” middleware needs access to the user’s claims.

2. Create a Controller / web APIs to handle the “down for maintenance”

You need pages / APIs to handle the following:

  • For the admin users
    • Look at all the current “downs” and have the ability to remove any
    • Manually set the app “down” (with messages for the users)
    • Manually set a tenant “down”
  • For diverted users
    • App Down
    • Tenant down while being updated
    • Tenant down by admin
    • Tenant is deleted

In the Example4 web site (hierarchical tenant design) and Example6 web site (single-level + sharding) I have a controller called StatusController that contains the actions / pages listed above. Please look at Example4’s StatusController for an example of what you need to create.

NOTE: the diverted pages are hard coded into the RedirectUsersViaStatusData class, while the controller’s name can be changed. If you want to have different urls for the diverted pages, then you need to copy the code and register your version of the RedirectUsersViaStatusData class.

3. Using the ISetRemoveStatus service to set / remove a “down” state

The SetRemoveStatus class contains the code to set, remove and display the “down” statuses in the FileStore distributed cache. There are many types of diverts, and this service creates the cache key which defines the type of divert that the user should be diverted to.

The AppDown divert is easy because it has one divert, but the tenant divert is more complex because a) it has three divert types and b) a divert is unique to a tenant. Each “down” entry in the FileStore distributed cache has a unique key name, which allows you to have multiple “downs” at once. And in the case of a tenant down, the FileStore entry’s value is the tenant key, which is used to detect if the user is linked to a tenant that is in a “down” state.
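To illustrate the idea of type-carrying key names, here is a small sketch of key-forming helpers. The key layout shown is a hypothetical one for illustration; the actual key strings used by the AuthP library are different.

```csharp
using System;

//Hypothetical key layout; the actual AuthP key strings differ
public static class DownStatusKeys
{
    public const string AppDownKey = "AppStatus-Down";
    private const string TenantPrefix = "TenantStatus-";

    //e.g. FormTenantDownKey("Update", 123) -> "TenantStatus-Update-123"
    public static string FormTenantDownKey(string downType, int tenantId)
        => $"{TenantPrefix}{downType}-{tenantId}";

    //the middleware can quickly decide the divert type from the key's prefix
    public static bool IsTenantDownKey(string cacheKey)
        => cacheKey.StartsWith(TenantPrefix, StringComparison.Ordinal);
}
```

Because the divert type is encoded in the key itself, the middleware can filter the “down” entries with cheap string-prefix checks rather than deserializing the values.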

The ISetRemoveStatus service makes it easy for a developer to wrap their change / move code with a “down” at the start and remove the “down” at the end. The code below shows an example of how the ISetRemoveStatus service would work, with the “down” and remove “down” code highlighted.

[HttpPost]
[ValidateAntiForgeryToken]
[HasPermission(Example6Permissions.MoveTenantDatabase)]
public async Task<IActionResult> MoveDatabase(
    ShardingSingleLevelTenantDto input)
{
    var removeDownAsync = await _upDownService
        .SetTenantDownWithDelayAsync(
              TenantDownVersions.Update, input.TenantId);
    var status = await _authTenantAdmin
        .MoveToDifferentDatabaseAsync(input.TenantId, 
              input.HasOwnDb, input.ConnectionName);
    await removeDownAsync();

    return status.HasErrors
        ? RedirectToAction(nameof(ErrorDisplay),
              new { errorMessage = status.GetAllErrors() })
        : RedirectToAction(nameof(Index), 
              new { message = status.Message });
}

As you can see, you define the type of tenant change via the TenantDownVersions enum. The ISetRemoveStatus service handles creating the key name for the actual “down” entry in the FileStore distributed cache. The “down” entry key string is designed to make finding / filtering the “down” values quick, so the key string is a bit complex. The figure below shows the various combinations of key names that a) define what type of divert it is, and b) provide a unique name for each tenant.

NOTE: For a tenant “down” entry the value is the tenant’s unique key, while for the AppDown the value contains a message, expected time, and UserId of the user that “downed” the whole app.

4. Understanding the “down for maintenance” middleware

The middleware code (see the RedirectUsersViaStatusData class) is called on every HTTP request, and its job is to quickly let through a user if there isn’t a “down” status that affects the current user. There are three stages in this middleware to cover each part of the filter. They are:

NOTE: I use the term admin user (see this link) to define a user who is managing the application. These types of users a) have access to high-level admin features and b) aren’t linked to a tenant.

STAGE 1: Allowed URLs get through

The middleware allows two types of URLs.

  • You can login and logout. I added this after I “downed” the app and then reran it, at which point I couldn’t log in to remove the “App down”!
  • I allow access to the Status controller. This allows an admin user to manually turn off a “down” if anything goes wrong.

STAGE 2: Handle AppDown

The AppDown feature stops all users from using the application’s features, apart from the admin user that “downed” the app. This means that the admin user can check / fix the problem before removing the “down” on the app.

This feature is there for situations where the application’s software or data can’t be updated by the normal deploy / migrate approach. You will rarely need the AppDown feature, but it’s there for emergencies.

STAGE 3: Handle Tenant “down”

The main usage of the middleware is managing changes to a tenant’s data, and the code uses the start of the “down” key to detect which type of divert is needed. The three types are:

  • Tenant down while being updated
  • Tenant down by an admin use (known as tenant “manual down”)
  • Tenant is deleted (this stops users trying to access a tenant that doesn’t exist)

NOTE: An example of the code to take a tenant “down” while being updated can be found in section 3.

The middleware code isn’t complex, but it’s a bit hard to follow, so I have provided a flowchart to show how the three stages are handled. The important thing is that the middleware is very fast (via the FileStore distributed cache) at letting through users when no “down” is active.

NOTE: The RedirectUsersViaStatusData class has comments starting with the three STAGES shown in the flowchart.

5. Other things to consider when moving a tenant database

The tenant “Down for Maintenance” feature solves the most complex issue of ensuring that the tenant data isn’t accessed while the data is moved. But there are some extra issues you need to consider, which the AuthP library already has solutions for. The issues are:

  1. Updating the tenant user’s DataKey claims on a move
  2. An internal hierarchical move needs to “down” two parts of the tenant data
  3. The admin access to tenant data feature needs extra code in the middleware

5.1. Updating the tenant user’s DataKey claims on a move

If you are moving a database in a sharding multi-tenant application or moving data in a hierarchical multi-tenant application, then the information used by the user to access the tenant data will change. Therefore, you MUST update the information used by the user to access the tenant data.

In the AuthP library the user’s key to a tenant’s data is held in the user’s claims, which makes the user access very fast (see this section of an earlier article). But that means that the tenant claims need to be updated when the DataKey changes, and AuthP has a feature that detects a change to the tenant DataKey parts and then makes sure all the logged-in users have their claims updated – see the AuthP “update claims on tenant change” documentation on how this works.

5.2. An internal hierarchical move needs to “down” two parts of the tenant data

The AuthP hierarchical multi-tenant feature has a move option where a section of the hierarchical data can be moved to another part of the hierarchy – known as the parent (see this example). In this case you need to “down” both the section being moved and the section it is moved to.

For this reason, the SetTenantDownWithDelayAsync method has an optional parameter called parentId. If the parentId is not zero, then it will also “down” the parent during the hierarchical move. The code below shows this, with the extra parentId parameter highlighted.

var removeDownAsync = await _upDownService
    .SetTenantDownWithDelayAsync(
        TenantDownVersions.Update, input.TenantId, 
        input.ParentId);
var status = await _authTenantAdmin
    .MoveHierarchicalTenantToAnotherParentAsync
        (input.TenantId, input.ParentId);
await removeDownAsync();

5.3. The “admin access to tenant” data feature needs extra code in the middleware

The AuthP library provides a feature that allows admin / support users (i.e. users not linked to a tenant) to temporary gain access to a tenant’s data (see the admin access to tenant documentation for more information).

This is implemented by using a cookie to contain the tenant DataKey, but the “down for maintenance” middleware doesn’t contain code to handle that. Giving an admin user a way to access the tenant’s data is useful if a problem occurs in the change / move, but the admin must be aware of any tenant change / move and not try to access that tenant (or turn off the “admin access to tenant” feature).

Conclusions

Back in 2015 I wrote an article about how to take an ASP.NET MVC5 web site “Down for maintenance” and now in 2022 this article provides a version for an ASP.NET Core application. The basic approach of using middleware is the same, but this latest approach also contains features to handle multi-tenant applications.

Both the older ASP.NET MVC5 version and the latest ASP.NET Core are designed to be quick. This focus on high performance is because the code is run on every HTTP request. Both versions use a shared file to work across multiple instances of the web applications, for instance when you use Azure’s scale-out. But the new version has much more complex needs, with tenant-level “down” features, which required a more sophisticated approach, which is handled by the FileStore distributed cache acting as a fast-read / slow-write database.

With this feature added to version 3.4.0 of the AuthP library you can safely manage tenants while users are accessing your multi-tenant application.

How to turn an ASP.NET Core appsettings.json file into a fast-read database

Last Updated: September 22, 2022 | Created: September 15, 2022

This article describes a way to create a database using ASP.NET Core’s appsettings.json configuration feature. The big benefit of this approach is that the read is blisteringly fast, something like 1,000 times quicker than a SQL Server database query. The downside is that the write is relatively slow (e.g. >1 ms per write), so this approach is best for situations where you have lots more reads than writes. I use this approach in ASP.NET Core applications where certain data is read on every HTTP request.

TL;DR; – Summary of this article

  • There is a way to use the ASP.NET Core Configuration feature to create a type of database that has a much faster (~1,000 times faster) read than a typical database, but the write is slower than a database (small database = ~2 times slower, 400kb size database = ~10 times slower).
  • A good use for this type of database is where you have lots of reads and rare writes. I created this approach to handle a database query on every HTTP request.
  • This article describes the five steps to implement a database using ASP.NET Core Configuration feature.
  • There is a library called Net.DistributedFileStoreCache (shortened to FileStore cache) which provides a pre-built version of this approach. You might like to refer to these articles:
    • A .NET distributed cache with a ~25 nanosecond read time!
    • How to change/move databases on a live ASP.NET Core web site, which uses the FileStore cache as a database.

Setting the scene – why did I use an appsettings.json file as a database?

My AuthPermissions.AspNetCore library (shortened to AuthP) provides back-end code for building multi-tenant applications using ASP.NET Core and EF Core, and in version 3 of the AuthP library I added support for sharding. To implement sharding I needed to create a connection string that points to the database server + database on every HTTP request from a tenant user.

Also, there were a couple of extra features that I wanted my sharding implementation to support:

  • Should work with Azure’s SQL Server elastic pools. Azure elastic pooling provides a cost-effective way to have lots of databases (see this document on why this is useful).
  • Support geographically placed database servers to improve performance when you have users that are geographically spread out.
  • Good security: the connection strings contain the Username / Password of the servers, so for security reasons I store the connection strings in Azure.

The diagram below shows how the implementation of the sharding is changed to support these three extra features:

The sharding code gets the sharding data for a specific tenant, which contains the name of the connection string linked to a database server and the name of the database on that database server. From these two parts it forms the composite connection string needed to access the tenant database. This isn’t a complex procedure, but it does need to read in the sharding data (name of connection string and name of the database) on every HTTP request for a tenant user.
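As a sketch of that composite step (the method name and string handling here are assumptions, not the actual AuthP code), combining the two parts could look like this:

```csharp
using System;

//A sketch; the method name and string handling are assumptions
public static class ShardingConnectionHelper
{
    //serverConnectionString comes from the named connection string, e.g.
    //"Server=tcp:myserver;User Id=me;Password=secret"
    public static string FormTenantConnectionString(
        string serverConnectionString, string databaseName)
    {
        if (string.IsNullOrEmpty(databaseName))
            throw new ArgumentException("A tenant must have a database name",
                nameof(databaseName));

        return $"{serverConnectionString.TrimEnd(';')};Database={databaseName}";
    }
}
```

The point is that the per-request work is just a cache read plus a cheap string concatenation, with no database access.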

I could have gone with a database access, but I’m really trying to make this library very fast, so I started to look at ASP.NET Core’s Configuration features, i.e. appsettings.json and IOptions, because I know that the read of configuration data is really fast, as the data is cached by ASP.NET Core’s configuration code.

Therefore, I created an appsettings.json-type file which I could update and then used the Configuration IOptionsSnapshot<T> method to get the latest data from my settings file (see Microsoft Options Pattern docs for more info on this type of read). The diagram below shows the whole process.

This might seem very complex, but it’s:

  1. Very fast: something like 1,000 times quicker than using a database access.
  2. Secure: The connection string isn’t in any of your code or your claims.
  3. Doesn’t use a database: This means your tenant databases won’t be slowed by lots of small queries.

The rest of this article describes the steps needed to create a generic fast-read database by using ASP.NET Core Configuration feature. In the steps I show examples from the sharding feature described above, with links to the code in the AuthP’s open-source repo. That way you have working code examples of how I used this approach.

Steps to turn an appsettings.json file into a fast-read database

NOTE: I refer to the json file which will be used as database as the database json file in this article.

The steps to implementing a database json file are:

  1. Create a json file to act as a database
  2. Make sure the database json file isn’t overwritten
  3. Register the database json file on startup
  4. Write to the database json file within a distributed lock
  5. Use IOptionsSnapshot<T> to read the database json file

1. Create a json file to act as a database

The first thing to do is work out what data you need to store in the database json file for your application. Once you have decided on the data you need, you must implement the data using class(es) whose properties can be serialized / deserialized to json by .NET’s System.Text.Json library.

For my sharding settings I have a List of the class called DatabaseInformation, which has four properties (all of type string) that define the specific settings of each sharding server+database. The code below shows the type of json the sharding settings file would contain.

{
  "ShardingDatabases": [
    {
      "Name": "ShardWest1",
      "DatabaseName": "West1",
      "ConnectionName": "WestServer",
      "DatabaseType": "SqlServer"
    },
    {
      "Name": "ShardWest2",
      //… rest of the content has been left out 
    }
  ]
}
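To show what the backing class might look like, here is a sketch of a DatabaseInformation class matching the json above. The property names come from the json; the real AuthP class may have extra members or different defaults.

```csharp
// Sketch of the class behind the "ShardingDatabases" json array.
// Property names match the json above; the real AuthP class may differ.
public class DatabaseInformation
{
    public string Name { get; set; }           // unique name of this entry
    public string DatabaseName { get; set; }   // the database on the server
    public string ConnectionName { get; set; } // links to a connection string
    public string DatabaseType { get; set; }   // e.g. "SqlServer"
}
```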

NOTE: The name of the section / array used in your database json file must be unique across all the configuration json files.

2. Make sure the database json file isn’t overwritten

A normal appsettings.json file gets overwritten when an application is deployed. But because we want to use the json file as a database, you don’t want the file to be overwritten. I do two things to make sure the database json file isn’t overwritten.

First, I use a filename which includes the environment name, e.g. Development, Staging, Production, so in my implementation the filename is $"shardingsettings.{EnvironmentName}.json". This means that the filename used when developing the application can’t overwrite your Production database json file.

But the most important thing to do (and easy to forget) is to set the file’s “Copy to Output Directory” property to “Do not copy”. This stops the database json file being copied into your deployment. You can manually set this via the file’s properties, but I prefer to add an ItemGroup to the ASP.NET Core .csproj file, as shown below.

<ItemGroup>
	<Content Update="shardingsettings.Production.json">
		<CopyToOutputDirectory>Never</CopyToOutputDirectory>
	</Content>
</ItemGroup>

3. Register the database json file on startup

There are two parts to registering the database json file on startup. They are:

  1. Register the database json file to the ASP.NET Core’s Configuration
  2. Register your IOptions access via the Configure<T> service

3.1 Register the database json file to the ASP.NET Core’s Configuration

To register your database json file to be part of the Configuration you use the AddJsonFile method. The code below goes in the Program class and registers my shardingsettings file.

var shardingFileName = 
    $"shardingsettings.{builder.Environment.EnvironmentName}.json";
builder.Configuration.AddJsonFile(shardingFileName, 
      optional: true, reloadOnChange: true); 

You need to think about what happens when you first deploy using a database json file. In this case I set the optional parameter to true, which means the application can start without the file. If you use IOptionsSnapshot<T> (see the next subsection) it will return null if the database json file isn’t there or doesn’t have any json in it, but once you create the file the application will start tracking it and the IOptionsSnapshot<T> Value will be non-null.

NOTE: The other approach is to set the optional parameter to false and ensure that a json file exists. But if the optional parameter is false and the json file isn’t there, then the application will fail on startup. This means you need to create the file on startup if no file exists.
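A minimal sketch of that startup check might look like the code below (the file path handling and the empty-json content are assumptions, not AuthP’s actual code):

```csharp
// Hypothetical startup code: create an empty database json file if it is
// missing, so an optional: false registration won't fail on first deployment.
var shardingFilePath = Path.Combine(
    builder.Environment.ContentRootPath,
    $"shardingsettings.{builder.Environment.EnvironmentName}.json");
if (!File.Exists(shardingFilePath))
    File.WriteAllText(shardingFilePath, "{}"); // empty json object
```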

3.2 Register your IOptions access via the Configure<T> service

You must register a Configure<T> service, where T is the class which defines the json content of the database json file, to allow you to use the IOptions access to the data inside. This is done by binding a class to a specific section of the configuration.

In my shardingsettings file I use a collection of data, so my options class looks like this:

public class ShardingSettingsOption
{
    public List<DatabaseInformation> ShardingDatabases { get; set; }
} 

This is configured by the code below, which will look for a json array named ShardingDatabases in all the registered json files.

builder.Services.Configure<ShardingSettingsOption>
      (builder.Configuration);

4. Write to the database json file within a distributed lock

To update the data in the database json file you need to read in the current json, add your change and write back out. This read->update->write process is fairly easy to implement – Have a look at my AccessDatabaseInformation class for an example of what this would look like.

While the update part of the code is straightforward, we do need to handle simultaneous updates, because one update could overwrite another. This type of simultaneous update is rare, but because it can occur we need to handle it. This means we need to wrap the update process with some code that stops other updates from running until the current update has finished.

If you are only running one instance of your ASP.NET Core application, then you could use a .NET lock. But my library is designed for high-performance applications where multiple instances of the application are running at the same time (Azure calls this scale out), so I need a distributed lock. I use an excellent library called DistributedLock.

The DistributedLock library uses a global resource, such as a database, to form a lock across all the running instances. The code below (adapted from the Acquire section of the DistributedLock Readme) shows how to acquire and release a lock.

var myDistributedLock = 
     new SqlDistributedLock(name, connectionString); 
using (myDistributedLock.Acquire())
{
	//Run the read->update->write process within this lock
} // this releases the lock
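To make the read->update->write step concrete, here is a minimal sketch of what might run inside the lock. The file path, the use of the ShardingSettingsOption class and the change being applied are illustrative; see AuthP’s AccessDatabaseInformation class for the real code. It assumes `using System.Text.Json;`.

```csharp
// Illustrative read -> update -> write run inside the distributed lock.
// shardingFilePath and newDatabaseInfo are assumed to be set up elsewhere.
using (myDistributedLock.Acquire())
{
    var json = File.ReadAllText(shardingFilePath);
    var settings = JsonSerializer.Deserialize<ShardingSettingsOption>(json)
                   ?? new ShardingSettingsOption
                      { ShardingDatabases = new List<DatabaseInformation>() };
    settings.ShardingDatabases.Add(newDatabaseInfo); // the change to apply
    File.WriteAllText(shardingFilePath,
        JsonSerializer.Serialize(settings,
            new JsonSerializerOptions { WriteIndented = true }));
} // lock released; reloadOnChange will pick up the new file content
```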

5. Use IOptionsSnapshot<T> to read the database json file

Finally, you can access the information in the database json file via ASP.NET Core’s IOptionsSnapshot<T> method. The code below is a simplified version of AuthP’s ShardingConnections constructor. When the ShardingConnections service is created it uses the IOptionsSnapshot<T> method to get the data in the database json file, in this case my sharding settings file (see Microsoft Options Pattern docs for more info).

As I showed in the setting-the-scene section, using the IOptionsSnapshot<T> method in the code below reads in the current sharding settings.

private readonly ShardingSettingsOption _shardingSettings;
public ShardingConnections(
     IOptionsSnapshot<ShardingSettingsOption>
     shardingSettingsAccessor, AuthPermissionsOptions options,
     … other parameters left out)
{
    _shardingSettings = shardingSettingsAccessor.Value
        //If no sharding settings file, 
        //then add the default sharding setting
        ?? new ShardingSettingsOption
       {
           ShardingDatabases = new List<DatabaseInformation>
               { DatabaseInformation.FormDefaultDatabaseInfo(options) }
       };
}

Note that if the sharding settings file doesn’t exist the IOptionsSnapshot<T> Value will be null, and you need to work out what to do in that case. You could return the null, but often the best solution is to create an empty collection or similar. In AuthP’s sharding settings shown above, a new deployment always has a single, default DatabaseInformation, which is formed from the multi-tenant setup information.

Conclusions

Creating a fast-read / slow-write database using ASP.NET Core’s Configuration / IOptionsSnapshot<T> might not be the first approach you would think of for creating a database, but it works well in situations where you want very fast reads of data that rarely changes. For instance, the AuthP sharding feature is a very good fit for this approach because it needs two read queries (one to get the sharding data and another to get the database server connection string) on every HTTP tenant user request, with rare changes to the sharding data.

I also created a library called Net.DistributedFileStoreCache (shortened to FileStore cache) which implements a distributed cache. This library uses the same approach as ASP.NET Core Configuration / IOptionsSnapshot<T>, but uses .NET’s FileSystemWatcher class instead of IOptionsSnapshot<T>. The FileStore cache has a ~25 ns. read time and a write time > 1 ms. – see the FileStore cache full performance figures here.

I use the FileStore cache in database mode in the article “How to change / move databases on a live ASP.NET Core web site” because this feature needs multiple reads on every HTTP request. Using the FileStore cache removes the extra ~1 ms. that a database access might take.

If nothing else you have learnt more about ASP.NET Core’s Configuration / IOptionsSnapshot<T>, and you have learnt a new way to store data with a different performance profile from a normal database.

Happy coding.

A .NET distributed cache with a ~25 nanosecond read time!

Last Updated: October 14, 2022 | Created: August 8, 2022

This article describes a .NET distributed cache library called Net.DistributedFileStoreCache (referred to as FileStore cache in this article). The FileStore distributed cache library is very quick to read a cache value (known as Get); for instance, the FileStore cache takes ~25 ns. to Get a cache value, while a SQL Server distributed cache would take at least 0.1 ms – that means the FileStore cache is 4,000 times faster on Gets! Typically, you don’t need that speed, but I have a situation where I needed to read lots of cache values in every HTTP request, so I built this FileStore cache.

The other positive of the FileStore cache is it’s very easy to set up and use. The FileStore cache design is based on ASP.NET Core’s appsettings.json files, so it stores the cache entries in a json file which is shared by all the instances of the application. This also means you don’t have to set up, or pay for, a database for your caching 😊.

The downside of the FileStore cache is it’s slower than database caches when adding a new cache value (known as Set). For instance, to Set a cache value on top of an existing 100 cache values takes ~1.5 ms., while a database would normally execute a Set in less than one millisecond. And it gets worse as the cache gets bigger, e.g. to Set a cache value on top of an existing 10,000 cache values takes ~9 ms. So, the FileStore cache library is useful when your application needs fast cache reads and only a few cache writes. Also, it’s NOT a good fit for large cached values, like an image, as the bigger the cache gets the slower the update will be.

The other (smaller) downside is the FileStore cache doesn’t implement the IDistributedCache’s SlidingExpiration feature, because that would make the read performance slow. But the FileStore cache does support the two AbsoluteExpiration versions.

Read on to understand how the FileStore cache works so you can decide if it would be useful to your application.

TL;DR; – Summary of this article

  • The FileStore cache library provides a .NET distributed cache that has very fast reads – it only takes ~25 ns. to Get one entry in a cache containing 10,000 entries, but it is slow when you use Set / Remove (100 entries = ~1.3 ms., 1,000 entries = ~1.7 ms., 10,000 = ~7.9 ms.)
  • The FileStore distributed cache relies on a json file that all the instances of the application can access, just like your ASP.NET Core appsettings.json files (see Azure’s Scale-Out approach). This makes it easy to set up because it just relies on a json file in your application’s directory, so you don’t have to set up, or pay for, a database as you would for a SQL Server cache, Redis cache, or NCache cache.
  • The main FileStore cache implementation has a value of type string, mainly because the values are stored in a json file. This Version is known as String.
  • There are three other versions with different cache value types / interfaces; they convert their cache value to a string and then call the String version:
    • Version=Class: This inherits the String version and adds three method types that serialize a class to a json string. Useful when you want to store complex data in the cache.
    • Version=Bytes: This has a value of type byte[] with extra features over the IDistributedCache interface.
    • Version=IDistributedCache: The Bytes version is accessed via the IDistributedCache interface. Useful if you have existing caching using the IDistributedCache interface.
  • Net.DistributedFileStoreCache is an open-source library under the MIT license and a NuGet package. The documentation can be found in the GitHub wiki.

Useful links to general info on caching

Setting the scene – why and how I built the FileStore cache library.

The articles / documentation about software caches talk about improving performance and scalability. For instance, in my book “Entity Framework Core in Action” I use two different cache approaches to improve the performance of displaying and searching first 100,000 books and then ½ million books (see this article for the details and a link to a video).

But I want to talk about another use of caches, especially distributed caches, where they are used to manage application data that changes and that all the instances of the application need to access. I have used a distributed cache in this way before.

But recently I used the ASP.NET Core appsettings.json approach as a simple distributed cache when adding sharding to the multi-tenant part of the AuthPermissions.AspNetCore library. In this case I configured a separate json file containing the list of database information and used ASP.NET Core’s IOptionsSnapshot service to read it. This means I can update the json file and then get the latest data when I use IOptionsSnapshot – and IOptionsSnapshot is very quick, which is good as that data is accessed on every HTTP request.

IOptionsSnapshot is great, but it turns out there are some issues to fix if it’s going to properly implement a distributed cache. For instance, if two instances update the json file at the same time, then there would be a problem. The best result would be that one instance throws an IOException, but the worst result is that one of the changes is lost and you aren’t aware of it. This breaks the “coherent (consistent) across requests to multiple servers” rule of a distributed cache.

I tried to fix these problems using IOptionsSnapshot, but I found a number of situations that I couldn’t fix. The idea is good though, so I went back to the basic .NET features needed to make a true distributed cache. The two .NET features that I used to make a true distributed cache with fast reads are:

  1. File Locking: locking a file during an update means no other processes can read or write the file until the updated file has been saved. If another process tries to access a locked file, then it throws an exception, which the library catches and retries the access after a delay. This makes sure that the FileStore cache implements the “coherent (consistent) across requests to multiple servers” rule.
  2. FileSystemWatcher: the FileSystemWatcher class will alert each instance of the application that the cache file has been changed, which means the (hidden) local static cache is out of date. The next time the application code accesses the FileStore cache it will read the file (within a lock) and update the local static cache before executing the application’s request.
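The two features above can be sketched as follows. This is a simplified illustration of the design only, not the library's actual code; the cache file name pattern and the dictionary-based file format are assumptions.

```csharp
using System.Text.Json;

// Simplified sketch: a FileSystemWatcher invalidates a local static cache
// when the shared cache file changes; the next read reloads the file while
// holding an exclusive file lock.
public static class LocalCacheSketch
{
    private static readonly string CacheFilePath = Path.Combine(
        AppContext.BaseDirectory, "FileStoreCacheFile.Production.json");
    private static Dictionary<string, string>? _localCache; // null = stale
    private static readonly object ReloadLock = new();
    private static readonly FileSystemWatcher Watcher = CreateWatcher();

    private static FileSystemWatcher CreateWatcher()
    {
        var watcher = new FileSystemWatcher(
            AppContext.BaseDirectory, "FileStoreCacheFile.*.json");
        // Invalidate only - the reload happens on the next cache access,
        // which also copes with FileSystemWatcher's duplicate change events
        watcher.Changed += (_, _) => _localCache = null;
        watcher.EnableRaisingEvents = true;
        return watcher;
    }

    public static string? Get(string key)
    {
        var cache = _localCache;
        if (cache == null)
            lock (ReloadLock)
                cache = _localCache ??= ReadCacheFile();
        return cache.TryGetValue(key, out var value) ? value : null;
    }

    private static Dictionary<string, string> ReadCacheFile()
    {
        // FileShare.None takes an exclusive lock while the file is read
        using var stream = new FileStream(CacheFilePath, FileMode.Open,
            FileAccess.Read, FileShare.None);
        return JsonSerializer.Deserialize<Dictionary<string, string>>(stream)
               ?? new Dictionary<string, string>();
    }
}
```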

The diagram below shows the four steps that happen when the FileStore cache goes through an update (e.g. Set or Remove).

NOTE: Even if you have only one application instance the four steps are still run, as a single application can be running multiple processes at the same time.

DESIGN NOTES

  • The FileSystemWatcher class is known to create two events when a file is changed, and this can happen in the FileStore cache. For that reason, the library invalidates the local cache and then waits for another access to the FileStore cache: that way it’s less likely that the read of the FileStore cache file will run twice.
  • For performance reasons, when a cache entry’s AbsoluteExpiration has expired it will return null when accessed, but the cache entry is still in the FileStore cache file. Once any cache update (i.e. set, remove, or reset) is executed, then the expired cache entries still present are removed. This keeps the read performance of the library high.
  • As stated earlier, the FileStore cache doesn’t implement the IDistributedCache’s SlidingExpiration, to keep the performance of the library high. I could have used a similar approach to the AbsoluteExpiration in the last design note, but it would be based only on local accesses.

The end result is that the Net.DistributedFileStoreCache library is a cache that meets the requirements of a distributed cache. It also contains some extra useful features over the default IDistributedCache interface (described later).

The four different FileStore cache interfaces

The primary .NET interface for distributed caches is IDistributedCache, which has only four method types: Set, Get, Remove and Refresh, plus async versions of each, with the key being a string and the value a byte[]. For instance, the sync Get method has the following signature:

public byte[] Get(string key);

But because I am storing the data in a json file, it’s much better to have the value of type string, so the primary code is in a version called DistributedFileStoreCacheString. But to make the library useful to users already using the IDistributedCache, I added other versions where the value is byte[]. Also, I added a class that adds a few extra features to the primary …String version. The figure below gives you a summary of the four interfaces the FileStore cache library provides.

The table below shows which interface is registered, as defined by the WhichVersion parameter in the FileStore’s options:

Version name      | Registered Interface             | Value type
String            | IDistributedFileStoreCacheString | string
Class             | IDistributedFileStoreCacheClass  | string + class
Bytes             | IDistributedFileStoreCacheBytes  | byte[]
IDistributedCache | IDistributedCache                | byte[]

Registering the distributed FileStore Cache

The AddDistributedFileStoreCache extension method within the FileStore cache library allows you to register the FileStore cache version you want to use as a service. It also sets up / finds the FileStore cache file name and location using your environment information, which is simpler (and cheaper) compared to using a distributed cache that uses a database. The code below shows how you would do this in an ASP.NET Core Program startup.

builder.Services.AddDistributedFileStoreCache(options =>
    {
        options.WhichVersion = FileStoreCacheVersions.String;
    }, builder.Environment);

The registration code above registers a service with the interface IDistributedFileStoreCacheString. It also uses the ASP.NET Core environment class to create a different cache filename based on your EnvironmentName; for instance, the file name would be “FileStoreCacheFile.Development.json” in a development environment, and the file is stored in the directory defined by the environment’s ContentRootPath. This makes sure that your cache file is in the right place and your development cache file doesn’t interfere with your production cache file.

NOTE: Go to the Register the FileStore Cache documentation for more information on the setup and the various options you might need.

The performance of the distributed FileStore Cache

I measured the performance of the FileStore cache String version using the excellent BenchmarkDotNet library. My performance tests cover both reads and writes of a cache that already has 100, 1,000 and 10,000 cached values in it. The performance tests were run on an Intel Core i9-9940X CPU 3.30GHz.

The full performance figures are available in the repo’s README file but here is a summary.

Read performance

  • Reading a single cache value took ~25 ns at all three levels of cache size.
  • Getting a Dictionary of ALL the cache key/values took ~80 ns at all three levels of cache size.

Write performance

The time taken to add a cache value goes up as the size of the cache grows – see the table below. This makes sense because, unlike a database, you are reading and then writing ALL the cache values to a file. The async versions are slower than the sync versions, but they do release a thread while reading and writing.

Cache number / size    | 100 / 4.6 kb | 1,000 / 40.1 kb | 10,000 / 400.0 kb
AddKey (Set)           | 1.3 ms.      | 1.7 ms.         | 7.9 ms.
Add 100 (SetMany)      | 1.4 ms.      | 1.7 ms.         | 8.3 ms.
AddKeyAsync (SetAsync) | 1.7 ms.      | 2.3 ms.         | 9.0 ms.

NOTE: If you want to make sure your use of the FileStore cache is the best it can be, then I recommend you read the Tips on making your cache fast document.

Using the distributed FileStore Cache

If you are familiar with any of .NET’s distributed cache libraries, like the Distributed SQL Server cache, Distributed Redis cache, Distributed NCache cache etc., then it works the same, but without the hassle / cost of setting up a database.

The biggest difference if you use the String version is that the value is a string instead of byte[]. But each FileStore version has some other differences, as detailed below:

  • All the versions apart from the IDistributedCache version have two extra method types:
    • GetAllKeyValues/Async methods, which return all the key/values as a dictionary. I find this useful if I want to load multiple cache values, as it only takes ~85 ns.
    • ClearAll, which drops all the key/values. Useful for testing, and stackoverflow has many questions on how to clear out all the key/values in a cache.
  • To keep the read performance high the FileStore cache doesn’t support SlidingExpiration. This has three effects:
    • All four versions will throw an exception if you try to set the SlidingExpiration option via the DistributedCacheEntryOptions parameter in Set/SetAsync.
    • The two FileStore cache versions that use the byte[] type for the value have a Refresh / RefreshAsync method, but calling them will throw an exception. This makes these two versions compatible with the IDistributedCache interface.
    • The FileStore cache versions that use a string type for the value do not contain the Refresh / RefreshAsync methods.

The code below uses the String version and shows the use of the GetAllKeyValues and Set methods.

public class HomeController : Controller
{
    private readonly IDistributedFileStoreCacheString _fsCache;

    public HomeController(IDistributedFileStoreCacheString fsCache)
    {
        _fsCache = fsCache;
    }

    public IActionResult Index()
    {
        var logs = new List<string>();
        var allKeyValues = _fsCache.GetAllKeyValues();
        foreach (var key in allKeyValues.Keys)
        {
            logs.Add($"Key = {key}, Value = {allKeyValues[key]}");
        }

        return View(logs);
    }

    public IActionResult AddCache()
    {
        return View();
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult AddCache(string key)
    {
        _fsCache.Set(key, DateTime.Now.ToString("s"), null);
        return RedirectToAction("Index");
    }
}

NOTE: The 1.1.0 version of FileStore cache adds the following extra commands:

  • SetMany / SetManyAsync, which add multiple entries in one go – quicker than individual Set / SetAsync calls.
  • The ClearAll method has optional parameters to add multiple entries into the cache after the cache has been cleared.
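As an illustration, a SetMany call might look like the snippet below. The keys/values are made up, and the trailing options parameter is an assumption based on the Set signature shown earlier.

```csharp
// Hypothetical SetMany usage: add several entries in one file write,
// which is quicker than repeated Set calls.
_fsCache.SetMany(new List<KeyValuePair<string, string>>
{
    new("last-deploy", DateTime.UtcNow.ToString("s")),
    new("feature-flag-x", "enabled"),
}, null);
```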

Looking at the IDistributedFileStoreCacheClass version

The reason for building the FileStore cache library was to improve my AuthPermissions.AspNetCore library and its handling of multi-tenant applications. In this library I have some data in the appsettings.json files which would be better in a distributed cache, but they have multiple parameters.

I saw this article, which suggested some extension methods to turn a class into a json string before saving that json string in a cache, and I thought that would be really useful for me. Rather than creating extension methods, I built a small class called DistributedFileStoreCacheClass that inherits the string FileStore cache version and adds the following method types (the first three have an async version):

NOTE: The <T> part has the following where clause: where T : class, new()

  • SetClass<T>(string key, T yourClass, …) – this serializes the “yourClass” into a json string and saves the string to the FileStore cache.
  • SetManyClass<T>(List<KeyValuePair<string,T>> manyEntries, ...) – this adds multiple entries in one go, which is quicker than individual SetClass calls.
  • T? GetClass<T>(string key) – this reads in the string from the string FileStore cache and deserializes the string back to a class.
  • T? GetClassFromString<T>(string? jsonString) – this is useful if you want to obtain a class from a cache entry obtained by the GetAllKeyValues/Async methods.

NOTE: Be aware that the cache does not hold the class type in the cache. It is up to you to use the same <T> on the GetClass as you used in the SetClass method.

Here is an example from my unit tests to show how it works:

[Fact]
public void DistributedFileStoreCacheSetClass_JsonClass_Example()
{
    //SETUP
    _distributedCache.ClearAll();

    //ATTEMPT
    _distributedCache.SetClass("test", new JsonClass2 { MyInt = 3, 
        MyClass1 = new JsonClass1 { MyInt = 1, MyString = "Hello" } }, null);

    //VERIFY
    var jsonClass2 = _distributedCache.GetClass<JsonClass2>("test");
    jsonClass2.ShouldBeType<JsonClass2>();
    jsonClass2.ShouldNotBeNull();
    //… other tests left out
}

The string below shows how your class is turned into a string to save in the cache.

"{\"MyClass1\":{\"MyInt\":1,\"MyString\":\"Hello\"},\"MyInt\":3}"

The byte[] value versions of the distributed FileStore cache

Because many existing usages of a distributed cache use byte[] for the value, I created two versions that support byte[] for the value; these convert the byte[] value into a string and then call the String version to access the FileStore cache.
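One simple way to do that byte[] <-> string conversion is Base64 encoding, sketched below. This illustrates the idea only; the library’s actual encoding may differ.

```csharp
// Illustrative byte[] <-> string conversion for a json-backed string cache.
public static string BytesToCacheValue(byte[] value)
    => Convert.ToBase64String(value);   // json-safe string form

public static byte[] CacheValueToBytes(string cachedValue)
    => Convert.FromBase64String(cachedValue);
```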

There is one DistributedFileStoreCacheBytes class, but it can be registered with either the IDistributedCache interface or the IDistributedFileStoreCacheBytes interface, which contains the extra methods GetAllKeyValues/Async and ClearAll.

Conclusions

I needed a distributed cache with a very fast read because I wanted to read lots of cache values on every HTTP request. A database-based distributed cache could have taken a millisecond or more, but I remembered how fast the ASP.NET Core IOptionsSnapshot<T> was at returning data from the appsettings.json file.

After quite a bit of work, I found I couldn’t build a distributed cache using ASP.NET Core’s IOptionsSnapshot<T>. But it did guide me to a design that a) implements a true distributed cache, b) is blisteringly fast on reads, and c) is very easy to set up. From my point of view the slower update of the cache (e.g. Set, Remove) is a pity, but in my situation adding a new cache value happens very infrequently.

In the end I put in extra features, like the byte[] value versions, because there must be users already using the IDistributedCache interface. To my mind the FileStore cache’s simple setup and no extra costs (for Redis, SQL Server, etc.) may be more attractive than its super-fast read! Good luck, and do let me know how you get on with this library if you use it.

Happy coding.