Updating many-to-many relationships in EF Core 5 and above

Last Updated: March 23, 2021 | Created: January 14, 2021

EF Core 5 added a direct many-to-many relationship with zero configuration (hurrah!). This article describes how to set this direct many-to-many relationship and why (and how) you might want to configure this direct many-to-many. I also include the original many-to-many relationship (referred to as indirect many-to-many from now on) where you need to create the class that acts as the linking table between the two classes.

You might be mildly interested that this is the third iteration of this article.  I wrote the first article on many-to-many on EF6.x in 2014, and another many-to-many article for EF Core in 2017. All of these got a lot of views, so I had to write a new article once EF Core 5 came out. I hope you find it useful.

All the information and the code come from Chapter 2 of the second edition of my book, Entity Framework Core in Action, which covers EF Core 5. In this book I build a book selling site, called Book App, where each book has two many-to-many relationships:

  1. A direct many-to-many relationship to a Tag entity class (I refer to classes that EF Core maps to a database as entity classes). A Tag holds a category (for instance: Microsoft .NET or Web) which allows users to pick books by their topic.
  2. An indirect many-to-many relationship to the Author entity class, which provides an ordered list of Authors on the book, for instance: by Dino Esposito, Andrea Saltarello.

Here is an example of how it displays each book to the user – this is a fictitious book I used for many of my tests.

Overall summary and links to each section summary

For people who are in a hurry I have ended each section with a summary. Here are the links to the summaries:

The overall summary is:

  • Direct many-to-many relationships are super simple to configure and use, but by default you can’t access the linking table.
  • Indirect many-to-many relationships take more work to set up, but you can access the linking table. This allows you to put specific data in the linking table, such as the order in which you want to read entries back.

Here are the individual summaries (with links).

NOTE: All the code you see in this article comes from the companion GitHub repo to my book Entity Framework Core in Action. Here is a link to the directory the entity classes are in, and many of the code examples come from the Ch03_ManyToManyUpdate and Ch03_ManyToManyCreate unit test classes.

Setting the scene – the database and the query

Let’s start by seeing the finished database and how the query works. You can skip this, but having an overall view of what is going on may help when you get to the specific part you are interested in. Let’s start with the database.

This shows the two many-to-many relationships – both have a linking table, but the direct many-to-many has its linking table created by EF Core.

Next, let’s see the many-to-many queries and how they relate to the book display in the figure below.

You can see that the Book’s Authors (top left) needs to be ordered – that Order property (a byte) is in the linking entity class. But for the Book’s Tags (bottom left), which don’t have an order, the query is much simpler to write because EF Core will automatically add the extra SQL needed to use the hidden linking table.

Now we get into the detail of setting up and using both of these types of many-to-many relationships.

Direct many-to-many setup – normal setup.

The setting up of the direct many-to-many relationship is done automatically (this is known as By Convention configuration in EF Core).  And when you create your database via EF Core, then it will add the linking table for you.
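As a sketch of what the By Convention configuration needs (the class and property names follow the book’s example, but this block is my reconstruction, not code copied from the book):

```csharp
public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }

    //A collection navigational property on BOTH ends is all
    //EF Core 5 needs to configure a direct many-to-many
    public ICollection<Tag> Tags { get; set; }
}

public class Tag
{
    public string TagId { get; set; }  //holds the category name

    public ICollection<Book> Books { get; set; }
}
```

With just these two classes, creating the database via EF Core adds the hidden BookTag linking table for you.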

This is super simple to do – so much easier than the indirect many-to-many. But if you want to add extra data to the linking table, say for ordering or filtering, then you either alter the direct many-to-many configuration or use the indirect many-to-many approach.

NOTE: The direct many-to-many relationship is only automatically configured if you have a collection navigational property on both ends. If you only want a navigational property on one end, then you will need to use the Fluent API configuration (see next section), for instance …HasMany(x => x.Tags).WithMany(), where Tag has no navigational property back to the Books.

Direct many-to-many setup: When you want to define the linking table

You can define an entity class and configure the linking table, but I will say that if you are going to do that you might as well use the indirect approach as I think it’s easier to set up and use.

Typically, you would only define the linking table if you wanted to add extra data. There are two steps in this process:

1. Creating a class to map to the linking table

Your entity class must have the two primary/foreign keys from each end of the many-to-many link; in this case that’s the BookId and the TagId. The code below defines the minimum properties needed to be the linking table – you can add extra properties as normal, but I leave that to you.

public class BookTag
{
    public int BookId { get; set; }
    public string TagId { get; set; }

    //You can add extra properties here

    public Book Book { get; private set; }
    public Tag Tag { get; private set; }
}

You could add properties such as the Order property needed for the Author ordering, or maybe a property to use for soft delete. That’s up to you and doesn’t affect the configuration step that comes next.

2. Configuring the linking table in the OnModelCreating method

Now you have to configure the many-to-many linking class/table with the UsingEntity method in the OnModelCreating method in your application’s DbContext, as shown in the code below.

public class EfCoreContext : DbContext
{
    //Other code left out to focus on many-to-many

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Other configuration left out to focus on many-to-many

        modelBuilder.Entity<Book>().HasMany(b => b.Tags)
            .WithMany(t => t.Books)
            .UsingEntity<BookTag>(
                bt => bt.HasOne(x => x.Tag)
                    .WithMany().HasForeignKey(x => x.TagId),
                bt => bt.HasOne(x => x.Book)
                    .WithMany().HasForeignKey(x => x.BookId));
    }
}

You can see the EF Core document on this here.

NOTE: I really recommend an excellent video produced by the EF Core team which has a long section on the new, direct many-to-many, including how to configure it to include extra data.

Direct many-to-many usage – querying

Querying the direct many-to-many relationships is quite normal. Here are some example queries:

  • Load all the Books with their Tags
    var books = context.Books.Include(b => b.Tags).ToList();
  • Get all the TagIds (each TagId holds a category name) used by the books
    var tagIds = context.Books
        .SelectMany(b => b.Tags.Select(t => t.TagId))
        .ToList();

EF Core will detect that your query is using a direct many-to-many relationship and add the extra SQL to use the hidden linking table to get the correct entity instances on the other end of the many-to-many relationship.

Direct many-to-many usage: Add a new link

Adding another many-to-many link to an existing entity class is easy – you just add the existing entry to the direct many-to-many navigational collection property. The code below shows how to add an existing Tag to a book that already has one Tag.

var book = context.Books
    .Include(p => p.Tags)
    .Single(p => p.Title == "Quantum Networking"); 

var existingTag = context.Tags         
    .Single(p => p.TagId == "Editor's Choice");

book.Tags.Add(existingTag);
context.SaveChanges();

When you add the existing Tag into the Tags collection, EF Core works out that you want a linking entry created between the Book and the Tag. It then creates that new link for you.

A few things to say about this:

  • You should load the existing Tags using the Include method, otherwise you could lose any existing links to Tags.
  • You MUST load the existing Tag from the database to add to the Tags navigational collection. If you simply created a new Tag, then EF Core will add that new Tag to the database.

ADVANCED NOTES on navigational collection properties

Point 1: Let me explain why I say “You should load the existing Tags…” above. There are two situations:

  • If you set an empty navigational collection on the initialization of the class, then you don’t have to add the Include method, as an Add will still work (but I don’t recommend this – see below).
  • If your navigational collection is null after construction, then you MUST load the navigational collection, otherwise your code will fail.

Overall, I recommend loading the navigational collection using the Include method even if the navigational collection has been set to an empty collection on construction. Without the Include the entity doesn’t match the database, which I try not to rely on, as a future refactor might assume it did match the database.

Point 2: If you are adding a new entry (or removing an existing linking relationship) in a collection with LOTS of items in it, then you might have a performance issue with using an Include. In this case you can create (or delete, for removing a link – see below) the linking table entry directly. For a direct many-to-many relationship, you would need to create a property bag of the right form to add.
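As a sketch of that property-bag approach (my reconstruction, not the article’s code – it assumes EF Core has named the hidden linking entity "BookTag" with BookId and TagId properties, so check your model’s actual names before relying on this):

```csharp
//Create just the linking row, without loading the Book's Tags collection.
//EF Core 5 exposes the hidden linking table as a shared-type entity type
//whose property bag is a Dictionary<string, object>.
var linkRow = new Dictionary<string, object>
{
    ["BookId"] = bookId,           //key of the existing Book
    ["TagId"] = "Editor's Choice"  //key of the existing Tag
};
context.Set<Dictionary<string, object>>("BookTag").Add(linkRow);
context.SaveChanges();
```

This avoids loading a large Tags collection just to add one link.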

NOTE: These ADVANCED NOTES also apply to adding a new indirect many-to-many link.

Direct many-to-many usage: Remove a link

To remove a link to an entity that is already in the navigational property collection, you simply remove that entity instance from the collection. The code below shows removing an existing Tag using the Remove method.

var book = context.Books
    .Include(p => p.Tags)
    .Single(p => p.Title == "Quantum Networking");

var tagToRemove = book.Tags
    .Single(x => x.TagId == "Editor's Choice");

book.Tags.Remove(tagToRemove);
context.SaveChanges();

This is just like adding a link, but in this case EF Core works out what linking entity needs to be deleted to remove this relationship.

Direct many-to-many usage: Create Book with tags

To add Tags when you first create a book, you just add each Tag to the Tags collection. The code below adds two existing Tags to a new book (note that I haven’t set up the Authors – see this part for how you do that).

var existingTag1 = context.Tags.Single(t => t.TagId == "Tag1");
var existingTag2 = context.Tags.Single(t => t.TagId == "Tag2");
var newBook = new Book()
{
    Title = "My Book",
    //... other property settings left out

    //Set your Tags property to an empty collection
    Tags = new List<Tag>()
};
newBook.Tags.Add(existingTag1);
newBook.Tags.Add(existingTag2);
context.Add(newBook);
context.SaveChanges();

Indirect many-to-many setup – configuring the linking table

An indirect many-to-many relationship takes a bit more work, but it does allow you to use extra data that you put into the linking table. The figure below shows the three entity classes, Book, BookAuthor, and Author, which define the many-to-many relationship.

This is more complex because you need to define the linking entity class, BookAuthor, so that you can add extra data to the linking table and also access that extra data when you query the data.
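The linking entity class described above looks something like this (my sketch based on the description; see the book’s companion repo for the full classes):

```csharp
public class BookAuthor
{
    public int BookId { get; set; }    //part of the composite key, FK to Book
    public int AuthorId { get; set; }  //part of the composite key, FK to Author

    public byte Order { get; set; }    //extra data: the display order of the Authors

    public Book Book { get; set; }
    public Author Author { get; set; }
}
```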

EF Core will automatically detect the relationships because of all the navigational properties. But the one thing it can’t automatically detect is the composite primary key in the BookAuthor entity class. The code below shows how to configure that.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BookAuthor>()
        .HasKey(x => new {x.BookId, x.AuthorId});
}

NOTE: Like the direct many-to-many configuration, if you leave out any of the four navigational properties, then it won’t set up that part of the many-to-many. You will then have to add Fluent API commands to set up the relationships.

Indirect many-to-many usage – querying

The indirect queries are more complex, but that’s because you want to order the Authors’ names.

  • Load all the Books with their BookAuthor and Author entity classes
    var books = context.Books
        .Include(book => book.AuthorsLink)
        .ThenInclude(ba => ba.Author)
        .ToList();
  • Load all the Books with their BookAuthor and Author entity classes, and make sure the Authors are in the right order
    var books = context.Books
        .Include(book => book.AuthorsLink.OrderBy(ba => ba.Order))
        .ThenInclude(ba => ba.Author)
        .ToList();
  • Get all the Books’ Titles with the Authors’ names ordered and then returned as a comma-delimited string
    var books = context.Books.Select(book => new
    {
        Title = book.Title,
        AuthorsString = string.Join(", ",
            book.AuthorsLink.OrderBy(ba => ba.Order)
                .Select(ba => ba.Author.Name))
    }).ToList();
NOTE: ordering within the Include method is also a new feature in EF Core 5.

Indirect many-to-many usage – add a new link

To add a new many-to-many relationship link you need to add a new instance of the linking entity class, in our example that is a BookAuthor entity class, and set up the two relationships, in this example filling in the Book and Author singleton navigational properties. This is shown in the code below, where we set the Order to a value that adds the new Author on the end (the first Author has an Order of 0, the second Author is 1, and so on).

var existingBook = context.Books                           
    .Include(p => p.AuthorsLink)                   
    .Single(p => p.Title == "Quantum Networking"); 

var existingAuthor = context.Authors          
    .Single(p => p.Name == "Martin Fowler");

existingBook.AuthorsLink.Add(new BookAuthor
{
    Book = existingBook,
    Author = existingAuthor,
    //We set the Order to add this new Author on the end
    Order = (byte) existingBook.AuthorsLink.Count
});
context.SaveChanges();

A few things to say about this (the first two are the same as the direct many-to-many add):

  • You should load the Book’s AuthorsLink using the Include method, otherwise you will lose any existing links to Authors.
  • You MUST load the existing Author from the database to add to the BookAuthor linking entity. If you simply created a new Author, then EF Core will add that new Author to the database.
  • Technically you don’t need to set the BookAuthor’s Book navigational property, because you added the new BookAuthor instance to the Book’s AuthorsLink, which also tells EF Core that this BookAuthor is linked to the Book. I put it in to make it clear what the Book navigational property does.

Indirect many-to-many usage – removing a link

To remove a many-to-many link, you need to remove (delete) the linking entity. In this example I have a book with two Authors, and I remove the link to the last Author – see the code below.

var existingBook = context.Books
    .Include(book => book.AuthorsLink
        .OrderBy(x => x.Order))
    .Single(book => book.BookId == bookId);

var linkToRemove = existingBook.AuthorsLink.Last();
context.Remove(linkToRemove);
context.SaveChanges();

This works, but you have the problem of making sure the Order values are correct. In the example code I deleted the last BookAuthor linking entity, so it wasn’t a problem, but if I deleted any BookAuthor other than the last, I should recalculate the Order values for all the remaining Authors, otherwise a later update might get the order of the Authors wrong.
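Continuing the example above, one way to recalculate the Order values after removing a link might look like this (my sketch, not the book’s code):

```csharp
//Renumber the remaining BookAuthor links so Order runs 0, 1, 2, ...
//preserving the Authors' current relative order
byte order = 0;
foreach (var bookAuthor in existingBook.AuthorsLink.OrderBy(x => x.Order))
{
    bookAuthor.Order = order++;
}
context.SaveChanges();
```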

NOTE: You can also remove the BookAuthor by removing it from the Book’s AuthorsLink collection, like you did for the direct many-to-many remove. Both approaches work.

Indirect many-to-many usage – Create Book with Authors

To add Authors when you first create a book, you need to add a BookAuthor linking class for each Author of the book, setting the Order property to define the order the Authors should be displayed in. The code below adds two existing Authors to a new book.

var existingAuthor1 = context.Authors
    .Single(a => a.Name == "Author1");
var existingAuthor2 = context.Authors
    .Single(a => a.Name == "Author2");
var newBook = new Book()
{
    Title = "My Book",
    //... other property settings left out

    //Set your AuthorsLink property to an empty collection
    AuthorsLink = new List<BookAuthor>()
};
newBook.AuthorsLink.Add(new BookAuthor
{
    Book = newBook,
    Author = existingAuthor1,
    Order = 0  //First author
});
newBook.AuthorsLink.Add(new BookAuthor
{
    Book = newBook,
    Author = existingAuthor2,
    Order = 1  //Second author
});
context.Add(newBook);
context.SaveChanges();


So, since EF Core 5, you have two ways to set up a many-to-many relationship – the original indirect approach (Book-BookAuthor-Author) and the new direct (Book-Tags) approach. The new direct many-to-many is really easy to use, but as you have seen, sometimes the original indirect approach is the way to go when you want to do more than a simple link between two entity classes.

If you didn’t find this link before, I really recommend an excellent video produced by the EF Core team which has a long section on the new, direct many-to-many, including how to configure it to include extra data.

All the best with your EF Core coding and do have a look at my GitHub page to see the various libraries I have created to help build and test EF Core applications.

Happy coding.

Introducing the EfCore.SoftDeleteServices library to automate soft deletes

Last Updated: January 13, 2021 | Created: January 12, 2021

Following on from my article “EF Core In depth – Soft deleting data with Global Query Filters”, I have built a library, EfCore.SoftDeleteServices (referred to as the Soft Delete library from now on), which provides services that automate the methods you need, that is: soft delete, find soft deleted items, un-soft delete, and hard delete a soft deleted item. In addition, this library provides solutions to various soft delete issues, such as handling multiple query filters and mimicking the cascade delete feature that SQL databases provide.

NOTE: The library is available on GitHub at https://github.com/JonPSmith/EfCore.SoftDeleteServices and on NuGet as EfCore.SoftDeleteServices.

Also, readers of my first article that starred the project should know I had a Git issue (my fault), so I renamed the first GitHub version to EfCore.SoftDeleteServices-Old and restarted the project. Please link to the new repo to make sure you are kept up to date.

TL;DR – summary

  • Soft delete is the term used when you “hide” a row in your database instead of deleting it. You can implement this using EF Core’s Query Filter feature. See this link for more explanation.
  • The EfCore.SoftDeleteServices solves a lot of issues when implementing a soft delete feature.
    • It provides code that can automatically configure the Query Filters for your soft delete entities.
    • It is very configurable, with you deciding where you want to put the soft delete value – in a bool property, a bit in a [Flags] enum, a shadow property, or a Domain-Driven Design class with methods.
    • This library can handle a Query Filter that contains other filters, like multi-tenant control. It makes sure the other filters are still applied, which means you are never in a situation where the multi-tenant filter isn’t used. That’s a very good security feature!
    • It has a cascade soft delete feature that mimics what a normal (hard) delete does. This is useful in places where another part of your code accesses the dependent relationships of an entity that was soft deleted. That stops incorrect results – see this section.
    • It provides a method to register your configurations with DI, which automatically registers the right version of the Soft Delete service for you to use.
  • The library has documentation and is available on NuGet at EfCore.SoftDeleteServices.

Setting the scene – what are the issues around implementing soft delete?

If you want to soft delete an entity instance (I use the term entity instance or entity class to refer to a class that has been mapped by EF Core to a database) by using EF Core’s Query Filter feature, then you need to do four things:

  1. Add a boolean property, say SoftDeleted, to your entity class.
  2. Configure a Query Filter on that entity class using the SoftDeleted property.
  3. Build code to set/reset the SoftDeleted property.
  4. Build code to find the soft deleted entities using the IgnoreQueryFilters method.

NOTE: I show these four stages in this section of my “EF Core In depth – Soft deleting data with Global Query Filters” article.

None of these steps are hard to do, but if we are really trying to mimic the way that the database deletes things, then you do need to be a bit more careful. I covered these issues, with some solutions in the previous article, but here is the list:

  • If your Query Filter contains other filters, like multi-tenant control, then things get more complex (see this explanation).
  • It’s not a good idea to soft delete a one-to-one relationship, because EF Core will throw errors if you try to add a new version (see this explanation).
  • The basic soft delete doesn’t mimic what a normal (hard) delete does – a hard delete would, by default, delete any dependent rows too. I solve this with the cascade soft delete part of the Soft Delete library.

The EfCore.SoftDeleteServices library is designed to overcome all of these issues and gives you a few more options too. In the next section I describe a simple example of using this service.

An example of using the EfCore.SoftDeleteServices library

Let’s start with the most used feature – soft deleting a single entity. The starting point is to create an interface and then add that interface to the entity classes which you want to soft delete. In this example I am going to add a boolean SoftDeleted property to the Book entity class.

1. Using an interface to define what entities you want to soft delete

public interface ISingleSoftDelete
{
    bool SoftDeleted { get; set; }
}

public class Book : ISingleSoftDelete
{
    public int Id { get; set; }

    public bool SoftDeleted { get; set; }
    //… rest of class left out
}

2. Setting up Query Filters to your entities

You need to add a Query Filter to every soft-deletable entity class. You can write the code for each entity class, which I show next, or you can automate adding a Query Filter to every entity class, which I show after that.

The manual setup goes in the OnModelCreating in your application’s DbContext – see the code below.

public class EfCoreContext : DbContext
{
    //Other code left out to focus on Soft delete

    protected override void OnModelCreating(
        ModelBuilder modelBuilder)
    {
        //Other configuration left out to focus on Soft delete

        modelBuilder.Entity<Book>()
            .HasQueryFilter(p => !p.SoftDeleted);
    }
}

But as I said, I recommend automating your query filters by running code inside your OnModelCreating method that looks at all the entity classes and adds a Query Filter to every entity class that has your soft delete interface, in this example ISingleSoftDelete. I have already described how to do this in this section of my article on soft deletion. You can also find some example code to do that in this directory of the Soft Delete’s GitHub repo.
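The automated approach can be sketched like this (my reconstruction of the pattern, not the repo’s actual code – it needs a using for System.Linq.Expressions):

```csharp
//Inside OnModelCreating: add a soft delete Query Filter to every
//entity class that implements ISingleSoftDelete
foreach (var entityType in modelBuilder.Model.GetEntityTypes())
{
    if (typeof(ISingleSoftDelete).IsAssignableFrom(entityType.ClrType))
    {
        //Build the lambda: entity => !entity.SoftDeleted
        var param = Expression.Parameter(entityType.ClrType, "entity");
        var body = Expression.Not(Expression.Property(
            param, nameof(ISingleSoftDelete.SoftDeleted)));
        var filter = Expression.Lambda(body, param);

        modelBuilder.Entity(entityType.ClrType).HasQueryFilter(filter);
    }
}
```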

3. Configuring the soft delete library to your requirements

You need to create a configuration class which tells the Soft Delete library what to do when you call one of its methods. The class below provides the definition for your entity classes via your interface (in this case ISingleSoftDelete), says how to get/set the soft delete property, plus other things, like whether you want the single or cascade soft delete service, and gives the library access to your application’s DbContext.

Your configuration must inherit either the SingleSoftDeleteConfiguration<TInterface> or the CascadeSoftDeleteConfiguration<TInterface> class – which one you use will define what service/features it provides.

NOTE: While I show the SoftDeleted property as a boolean type you could make it part of say a flag Enum. The only rule is you can get and set the property using a true/false value.
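A minimal single soft delete configuration might look like the sketch below (the class and DbContext names are my assumptions, following the pattern of the multi-tenant example later in this article):

```csharp
public class ConfigSoftDelete :
    SingleSoftDeleteConfiguration<ISingleSoftDelete>
{
    public ConfigSoftDelete(EfCoreContext context)
        : base(context)
    {
        //Tell the library how to read and write the soft delete value
        GetSoftDeleteValue = entity => entity.SoftDeleted;
        SetSoftDeleteValue = (entity, value) =>
            entity.SoftDeleted = value;
    }
}
```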

4. Setting up the Soft Delete services

To use the Soft Delete library, you need to get an instance of its service. You can create an instance manually (I use that in unit tests), but many applications now use dependency injection (DI), such as ASP.Net Core. The Soft Delete library provides an extension method called RegisterSoftDelServicesAndYourConfigurations, which will find and register all of your configuration classes and also registers the correct soft delete service for each configuration. The code below shows an example of calling this method inside ASP.Net Core’s startup method.

public void ConfigureServices(IServiceCollection services)
{
    //other setup code left out

    var softLogs = services
        .RegisterSoftDelServicesAndYourConfigurations();
}

This will scan the assembly that the ConfigSoftDeleted class is in and register all the configuration classes it finds there, and also register the correct versions of the single or cascade services. In this example you would have three services configured:

  • ConfigSoftDeleted as SingleSoftDeleteConfiguration<ISingleSoftDelete>
  • SingleSoftDeleteService<ISingleSoftDelete>
  • SingleSoftDeleteServiceAsync<ISingleSoftDelete>

A few features here:

  • You can provide multiple assemblies to scan.
  • If you don’t provide any assemblies, then it scans the assembly that called it
  • The method outputs a series of logs (see var softLogs in the code) when it finds/registers services. This can be useful for debugging if your soft delete methods don’t work. The listing below shows the output for my use of this library in my Book App.
No assemblies provided so only scanning the calling assembly 'BookApp.UI'
Starting scanning assembly BookApp.UI for your soft delete configurations.
Registered your configuration class ConfigSoftDelete as SingleSoftDeleteConfiguration<ISoftDelete>
SoftDeleteServices registered as SingleSoftDeleteService<ISoftDelete>
SoftDeleteServicesAsync registered as SingleSoftDeleteServiceAsync<ISoftDelete>

5. Calling the soft delete library’s methods

Having registered your configuration(s), you are now ready to use the soft delete methods. In this example I have taken code from an ASP.NET Core application I built for my book. This code allows users to a) soft delete a Book, b) find all the soft deleted Books, and c) un-soft delete a Book.

NOTE: You can access the ASP.NET Core’s Admin Controller via this link. You can also run this application by cloning the https://github.com/JonPSmith/EfCoreinAction-SecondEdition GitHub repo and selecting branch Part3.

public async Task<IActionResult> SoftDelete(int id, [FromServices] 
    SingleSoftDeleteServiceAsync<ISingleSoftDelete> service)
{
    var status = await service.SetSoftDeleteViaKeysAsync<Book>(id);

    return View("BookUpdated", new BookUpdatedDto(
        status.IsValid ? status.Message : status.GetAllErrors()
        /* other BookUpdatedDto arguments left out */));
}

public async Task<IActionResult> ListSoftDeleted([FromServices] 
    SingleSoftDeleteServiceAsync<ISingleSoftDelete> service)
{
    var softDeletedBooks = await service.GetSoftDeletedEntries<Book>()
        .Select(x => new SimpleBookList{
              BookId = x.BookId, 
              LastUpdatedUtc = x.LastUpdatedUtc, 
              Title = x.Title})
        .ToListAsync();

    return View(softDeletedBooks);
}

public async Task<IActionResult> UnSoftDelete(int id, [FromServices] 
     SingleSoftDeleteServiceAsync<ISingleSoftDelete> service)
{
    var status = await service.ResetSoftDeleteViaKeysAsync<Book>(id);

    return View("BookUpdated", new BookUpdatedDto(
        status.IsValid ? status.Message : status.GetAllErrors()
        /* other BookUpdatedDto arguments left out */));
}

The other feature I left out is the HardDeleteViaKeys method, which hard deletes (i.e., calls the EF Core Remove method on) the found entity instance, but only if it has already been soft deleted.

NOTE: As well as the …ViaKeys methods there are the same methods that work on an entity instance.

The Soft Delete library easily implements this example, though coding this yourself isn’t hard. So, let’s look at two more complex examples that bring out some extra features of the Soft Delete library. They are:

  • Handling Query Filters with multiple filter parts
  • Using cascade soft delete to ‘hide’ related information

Handling Query Filters with multiple filter parts

The EF Core documentation on Query Filters gives two main usages for Query Filters: soft delete and multi-tenancy filtering. One of my client’s applications needed BOTH of these at the same time, which is doable but was a bit complex. While the soft delete filter is very important, it’s also critical that the multi-tenant part isn’t forgotten when using the IgnoreQueryFilters method to access the soft deleted entities.

One of the reasons for building this library was to handle applications where you want both soft delete and multi-tenancy filtering. And the solution only needs you to add one line to the Soft Delete configuration – see the OtherFilters.Add call in the code below.

public class ConfigSoftDeleteWithUserId : 
    SingleSoftDeleteConfiguration<ISingleSoftDelete>
{
    public ConfigSoftDeleteWithUserId(
        SingleSoftDelDbContext context)
        : base(context)
    {
        GetSoftDeleteValue = entity => entity.SoftDeleted;
        SetSoftDeleteValue = (entity, value) => 
             entity.SoftDeleted = value;
        OtherFilters.Add(typeof(IUserId), entity => 
             ((IUserId)entity).UserId == context.UserId);
    }
}

The OtherFilters.Add method allows you to define one or more extra filter parts, and when the library filters for the GetSoftDeletedEntries method, or the Reset/HardDelete methods, it makes sure these ‘Other Filters’ are applied (if needed).

To test this approach, I use my standard example of an application that sells books, where I want to soft delete a Book entity class (which has no multi-tenant part), and the Order entity class, which has a multi-tenant part, so orders can only be seen by the user who created them. This means the filter for finding each of the soft deleted entity types is different.

Find soft deleted Book entities, where Book doesn’t have the IUserId interface

context.Books.IgnoreQueryFilters().Where(b => b.SoftDeleted)

Find soft deleted Order entities, where Order has the IUserId interface

context.Orders.IgnoreQueryFilters().Where(o => o.SoftDeleted && o.UserId == context.UserId)

This is automatically done inside the Soft Delete library by dynamically building an expression tree. So the complex part is done inside the library and all you need to do is cut/paste the filter part and call the OtherFilters.Add method inside the configuration class.

Using cascade soft delete to also soft delete dependent relationships

One (small) problem with the single soft delete is it doesn’t work the same way as a normal (hard) delete. A hard delete would delete the entity in the database, and normally the database would also delete any relationships that can’t exist without that first entity (called dependent relationships). For instance, if you hard deleted a Book entity that had some Reviews, then the database’s constraints would cascade delete all the Reviews linked to that Book. It does this to keep the referential integrity of the database; otherwise the foreign keys in the Review table would be incorrect.

Most of the time the fact that a soft delete doesn’t also soft delete the dependent relationships doesn’t matter. For instance, not soft deleting the Reviews when you soft delete a Book most likely doesn’t matter, as no one can see the Reviews because the Book isn’t visible. But sometimes it does matter, which is why I looked at what I would have to do to mimic the database’s cascade deletes, but using soft deletes. It turns out to be much more complex than I thought, but the Soft Delete library contains my implementation of cascade soft deleting.

NOTE: In the previous article I go through the various options you have when soft deleting an entity with dependent relationships – see this link.

While the single soft delete is useful everywhere, the cascade soft delete approach is only useful in specific situations. One that I came across was a company that did complex bespoke buildings. The process required creating detailed quotes for a job using a hierarchical structure (shown as the “Quote View” in the diagram). Some quotes were accepted, and some were rejected, but they needed to keep the rejected quotes as a history of the project.

At the same time, they wanted to know if their warehouse had enough stock to build the quotes they have sent out (shown as the “Stock Check View” in the diagram). Quote 456-1 was rejected, which means it was cascade soft deleted, which soft deletes all the LineItems for Quote 456-1 as well. This means that when the Stock Check code is run it won’t see the LineItems from Quote 456-1, so the Stock Check gives the correct value for the valid Quotes.

Using cascade soft delete makes the code for the Stock Check View much simpler, because the cascade soft delete of a quote also soft deletes its LineItems. The code below creates a Dictionary whose Keys are the ProductSkus, with the Values being how many of each are needed.

var requiredProducts = context.Set<LineItem>().ToList()
    //the cascade soft delete query filter has already removed the soft deleted LineItems
    .GroupBy(x => x.ProductSku, y => y.NumProduct)
    .ToDictionary(x => x.Key, y => y.Sum());

Running this code against the three quotes in the diagram means that only the valid (dark green) LineItems are included – the cascade soft deleted LineItems (shown in light green with a red triangle in them) aren’t included, which is what is needed in this case.

Solving the problem of cascade soft un-delete

There is one main difference between soft delete and the database delete – you can get the soft deleted data back! That’s what we want, but it causes a problem when using cascade soft delete/un-delete, in that you might have already cascade soft deleted some relationships deeper down in the relationships. When you cascade soft un-delete, you want those earlier cascade soft deletes of the deeper relationships to stay as they were.

The solution is to use a delete level number instead of a boolean. I cover this in more detail in the first article and I recommend you read that part of the article, but here is a single diagram that shows how the Soft Delete library’s cascade soft un-delete can return an entity and its dependent relationships back to the point where the last cascade soft delete was applied – in this case the LineItems in the red area will still be soft deleted.

Using the cascade soft delete methods

Using the cascade soft delete versions of the library requires you to:

  1. Set up an interface which adds a property of type byte to take the delete level number.
  2. Inherit the CascadeSoftDeleteConfiguration<TInterface> class in your configuration class. The code below shows you an example.
public class ConfigCascadeDelete :
    CascadeSoftDeleteConfiguration<ICascadeSoftDelete>
{
    public ConfigCascadeDelete(
        CascadeSoftDelDbContext context)
        : base(context)
    {
        GetSoftDeleteValue = entity => entity.SoftDeleteLevel;
        SetSoftDeleteValue = (entity, value) =>
            entity.SoftDeleteLevel = value;
    }
}
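For step 1, the interface could look like the sketch below. The interface and property names here (ICascadeSoftDelete, SoftDeleteLevel) are assumptions based on the configuration code above – check the library’s source for the exact definitions.

```csharp
// Sketch of the cascade soft delete interface (step 1).
// A byte is used instead of a bool so the property can hold
// the delete level number described earlier.
public interface ICascadeSoftDelete
{
    byte SoftDeleteLevel { get; set; }
}

// An entity class opts in to cascade soft delete by implementing it
public class Quote : ICascadeSoftDelete
{
    public int QuoteId { get; set; }
    public string QuoteName { get; set; }

    public byte SoftDeleteLevel { get; set; }
}
```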

There are the same methods as the single soft delete methods, but they contain the word “Cascade”; for instance, SetSoftDeleteViaKeys becomes SetCascadeSoftDeleteViaKeys and so on. All the same features are there, such as handling multiple query filters.


I have now released the EfCore.SoftDeleteServices library. It is available on GitHub at https://github.com/JonPSmith/EfCore.SoftDeleteServices and on NuGet as EfCore.SoftDeleteServices. It was quite a bit of work, but I’m pleased with the final library. I have already put it to work in my BookApp.UI example application.

My experience of working on client projects says that soft delete is a “must have” feature, mainly because users sometimes delete something they didn’t mean to. Often the soft delete is shown to users as “delete”, even though it’s a soft delete, with only the admin having the ability to un-soft delete or hard delete.

Let me know how you get on with the library!

Happy coding.

EF Core In depth – Tips and techniques for configuring EF Core

Last Updated: October 27, 2020 | Created: July 17, 2020

This article is about being more efficient at configuring your EF Core DbContext so that it runs fast and safe. As a working freelance developer, I’m always looking for ways to make myself a more efficient/faster developer. Configuring a DbContext is really important, and there can be a lot of configuration code, but over the years I have found ways to minimise or automate much of the EF Core configuration. This article pulls together lots of configuration approaches I have learnt working with EF Core, and EF6 before that.

I do assume you know EF Core, but I start with a look at EF Core’s configuration of your application’s DbContext to make sure we have the basics before I dig into the various tips and techniques to make you faster and safer.

This article is part of a “EF Core In depth” series. Here is the current list of articles in this series:

Other older articles in this series are

This “EF Core In depth” series is inspired by what I found while updating my book “Entity Framework Core in Action” to cover EF Core 5. I have also added a LOT of new content from my experiences of working with EF Core on client applications over the last 2½ years.

NOTE: There is a GitHub repo at https://github.com/JonPSmith/EfCoreinAction-SecondEdition/tree/Part2 that contains all the code used in this article.

TL;DR – summary

  • EF Core builds a Model of the database based on the DbSet<T> classes and various configuration methods. It does NOT look at the actual database.
  • EF Core uses three approaches to configure your application’s DbContext
    • By Convention, which applies a set of rules to the property types/names to work out a default configuration.
    • Attributes: It will look for certain annotations, like [MaxLength(123)] to add more configurations.
    • Fluent API: Finally, it runs OnModelCreating method in the application’s DbContext where you can place Fluent API commands.
  • Learning/following EF Core’s By Convention rules will save you a LOT of time and code. It is especially good at configuring one-to-many relationships.
  • If you want your database to be quick it is worth defining the SQL type a bit more tightly for certain NET types, like string, DateTime, decimal.
  • There are some more advanced entity types that are useful. I talk about owned types, Table-per-Hierarchy and table splitting.
  • When your application gets big your configuration can be split into per-class configurations, which makes it easier to find/refactor.
  • There is a really helpful technique that can automate some of the configuring. It allows you to define your own By Convention rules and have them applied to all the classes/properties you have defined.

Setting the scene – what is happening when you configure your DbContext

To use EF Core you must create a class that inherits EF Core’s DbContext (I refer to this as your application’s DbContext). In this class you add DbSet<T> properties that set up the mapping between your classes (I refer to these as entity classes) and the database. The following listing is a very basic application’s DbContext without any extra configuration.

public class EfCoreContext : DbContext
{
    public EfCoreContext(DbContextOptions<EfCoreContext> options)
        : base(options) {}

    public DbSet<Book> Books { get; set; }
    public DbSet<Author> Authors { get; set; }
    public DbSet<PriceOffer> PriceOffers { get; set; }
}

When I talk about “configuring EF Core”, or “configure your DbContext”, I’m talking about a process that EF Core runs on the first use of your application’s DbContext. At that point it creates a Model of the database you plan to access, based on your entity classes mapped to the database and any EF Core configuration commands you have provided.

Just to be clear, it never looks at the actual database to build this Model; it only uses the entity classes and any EF Core configuration commands you have added. EF Core’s Model of the database and the actual database need to match, otherwise your application will fail when it tries to access the database.

NOTE: I cover the whole area of how to make changes to a database in the two articles Handling Entity Framework Core migrations: creating a migration – Part 1 and Handling Entity Framework Core migrations: applying a migration – Part 2

The following figure shows the process that EF Core goes through the first time you use your application’s DbContext (later instances of your DbContext use a cached version of the created Model).

EF Core uses three ways to pick up configuration information

  1. By Convention: When you follow simple rules on property types and names, EF Core will autoconfigure many of the software and database features. For instance
    1. A property of NET type string will, by default, map to SQL NVARCHAR(max) NULL.
    2. A property with the name Id or <ClassName>Id (e.g. BookId) will be the primary key for this entity class.
  2. Data Annotations: A range of .NET attributes, known as Data Annotations, can be added to entity classes and/or properties to provide extra configuration information. For instance
    1. Adding the attribute [MaxLength(100)] on a string property will change the SQL to NVARCHAR(100) NULL.
    2. Adding the attribute [Required] on a string property will change the SQL to NVARCHAR(max) NOT NULL.
  3. Fluent API: EF Core has a method called OnModelCreating that’s run when the EF context is first used. You can override this method and add commands, known as the Fluent API. For instance
    1. The command modelBuilder.Entity<Book>().HasIndex(p => p.Price) would add a non-unique index to the Price column in the table mapped to the Book entity class.
    2. The command modelBuilder.Entity<Book>().Property(p => p.PublishedDate).HasColumnType("date") would change the SQL type from DATETIME2, which has a resolution of 100ns, to the much smaller SQL DATE type that is accurate to one day.
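As a sketch of how the three approaches can mix on one entity class (the Book property names and the AppDbContext class here are just for illustration):

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Book
{
    public int BookId { get; set; }          // By Convention: primary key

    [MaxLength(256)]                         // Data Annotation: NVARCHAR(256) NULL
    public string Title { get; set; }

    public decimal Price { get; set; }
    public DateTime PublishedDate { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Fluent API: non-unique index on the Price column
        modelBuilder.Entity<Book>().HasIndex(p => p.Price);
        // Fluent API: store as SQL DATE instead of DATETIME2
        modelBuilder.Entity<Book>().Property(p => p.PublishedDate)
            .HasColumnType("date");
    }
}
```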

Read on for tips on how to use these three approaches to a) write the minimum of configuration code and b) get a good database design.

Tip: Let EF Core do most of the configuring using By Convention rules

Most of you will already be using the By Convention rules to set up the column names and types. If you have control over the database design, known as its schema, i.e. you can use whatever column names suit you, then this will save you from writing a lot of boring configuration code.

But when it comes to relationships, some developers seem to want to define every relationship. When I first started using EF6 I did just that, but ended up with a lot of code! Once I understood EF Core’s By Convention rules (writing the EF Core in Action book taught me the rules!), I rarely defined a relationship unless I wanted to change the delete behaviour (I talk about delete behaviour later). The relationship rules are pretty simple:

  1. Name your primary key as Id or <ClassName>Id (e.g. BookId).
  2. Use the <ClassName>Id name on your foreign key, because that works with both primary key formats, i.e. Id or <ClassName>Id
  3. Set up the property that links the two entity classes (known as navigational property) using the entity class type (the name doesn’t matter), e.g. ICollection<Review> Reviews { get; set; }

Here is a figure showing a relationship that EF Core’s By Convention will define automatically.
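In code, the three rules produce a one-to-many relationship with no configuration at all – a minimal sketch (the Comment property on Review is illustrative):

```csharp
using System.Collections.Generic;

public class Book
{
    public int BookId { get; set; }                  // rule 1: Id or <ClassName>Id

    // rule 3: navigational property typed to the other entity class
    public ICollection<Review> Reviews { get; set; }
}

public class Review
{
    public int ReviewId { get; set; }                // rule 1: primary key
    public string Comment { get; set; }

    public int BookId { get; set; }                  // rule 2: foreign key to Book
}
```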

Of course, there are some exceptions where you would need Fluent API commands.

  • EF Core can only configure a one-to-one relationship By Convention if both ends of the relationship have navigational properties, otherwise it will think it’s a one-to-many relationship. But one-to-one relationships are used a lot less than one-to-many and many-to-many relationships.
  • If you want to change the delete rules from the By Convention value – for instance, what happens to the Reviews when the Book is deleted (in this case the Reviews would be deleted too). If you didn’t want that to happen, then you would have to define the relationship using Fluent API commands and add the OnDelete command.
  • If you have two navigational properties going to the same class, for instance BillingAddress and DeliveryAddress both pointing to the Address entity class, then you do need to configure that manually (but an Owned type would be better for that).
  • Some very advanced things, like setting the constraint name, need the Fluent API.
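For example, keeping the Reviews when a Book is deleted would need the relationship defined manually so you can add the OnDelete command. A sketch, assuming the Book/Review classes used elsewhere in this article:

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Book>()
            .HasMany(b => b.Reviews)
            .WithOne()
            // Restrict: the database refuses to delete a Book that still has Reviews,
            // instead of cascade deleting them (the By Convention behaviour)
            .OnDelete(DeleteBehavior.Restrict);
    }
}
```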

Overall, you want to let EF Core configure as much as it can, as it’s quick and easy. So, learn the rules and trust in EF Core (but unit tests are also good!).

Making your database more efficient

It’s easy to create classes, but entity classes need a little more attention to make sure the database is as fast as it can be. This requires a bit more work on your part. Here are some things to consider.

1. string type properties

By default, EF Core will set the SQL type of a string property to NVARCHAR(MAX) NULL, which works OK, but do you really need space for a 1Gbyte Unicode string? Here are some suggestions:

  • Set the size of the string using the [MaxLength(123)] attribute. NVARCHAR(NNN) is slightly quicker than NVARCHAR(MAX). NOTE: The [MaxLength(123)] attribute is also useful for front-end checking that the input isn’t too long.
  • If you filter or sort on a string, then adding an SQL index is useful. Use the Fluent API command HasIndex() or the new EF Core 5 (preview 6) [Index(nameof(Title))] attribute. NOTE: an index has a limit of 900 bytes, so your NVARCHAR size must be 450 or lower.
  • Some strings are 8-bit ASCII, like URLs, so why store/send the extra bytes? Use the Fluent API command IsUnicode(false), which will turn the SQL type from NVARCHAR to VARCHAR.
  • Try adding the [Required(AllowEmptyStrings = false)] attribute on string properties you expect to always have a value. The [Required] part will change the SQL type from NVARCHAR(MAX) NULL to NVARCHAR(MAX) NOT NULL (the AllowEmptyStrings = false part doesn’t affect the database; it is only used in NET validations).
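Putting those suggestions together on one entity class might look like this (the property names and the AppDbContext class are illustrative):

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Book
{
    public int BookId { get; set; }

    [Required(AllowEmptyStrings = false)]  // NVARCHAR(256) NOT NULL
    [MaxLength(256)]                       // under the 450-character index limit
    public string Title { get; set; }

    [MaxLength(512)]
    public string ImageUrl { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // index on Title because we filter/sort on it
        modelBuilder.Entity<Book>().HasIndex(x => x.Title);
        // URLs are ASCII, so use VARCHAR instead of NVARCHAR
        modelBuilder.Entity<Book>().Property(x => x.ImageUrl).IsUnicode(false);
    }
}
```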

2. DateTime type properties

By default, NET’s DateTime type is saved as SQL DATETIME2, which has a resolution of 100ns and takes up 7 bytes. In some cases that is great, but the SQL DATE type is only 3 bytes. As well as saving bytes, a sort or filter on a DATE type is going to be much quicker than on a DATETIME2 type.

NOTE: If you save a DateTime that is using DateTimeKind.Utc, then you should know that the DateTimeKind of a DateTime is not preserved in the database. That matters if your front-end is going to receive the data as JSON, as the JSON datetime string won’t end with a “Z” and your front-end might get the date offset wrong. You can fix this using EF Core’s ValueConverters (add a comment to this article if you want to know how to do that).
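For those interested, a minimal sketch of that ValueConverter fix – the Book class and its LastUpdatedUtc property are made-up examples:

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

public class Book
{
    public int BookId { get; set; }
    public DateTime LastUpdatedUtc { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        var utcConverter = new ValueConverter<DateTime, DateTime>(
            toDb => toDb,                                              // write unchanged
            fromDb => DateTime.SpecifyKind(fromDb, DateTimeKind.Utc)); // restore Kind on read

        modelBuilder.Entity<Book>()
            .Property(p => p.LastUpdatedUtc)
            .HasConversion(utcConverter);
    }
}
```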

3. decimal type properties

By default, a NET decimal type is saved as DECIMAL(18,2), which is SQL Server’s default; this means it has 16 digits before the decimal point and 2 after, and takes up 9 bytes. If you’re dealing with money that might be too big, and DECIMAL(9,2) would work, and that’s only 5 bytes.

On the other hand, if you’re dealing with percent, then having a precision of 2 decimal places might not be enough, and 16 digits before the decimal point is too much.

In both cases it’s worth changing the default precision (i.e. the total number of digits stored) and scale (i.e. the number of digits after the decimal point). You can do that via the [Column(TypeName = "decimal(9,2)")] attribute or the Fluent API command HasColumnType("decimal(9,2)"). But EF Core 5 has a really nice Fluent API called HasPrecision(9,2), which is easier.

4. Avoid expression body properties with EF Core

In a normal class, having a property with code in it (referred to as an expression body definition), as shown below, is the right thing to do.

public class MyClass
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string FullName => $"{FirstName} {LastName}";
}

The problem comes when you want to sort/filter on the FullName – EF Core will throw an exception because there is no FullName column to sort/filter on. So you need to add an actual FullName property that will be mapped to the database, and either set the properties via a constructor, or use property setters (or EF Core’s backing fields) to capture a change to FirstName/LastName and update the FullName.

public class MyClassImproved
{
    private string _firstName;
    private string _lastName;

    public string FirstName
    {
        get => _firstName;
        set
        {
            _firstName = value;
            FullName = $"{FirstName} {LastName}";
        }
    }

    public string LastName
    {
        get => _lastName;
        set
        {
            _lastName = value;
            FullName = $"{FirstName} {LastName}";
        }
    }

    public string FullName { get; set; }
}

NOTE: Another option in EF Core 5 (preview 5) is stored (persisted) computed columns, which allow you to have a FullName column that runs the SQL command FirstName + ‘ ‘ + LastName whenever the row is created or updated. It’s efficient, and SQL Server allows indexes on persisted computed columns too.
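A minimal sketch of that persisted computed column, reusing the FirstName/LastName/FullName names (this assumes FullName is a normal mapped property, not the expression body version, and the SQL is SQL Server syntax):

```csharp
using Microsoft.EntityFrameworkCore;

public class Person
{
    public int PersonId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName { get; private set; } // filled in by SQL Server
}

public class AppDbContext : DbContext
{
    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Person>()
            .Property(p => p.FullName)
            // stored: true makes it a persisted column - computed on
            // insert/update rather than on every read, and indexable
            .HasComputedColumnSql("[FirstName] + ' ' + [LastName]", stored: true);
    }
}
```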

Let’s talk about some more advanced entity types

Using normal entity classes with links to other entity classes works, but there are some variants of classes that can make your life easier and can improve performance. Here are some specific EF Core class types:

  • Owned types – useful for common data that is used in lots of places, e.g. Address.
  • Table per hierarchy (TPH)—This maps a series of inherited classes to one table; for instance, classes called Dog, Cat, and Rabbit that inherit from the Animal class
  • Table splitting – Lets you map multiple classes to a table. Useful if you want a Summary part and a Detailed part.

I have used Owned types a lot and they are great for keeping a specific group of data together. I have also used TPH quite a bit on client systems where there is common data with a few differences – really worth looking at. I haven’t used table splitting much because I normally use Select queries to pick the exact properties/columns I want anyway.

I’m only going to cover the Owned types because this article is pretty long already, and I still want to show more things.

Owned entity types

Owned entity types are classes that you can add to an entity class and the data in owned types will be combined into the entity class’s table. To make this more concrete, think about an address. An address is a group of Street, City, State etc. properties that, on their own, aren’t that useful as they need to link to a company, a user, a delivery location and so on.  

The owned type class doesn’t have its own primary key, so it doesn’t have an identity of its own but relies on the entity class that “owns” it for its identity. In DDD terms, owned types are known as value objects. This also means you can use an owned type multiple times in an entity class – see the example OrderInfo class with two addresses in it.

public class OrderInfo
{
    public int OrderInfoId { get; set; }
    public string OrderNumber { get; set; }

    public Address BillingAddress { get; set; }
    public Address DeliveryAddress { get; set; }
}

The Address class must be marked as an Owned type, either by the [Owned] attribute or via the Fluent API. The code below uses the [Owned] attribute (highlighted).

[Owned]
public class Address
{
    public string NumberAndStreet { get; set; }
    public string City { get; set; }
    public string ZipPostCode { get; set; }
    public string CountryCodeIso2 { get; set; }
}

Now, when you look at the SQL table generated by EF Core, it looks like this:

CREATE TABLE [Orders] (
    [OrderInfoId] int NOT NULL IDENTITY,
    [OrderNumber] nvarchar(max) NULL,
    [BillingAddress_City] nvarchar(max) NULL,
    [BillingAddress_NumberAndStreet] nvarchar(max) NULL,
    [BillingAddress_ZipPostCode] nvarchar(max) NULL,
    [BillingAddress_CountryCodeIso2] nvarchar(2) NULL,
    [DeliveryAddress_City] nvarchar(max) NULL,
    [DeliveryAddress_NumberAndStreet] nvarchar(max) NULL,
    [DeliveryAddress_ZipPostCode] nvarchar(max) NULL,
    [DeliveryAddress_CountryCodeIso2] nvarchar(2) NULL,
    CONSTRAINT [PK_Orders] PRIMARY KEY ([OrderInfoId])
)

As you can see, the data for the two Address properties, BillingAddress and DeliveryAddress, is added to the Orders table.

A few things to know about owned types:

  • A property whose type is an Owned Type can be null, in which case all the Owned Type’s columns in that row are null.
  • If an Owned Type contains a non-nullable property, it is still stored in a nullable column in the database. That’s done to handle the Owned Type property itself being null.
  • Nullable Owned Type properties were added in EF Core 3, but the SQL command wasn’t ideal. This is fixed in EF Core 5.
  • You can map an Owned Type to a separate table – I haven’t described that here.

Tip: How to organise your configuration code

The Fluent API commands go in the OnModelCreating method in your application’s DbContext. For small projects that works fine, but once you start to get more and more Fluent API configurations it can get messy and hard to find things. One solution is to use the IEntityTypeConfiguration<TEntity> type. This allows you to have a configuration class for each entity class that needs it – see the code below.

internal class BookConfig : IEntityTypeConfiguration<Book>
{
    public void Configure
        (EntityTypeBuilder<Book> entity)
    {
        entity.Property(p => p.PublishedOn)
            .HasColumnType("date");
        entity.HasIndex(x => x.PublishedOn);

        //HasPrecision is an EF Core 5 method
        entity.Property(p => p.Price).HasPrecision(9, 2);

        entity.Property(x => x.ImageUrl).IsUnicode(false);

        entity.HasQueryFilter(p => !p.SoftDeleted);

        //one-to-one with only one navigational property must be defined
        entity.HasOne(p => p.Promotion)
            .WithOne()
            .HasForeignKey<PriceOffer>(p => p.BookId);
    }
}

You have a few options for how to run these. You can call them inside your OnModelCreating method using the code below.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfiguration(new BookConfig());
    modelBuilder.ApplyConfiguration(new BookAuthorConfig());
    //… and so on.
}

Or you can use the ApplyConfigurationsFromAssembly command to find and run all your IEntityTypeConfiguration<TEntity> classes. The code below assumes those classes are in the same project as your application’s DbContext.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.ApplyConfigurationsFromAssembly(
        Assembly.GetExecutingAssembly());
}

Adding your configuration rules automatically

One of the big things I learnt is how to automatically apply Fluent API commands to certain classes/properties. For example, I will show you how to define your own By Convention rules; for instance, any entity class property of type decimal whose name contains the string “Price” gets the SQL type DECIMAL(9,2).

This relies on the modelBuilder.Model.GetEntityTypes() method available in the OnModelCreating method. This provides a collection of all the entity classes that EF Core has found at this stage, and within that you can gain access to the properties in each entity class.

The piece of code below, taken from an application’s DbContext, contains two rules:

  • Properties of type decimal whose name contains “Price” are set to DECIMAL(9,2).
  • Properties of type string whose name ends with “Url” are set to VARCHAR.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    foreach (var entityType in modelBuilder.Model.GetEntityTypes())
    {
        foreach (var entityProperty in entityType.GetProperties())
        {
            if (entityProperty.ClrType == typeof(decimal)
                && entityProperty.Name.Contains("Price"))
            {
                entityProperty.SetColumnType("decimal(9,2)");
            }

            if (entityProperty.ClrType == typeof(string)
                && entityProperty.Name.EndsWith("Url"))
            {
                entityProperty.SetIsUnicode(false);
            }
        }
    }
}

Those are just two rules that I use, plus one to sort out the DateTimeKind on DateTime properties whose name ends in “Utc”. I also use the Query Filter setup shown in the last article to set up all my Query Filters; that way I can’t forget to set up a Query Filter.


I have explained how EF Core builds a Model of the database you want to access using three different configuration approaches. Learning EF Core’s By Convention rules can cut out a LOT of configuration code, which saves you time and makes your configuration easier to understand.

I then talked about some of the NET types that might need some configuration code to make your database more efficient. I also touched on some of the different types of class arrangements that EF Core supports, especially the Owned Type.

Finally, I covered different ways to organise your application’s DbContext configuration, including a very helpful way to automate some of your configurations. I really like the automating configuration approach as it makes sure I haven’t missed anything, and it reduces the amount of configuration code I need to write.

Happy coding.

EF Core In depth – Soft deleting data with Global Query Filters

Last Updated: July 26, 2021 | Created: July 2, 2020

This article is about a way to seemingly delete data where, in fact, EF Core hides it for you and you can get it back if you need to. This type of feature is known as soft delete; it has many good features, but there are issues to be aware of too. In this article I use EF Core to implement the normal soft delete and a more powerful (and complex) cascade soft delete. Along the way I give you tips on how to write reusable code to speed up your development of a soft delete solution.

I do assume you know EF Core, but I start with a look at using EF Core to make sure we have the basics of deleting and soft deleting covered before looking at solutions.

This article is part of a “EF Core In depth” series. Here is the current list of articles in this series:

Other older articles in this series are

This “EF Core In depth” series is inspired by what I found while updating my book “Entity Framework Core in Action” to cover EF Core 5. I have also added a LOT of new content from my experiences of working with EF Core on client applications over the last 2½ years.

NOTE: There is a GitHub repo at https://github.com/JonPSmith/EfCore.SoftDeleteServices that contains all the code used in this article.

UPDATE – new library

I have just released the library EfCore.SoftDeleteServices which I talk about in this article. See the new article Introducing the EfCore.SoftDeleteServices library to automate soft deletes to read how this library helps you implement soft delete in your applications.

TL;DR – summary

  • You can add a soft delete feature to your EF Core application using Global Query Filters (referred to as Query Filters from now on).
  • The main benefits of using soft delete in your application are inadvertent deletes can be restored and history is preserved.
  • There are three parts to adding the soft delete feature to your application
    • Add a new soft delete property to every entity class you want to soft delete.
    • Configure the Query Filters in your application’s DbContext
    • You create code to set and reset the soft delete property.
  • You can combine soft delete with other uses of Query Filters, like multi-tenant uses, but you need to be more careful when you are looking for soft deleted entries.
  • Don’t soft delete a one-to-one entity class as it can cause problems.
  • For entity classes that have relationships you need to consider what should happen to the dependent relationships when the top entity class is soft deleted.
  • I introduce a way to implement a cascade soft delete that works for entities where you need the dependent relationships soft deleted too.

Setting the scene – why soft delete is such a good idea

When you hard delete (I use the term hard delete from now on, so it’s obvious what sort of delete I’m talking about) a row, then it’s gone from your database. Hard deleting might also hard delete rows that rely on the row you just hard deleted (known as dependent relationships). And as the saying goes, “When it’s gone, then it’s gone” – no getting it back unless you have a backup.

But nowadays we are used to “I deleted it, but I can get it back” – on Windows it’s in the recycle bin; if you deleted some text in an editor you can get it back with Ctrl-Z, and so on. Soft delete is EF Core’s version of the Windows recycle bin – the entity class (the term for classes mapped to the database via EF Core) is gone from normal usage, but you can get it back.

Two of my clients’ applications used soft delete extensively. Any “delete” the normal user did set the soft delete flag, but an admin user could reset the soft delete flag to get the item back for the user. In fact, one of my clients used the term “delete” for a soft delete and “destroy” for a hard delete. The other benefit of keeping soft-deleted data is history – you can see what changed in the past, even if it’s soft deleted. Most clients keep soft deleted data in the database for some time and only backup/remove that data many months (years?) later.

You can implement the soft delete feature using EF Core Query Filters. Query Filters are also used for multi-tenant systems, where each tenant’s data can only be accessed by users who belong to the same tenant. This means EF Core Query Filters are designed to be very secure when it comes to hiding things – in this case data that has been soft deleted.

I should also say there are some down sides of using soft delete. The main one is performance – an extra, hidden SQL WHERE clause is included in every query of entity classes using soft delete.

There is also a difference between how soft delete handles dependent relationships when compared with hard delete. By default, if you soft delete an entity class then its dependent relationships are NOT soft deleted, whereas a hard delete of an entity class would normally delete the dependent relationships. This means if I soft delete a Book entity class then the Book’s Reviews will still be visible, which might be a problem in some cases. At the end of this article I show you how to handle that and talk about a prototype library that can do cascade soft deletes.

Adding soft delete to your EF Core application

In this section I’m going to go through each of the steps to add soft delete to your application:

  1. Add soft delete property to your entity classes that need soft delete
  2. Add code to your DbContext to apply a query filter to these entity classes
  3. How to set/reset Soft Delete

In the next sections I describe these stages in detail. I assume a typical EF Core class with normal read/write properties, but you can adapt it to other entity class styles, like Domain-Driven Design (DDD) styled entity classes.

1. Adding soft delete property to your entity classes

For the standard soft delete implementation, you need a boolean flag to control soft delete. For instance, here is a Book entity with a SoftDeleted property highlighted.

public class Book : ISoftDelete
{
    public int BookId { get; set; }
    public string Title { get; set; }
    //… other properties left out to focus on Soft delete

    public bool SoftDeleted { get; set; }
}

You can tell by its name, SoftDeleted, that if it is true then the entity is soft deleted. This means when you create a new entity it is not soft deleted.

The other thing I added was the ISoftDelete interface to the Book class (line 1). This interface says the class must have a public SoftDeleted property which can be read and written to. This interface is going to make it much easier to configure the soft delete query filters in your DbContext.

2. Configuring the soft delete query filters in your DbContext

You must tell EF Core which entity classes need a query filter and provide a query which will be true if you want the entity to be seen. You can do this manually using the following code in your DbContext – see the highlighted line in the following listing.

public class EfCoreContext : DbContext
{
    public EfCoreContext(DbContextOptions<EfCoreContext> options)
        : base(options) {}
    //Other code left out to focus on Soft delete

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Other configuration left out to focus on Soft delete

        modelBuilder.Entity<Book>().HasQueryFilter(p => !p.SoftDeleted);
    }
}

That’s fine, but let me show you a way to automate adding the query filters. This uses:

  1. The modelBuilder.Model.GetEntityTypes() feature available in the OnModelCreating method
  2. A little bit of generic magic to create the correct query filter

Here are the two parts:

1. Automating the configuring of the soft delete query filters

The OnModelCreating method in your DbContext is where you can configure EF Core via what are known as Fluent API configuration commands – you saw that in the last listing. But there is also a way you can look at each entity class and decide if you want to configure it.

In the code below you can see the foreach loop that goes through each entity class in turn. You will see a test to check whether the entity class implements the ISoftDelete interface, and if it does, it calls an extension method I created to configure a query filter with the correct soft delete filter.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    //other manual configurations left out

    foreach (var entityType in modelBuilder.Model.GetEntityTypes())
    {
        //other automated configurations left out
        if (typeof(ISoftDelete).IsAssignableFrom(entityType.ClrType))
        {
            entityType.AddSoftDeleteQueryFilter();
        }
    }
}

2. Creating the AddSoftDeleteQueryFilter extension method

There are many configurations you can apply directly to the type that the GetEntityTypes method returns, but setting up the Query Filter needs a bit more work. That's because the LINQ query in the Query Filter needs the type of the entity class to create the correct LINQ expression. For this I created a small extension class that dynamically creates the correct LINQ expression to configure the Query Filter, and also adds an index on the SoftDeleted property.

public static class SoftDeleteQueryExtension
{
    public static void AddSoftDeleteQueryFilter(
        this IMutableEntityType entityData)
    {
        var methodToCall = typeof(SoftDeleteQueryExtension)
            .GetMethod(nameof(GetSoftDeleteFilter),
                BindingFlags.NonPublic | BindingFlags.Static)
            .MakeGenericMethod(entityData.ClrType);
        var filter = methodToCall.Invoke(null, new object[] { });
        entityData.SetQueryFilter((LambdaExpression)filter);
        entityData.AddIndex(entityData.FindProperty(
            nameof(ISoftDelete.SoftDeleted)));
    }

    private static LambdaExpression GetSoftDeleteFilter<TEntity>()
        where TEntity : class, ISoftDelete
    {
        Expression<Func<TEntity, bool>> filter = x => !x.SoftDeleted;
        return filter;
    }
}

I really like this because it a) saves me time, and b) means I can't forget to configure a query filter.

3. How to set/reset Soft Delete

Setting the SoftDeleted property to true is easy – the user picks an entry and clicks "Soft Delete", which sends back the entity's primary key. The code to implement that is:

var entity = context.Books.Single(x => x.BookId == id);
entity.SoftDeleted = true;
context.SaveChanges();

Resetting the SoftDeleted property is a little more complex. First, you most likely want to show the user a list of JUST the soft deleted entities – think of it as showing the trash can/recycle bin for an individual entity class type, e.g. Book. To do this you need to add the IgnoreQueryFilters method to your query, which means you will get ALL the entities, ones that aren't soft deleted and ones that are, and you then pick out the ones where the SoftDeleted property is true.

var softDelEntities = _context.Books.IgnoreQueryFilters()
    .Where(x => x.SoftDeleted)
    .ToList();

And when you get a request to reset the SoftDeleted property, it typically contains the entity class's primary key. To load that entry you need to include the IgnoreQueryFilters method in your query to get the entity class you want to reset.

var entity = context.Books.IgnoreQueryFilters()
     .Single(x => x.BookId == id);
entity.SoftDeleted = false;
context.SaveChanges();

Things to be aware of if you use Soft delete

First, I should say that Query Filters are very secure. By that I mean if the query filter returns false then that specific entity/row won't be returned by a query, a Find, an Include of a relationship, etc. You can get around it by using direct SQL, but other than that EF Core is going to hide things that you soft delete.

But there are a couple of things you do need to be aware of.

Watch out for mixing soft delete with other Query Filter usages

Query Filters are great for soft delete, but Query Filters are even better for controlling access to groups of data. For instance, say you wanted to build a web application that provides a service, like payroll, to lots of companies. In that case you need to make sure that company "A" can't see company "B"'s data, and vice versa. This type of system is called a multi-tenant application, and Query Filters are a perfect fit for it.

NOTE: See my article Part 2: Handling data authorization in ASP.NET Core and Entity Framework Core for using query filters to control access to data.

The problem is you are only allowed one query filter per entity type, so if you want to use soft delete in a multi-tenant system then you must combine both parts to form the query filter. Here is an example of what the query filter might look like:

    .HasQueryFilter(x => !x.SoftDeleted 
                       && x.TenantId == currentTenantId);
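To make that work, the DbContext needs access to the current tenant's key when the filter runs. A common pattern (this is a sketch – the Invoice entity, the ITenantId property names and how the tenant id reaches the DbContext are my assumptions, not code from this article) is to hold the tenant id in a field on the DbContext and reference it in the filter, so EF Core evaluates it per query:

```csharp
public class MultiTenantDbContext : DbContext
{
    // Set per request, e.g. from the logged-in user's claims
    private readonly string _currentTenantId;

    public MultiTenantDbContext(
        DbContextOptions<MultiTenantDbContext> options,
        string currentTenantId) : base(options)
    {
        _currentTenantId = currentTenantId;
    }

    public DbSet<Invoice> Invoices { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Both rules must live in the single query filter
        // allowed per entity type
        modelBuilder.Entity<Invoice>().HasQueryFilter(
            x => !x.SoftDeleted && x.TenantId == _currentTenantId);
    }
}
```

Because the filter captures the DbContext field, each DbContext instance filters by the tenant it was created for.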

That works fine, but when you use the IgnoreQueryFilters method, say to reset a soft deleted flag, it ignores the whole query filter, including the multi-tenant part. So, if you're not careful you could show another tenant's data too!

The answer is to build yourself an application-specific IgnoreSoftDeleteFilter method something like this.

public static IQueryable<TEntity> IgnoreSoftDeleteFilter<TEntity>(
    this IQueryable<TEntity> baseQuery, string currentTenantId)
    where TEntity : class, ITenantId
{
    return baseQuery.IgnoreQueryFilters()
        .Where(x => x.TenantId == currentTenantId);
}

This ignores all the filters and then adds back the multi-tenant part of the filter. That will make it much easier to safely handle showing/resetting soft deleted entities.

Don’t soft delete a one-to-one relationship

I was called in to help on a very interesting client system that used soft delete on every entity class. My client had found that you really shouldn't soft delete a one-to-one relationship. The problem he found was that if you soft delete a one-to-one relationship and try to add a replacement one-to-one entity, it fails. That's because a one-to-one relationship has a unique foreign key, and that is already set by the soft deleted entity, so at the database level you just can't add another one-to-one relationship because there is one already.

One-to-one relationships are rare, so this might not be a problem in your system. But if you really need to soft delete a one-to-one relationship, then I suggest turning it into a one-to-many relationship where you make sure only one of the entities has its soft delete turned off, which I cover in the next problem area.

Handling multiple versions where some are soft deleted

There are business cases where you might create an entity, then soft delete it, and then create a new version. For example, say you were creating an invoice for order 1234, then you are told the order has been stopped, so you soft delete it (that way you keep the history). Then later someone else (who doesn't know about the soft deleted version) is told to create an invoice for 1234. Now you have two versions of invoice 1234. For something like an invoice that could cause a problem business-wise, especially if someone reset the soft deleted version.

You have a few ways to handle this:

  • Add a LastUpdated property of type DateTime to your invoice entity class; the latest, non-soft-deleted entry is then the one to use.
  • Each new entry has a version number, so in our case the first invoice would be 1234-1 and the second would be 1234-2. Then, like the LastUpdated version, the invoice with the highest version number, and which is not soft deleted, is the one to use.
  • Make sure there is only one non-soft-deleted version by using a unique filtered index. This works by creating a unique index over all entries that aren't soft deleted, which means you would get an exception if you tried to reset a soft-deleted invoice while a non-soft-deleted invoice already existed. But at the same time, you can have lots of soft-deleted versions for your history. The Microsoft SQL Server, PostgreSQL and SQLite RDBMSs have this feature (PostgreSQL and SQLite call it partial indexes) and I am told you can do something like this in MySQL too. The code below is the SQL Server version of a filtered unique index.
CREATE UNIQUE INDEX UniqueInvoiceNotSoftDeleted  
ON [Invoices] (InvoiceNumber)  
WHERE SoftDeleted = 0  
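If you use EF Core migrations, you can get the same filtered unique index from the Fluent API. HasFilter takes the filter as raw SQL, so the syntax below is SQL Server specific; the Invoice entity and its InvoiceNumber property are assumed names for this example:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Invoice>()
        .HasIndex(x => x.InvoiceNumber)
        .IsUnique()
        .HasFilter("[SoftDeleted] = 0")   // SQL Server filter syntax
        .HasDatabaseName("UniqueInvoiceNotSoftDeleted");
}
```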

NOTE: For handling the exception that would happen with the unique index issue see my article called “Entity Framework Core – validating data and catching SQL errors” which shows you how to convert a SQL exception into a user-friendly error string.

What about relationships?

Up to now we have been looking at soft deleting/resetting a single entity, but EF Core is all about relationships. So, what should you do about any relationships linked to the entity class that you just soft deleted? To help us, let's look at two different relationships that have different business needs.

Relationship example 1 – A Book with its Reviews

In my book "Entity Framework Core in Action" I build a super-simple book selling web site with books, authors, reviews etc. And in that application, I can soft delete a Book. It turns out that once I soft delete the Book there really isn't another way to get to the Reviews. So, in this case I don't have to worry about the Reviews of a soft deleted book.

But to make things interesting, in chapter 5, which is about using EF Core with ASP.NET Core, I added a background task that counts the number of reviews. Here is the code I wrote to count the Reviews:

var numReviews = await context.Set<Review>().CountAsync();

This, of course, gives the same count irrespective of whether the Book is soft deleted, which is different to what happens if I hard delete the Book (because that would also delete the book's Reviews). I cover how to get around this problem later.

Relationship example 2 – A Company with its Quotes

In this example I have many companies that I sell to, and each Company has a set of Quotes we sent to that company. This is the same one-to-many relationship that the Book/Reviews has, but in this case we have a list of Companies AND a separate list of Quotes. So, if I soft delete a Company then all the Quotes attached to that company should be soft deleted too.

I have come up with three useful solutions to the two soft delete relationship examples I have just described.

Solution 1 – do nothing because it doesn’t matter

Sometimes it doesn’t matter that you soft deleted something, and its relationships are still available. Until I added the background task that counts Reviews my application worked fine if I soft deleted a book.

Solution 2 – Use the Aggregates/Root approach

The solution to the background task's Reviews count was to apply a Domain-Driven Design (DDD) approach called an Aggregate. This says that you get a grouping of entities that work together – in this case the Book, the Review, and the BookAuthor linking table to the Author. In a group like this there is a Root entity, in this case the Book.

What Eric Evans, the person who defined DDD, says is that you should always access the aggregates via the Root aggregate. There are lots of DDD reasons for saying that, but in this case it also solves our soft delete issue: if I only get the Reviews through the Book, then when the Book is soft deleted the Reviews count is gone too. So, the code below is the replacement to go in the background task's Reviews count:

var numReviews = await context.Books
                   .SelectMany(x => x.Reviews).CountAsync();

You could also do a version of the review count query to list the Quotes via the Company, but there is another option – mimicking the way that databases handle cascade deletes, which I cover next.

Solution 3 – mimicking the way that cascade deletes works

Databases have a delete setting called CASCADE, and EF Core has two DeleteBehaviours, Cascade and ClientCascade. These behaviours cause the hard delete of a row to also hard delete any rows that rely on that row. For instance, in my book-selling application the Book is what is called the principal entity, and the Review and the BookAuthor linking table are dependent entities because they rely on the Book's primary key. So, if you hard delete a Book then all the Review and BookAuthor rows linked to that Book row are deleted too. And if those dependent entities had their own dependents, then they would be deleted too – the delete cascades down all the dependent entities.

So, if we duplicate that cascade delete down the dependent entities, but setting the SoftDeleted property to true instead, then it would soft delete all the dependents too. That works, but it gets a bit more complex when you want to reset the soft delete. Read the next section for what you really need to do.

Building solution 3 – Cascade SoftDeleteService

I decided I wanted to write a service that would provide a cascade soft delete solution. Once I started to really build this, I found all sorts of interesting problems I had to solve, because when we reset the soft delete we want the related entities to come back in their original soft deleted state. It turns out that is a bit more complex, so let's first explore the problem I found with an example.

Going back to our Company/Quotes example, let's see what happens if we do cascade the setting of the SoftDeleted boolean down from the Company to the Quotes (hint – it doesn't work in some scenarios). The starting point is we have a company called XYZ, which has two quotes, XYZ-1 and XYZ-2. Then:

Action                                   Visible Company   Visible Quotes
Starting                                 XYZ               XYZ-1, XYZ-2
Soft delete the quote XYZ-1              XYZ               XYZ-2
Soft delete the company XYZ              – none –          – none –
Reset the soft delete on company XYZ     XYZ               XYZ-1 (wrong!), XYZ-2

What has happened here is that when I reset Company XYZ it also reset ALL the Quotes, and that's not the original state. It turns out we need a byte, not a boolean, so that we know what to reset and what to keep soft deleted.

What we need is a soft delete level, where the level tells you how far down the cascade this soft delete was set. Using this we can work out whether we should reset the soft delete or not. This gets pretty complex, so I have a figure that shows how it works. Light-coloured rectangles represent entities that are soft deleted, with the change from the last step shown in red.
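The rule can be sketched in plain C#. This is only an in-memory illustration of the idea with my own example classes and names, not the library's actual code: level 1 means "soft deleted directly", level 2 means "soft deleted by a cascade from its parent", and a reset only undoes entities that the cascade itself deleted.

```csharp
using System.Collections.Generic;

public class Company
{
    public byte SoftDeleteLevel { get; set; } // 0 = visible
    public List<Quote> Quotes { get; } = new List<Quote>();
}

public class Quote
{
    public byte SoftDeleteLevel { get; set; } // 0 = visible
}

public static class CascadeSoftDelete
{
    // Soft delete the company at level 1 and cascade to its quotes
    // at level 2, leaving any already-deleted quote untouched
    public static void DeleteCompany(Company company)
    {
        company.SoftDeleteLevel = 1;
        foreach (var quote in company.Quotes)
            if (quote.SoftDeleteLevel == 0)
                quote.SoftDeleteLevel = 2;
    }

    // Reset: only quotes deleted BY this cascade (level 2) come back;
    // a quote soft deleted on its own (level 1) stays deleted
    public static void ResetCompany(Company company)
    {
        company.SoftDeleteLevel = 0;
        foreach (var quote in company.Quotes)
            if (quote.SoftDeleteLevel == 2)
                quote.SoftDeleteLevel = 0;
    }
}
```

Running the XYZ example through this: soft delete XYZ-1 (level 1), cascade delete the company, then reset the company – XYZ-2 comes back but XYZ-1 correctly stays soft deleted.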

So, you can handle cascade soft deletes/resets, and it works really well. There are lots of little rules the code has to cover, like you can't start a reset of an entity if its SoftDeleteLevel isn't 1, because a higher-level entity soft deleted it, and I have tried to build in checks/workarounds for these issues.

I think this cascade soft delete approach is useful and I have built some prototype code to do this, but it’s going to take quite a bit more work to turn it into a NuGet library that can work with any system (here is my current list of things to do).

The soft delete library is out now. Here are some useful links:


Well we have well and truly looked at soft delete and what it can (and cannot) do. As I said at the beginning, I have used soft delete on two of my client’s systems and it makes so much sense to me. The main benefits are inadvertent deletes can be restored and history is preserved. The main downside is the soft delete filter might slow queries down but adding an index on the soft deleted property will help.

I know from my experience that soft delete works really well in business applications. I also know that cascade soft deletes would have helped in one of my client's systems, which had some hierarchical parts – deleting a higher level would then mark all the child parts as soft deleted too, which would make things faster when querying the data.

The EfCore.SoftDeleteServices library is out now. I had problems with the Git setup (my fault, not GitHub's fault) so I renamed the old repo to -Old and created a new EfCore.SoftDeleteServices repo. That means I lost all the stars people had added to say they wanted the library. Thanks to those who starred the old repo for your support.

Happy coding.

EF Core In depth – what happens when EF Core writes to the database?

Last Updated: July 31, 2020 | Created: June 13, 2020

This article is the second "under the hood" view of what happens when you use EF Core. The first was about reading from the database, and this article is about writing to the database. This is the CUD part of the four CRUD (Create, Read, Update, and Delete) accesses.

I do assume you know EF Core, but I start with a look at using EF Core to make sure we have the basics covered before I dive into the depths of EF Core. But this is a “deep dive” so be ready for lots of technical detail, hopefully described in a way you can understand.

This article is part of a “EF Core In depth” series. Here is the current list of articles in this series:

Other older articles in this series are

This "EF Core In depth" series is inspired by what I found while updating my book "Entity Framework Core in Action" to cover EF Core 5. I have also added a LOT of new content from my experiences of working with EF Core on client applications over the last 2½ years.

NOTE: There is a companion GitHub repo at https://github.com/JonPSmith/EfCoreinAction-SecondEdition. There are unit tests that go with the content in this article – look for unit test classes whose names start with "Ch06_".

TL;DR – summary

  • EF Core can create a new entry in the database with new or existing relationships. To do this it has to work out the right order to write out the classes so that it can fill in the foreign keys of any linked classes. This makes it easy for the developer to write out classes with complex links between them.
  • When you call the EF Core command Add to create a new entry many things happen
    • EF Core finds all the links from the added class to other classes. For each linked class it works out if it needs to create a new row in the database, or just link to an existing row in the database.
    • It also fills in any foreign key, either with the actual key of an existing row, or a pseudo-key for links to new classes.
  • EF Core can detect when you change a property in a class you read in from the database. It does this by holding a hidden copy of the class(es) read in. When you call SaveChanges it compares each class read in with its original values and only creates commands to change the specific classes/properties that were changed.
  • EF Core’s Remove method will delete the row in the database pointed to by the primary key value in the class you provide as a parameter. If the deleted class has relationships then the database can sort out what to do, but you can change the delete rules.

Setting the scene – the basics of EF Core writing to the database

TIP: If you already know EF Core then you can skip this section – it’s just an example of how you write to a database.

For the examples in my book I have created a small web application that sells books – think super-simple Amazon. In this introduction I am going to describe the database structure and then give you a simple example of writing to that database.

a. The classes/tables I’m going to work with

My Book App, as I call it, starts out in chapter 2 with the following five tables shown in the figure below. I chose this because a) it's easy to understand (thanks to Amazon etc.) and b) it has each of the basic relationships that can exist between tables.

These tables are mapped to classes with similar names, e.g. Book, BookAuthor, Author, with properties with the same names as the columns shown in the tables. I'm not going to show the classes to save space, but you can see these classes here in my GitHub repo.

b. A look at what you need to access this database via EF Core

For EF Core to write to the database you need five parts:

  1. A database server, such as SQL Server, Sqlite, PostgreSQL…
  2. A class, or classes, to map to your database – I refer to these as entity classes.
  3. A class which inherits EF Core’s DbContext class, which contains the setup/configuration of EF Core
  4. A way to create a database
  5. Finally, the commands to write to the database.

The unit test code below comes from the EfCoreinAction-SecondEdition GitHub repo and shows a simple example of writing out a book, with a review, to a database.

public void TestWriteTestDataSqliteInMemoryOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();
    using (var context = new EfCoreContext(options))
    {
        context.Database.EnsureCreated();

        //ATTEMPT
        var book = new Book
        {
            Title = "Test",
            Reviews = new List<Review>()
        };
        book.Reviews.Add(new Review { NumStars = 5 });
        context.Add(book);
        context.SaveChanges();

        //VERIFY
        var bookWithReview = context.Books
            .Include(x => x.Reviews).Single();
        bookWithReview.Reviews.Count.ShouldEqual(1);
    }
}

Now, if we link the unit test code to the list of five parts, it goes like this:

  1. A database server – I have chosen a Sqlite database server, and in this case the SqliteInMemory.CreateOptions method, which comes from my EfCore.TestSupport NuGet package, sets up a new, in-memory database (in-memory databases are great for unit testing as you can set up a new, empty database just for that test – see chapter 17 of my book for more).
  2. A class, or classes – not shown, but there is a Book entity class, with a relationship to a Review entity class.
  3. A class that inherits DbContext – the EfCoreContext class inherits the DbContext class and configures the links from the classes to the database (you can see this class here in my GitHub repo).
  4. A way to create a database – because it's a new database I use the EnsureCreated method to create the correct SQL tables, keys, indexes etc. The EnsureCreated method is used for unit tests, but for real applications you will most likely use EF Core migrations.
  5. Commands to write to the database – the calls to Add and SaveChanges:
    1. The Add method tells EF Core that a new book, with its relationships (in this case, just a Review), needs to be written to the database.
    2. The SaveChanges method then creates new rows in the Books and Review tables in the database.

The last few lines after the //VERIFY comment are some simple checks that the books have been written to the database.

In this example you added new entries (SQL command INSERT INTO) to the database, but EF Core will also handle updates and deletes to the database. The next section covers this create example and then moves onto other examples of Create, Update and Delete.

What happens when EF Core writes in the SQL database?

I'm going to start with creating a new Book entity class with one new Review entity class. I chose this as the simplest write which has a relationship. You have just seen this in the unit test above, but if you skipped that, here is the important part again:

var book = new Book
{
    Title = "Test",
    Reviews = new List<Review>()
};
book.Reviews.Add(new Review { NumStars = 1 });

To add these two linked entities to the database EF Core has to

  1. Work out in what order it should create these new rows – in this case it has to create the row in the Books table first, so that it has the primary key of the Book.
  2. Copy any primary keys into the foreign keys of any relationships – in this case it copies the Books row's primary key, BookId, into the foreign key in the new Review row.
  3. Copy back any new data created in the database so that the entity classes properly represent the database – in this case it must copy back the BookId, updating the BookId property in both the Book and Review entity classes, and the ReviewId for the Review entity class.

So, let’s see the SQL from this create, as shown in the following listing

-- first database access
-- This inserts a new row into the Books table. 
-- The database generates the Book’s primary key                            
INSERT INTO [Books] ([Description], [Title], ...)
VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6);     

-- This returns the primary key, with checks to ensure the new row was added
SELECT [BookId] FROM [Books]                        
WHERE @@ROWCOUNT = 1 AND [BookId] = scope_identity();

-- second database access
-- This inserts a new row into the Review table. 
-- The database generates the Review’s primary key
INSERT INTO [Review] ([BookId], [Comment], ...)
VALUES (@p7, @p8, @p9, @p10);

-- This returns the primary key, with checks to ensure the new row was added
SELECT [ReviewId] FROM [Review]
WHERE @@ROWCOUNT = 1 AND [ReviewId] = scope_identity();

The important point is that EF Core handles the problem of writing out the entity classes in the correct order so that it can fill in any foreign keys. This is a simple example, but for a client I had to build a very complex piece of data consisting of about 15 different entity classes – some new entity classes were added, some were updated and some removed – yet with one call to SaveChanges EF Core worked out what to do, in the right order, to update the database. So, EF Core makes writing complex data to a database easy for the developer.

I mention this because I have seen EF Core code where the developer used multiple calls of the SaveChanges method to obtain the primary key from the first create in order to set the foreign key of the related entity. For instance:

var book = new Book { Title = "Test" };
context.Add(book);
context.SaveChanges();   //first call fills in book.BookId
var review = new Review { BookId = book.BookId, NumStars = 1 };
context.Add(review);
context.SaveChanges();

That would have the same effect as the previous code, but it has a weakness – if the second SaveChanges fails, then you have a partially updated database, which might not be a problem in this case, but in my client's case (which was a security system) that could be very bad indeed!

So, the take-away from this is – you don't need to copy primary keys into foreign keys, because you can set up the navigational properties and EF Core will sort out the foreign keys for you. So, if you think you need to call SaveChanges twice, then it normally means you haven't set up the right navigational properties to handle that case.

What happens in the DbContext when EF Core writes to the database?

In the last section you saw what EF Core does at the database end, but now you are going to look at what happens inside EF Core. Most of the time you don't need to know this, but there are times when knowing it is very important – for instance, if you are catching changes during a call to SaveChanges, then you only have an entity's State before SaveChanges is called, but you only have the primary key of a newly created entity after the call to SaveChanges.

This example is a little more complex than the last one. In this example I want to show you the different way EF Core handles new instances of an entity class versus an instance of an entity class that has been read from the database. The code in the listing below creates a new Book, but with an Author that is already in the database. The code has comments saying Stage 1, Stage 2 and Stage 3, and I then describe what happens after each stage using diagrams.

var author = context.Authors.First();                //STAGE 1
var bookAuthor = new BookAuthor { Author = author };
var book = new Book
{
    Title = "Test Book",
    AuthorsLink = new List<BookAuthor> { bookAuthor }
};

context.Add(book);                                   //STAGE 2

context.SaveChanges();                               //STAGE 3


The next three figures show you what is happening inside the entity classes and their tracked data at each stage. Each figure shows the following data at the end of its stage.

  • The State of each entity instance at each stage of the process (shown above each entity class).
  • The Book and BookAuthor classes are brown to show that these are new instances of the classes and need to be added to the database, while the Author entity class is blue to represent that instance was read from the database.
  • The primary and foreign keys with the current value in brackets. If a key is (0), then it hasn’t been set yet.
  • The navigational links are shown as connections from the navigational property to the appropriate entity class that it is linked to.
  • Changes between each stage are shown by bold text or thicker lines for the navigational links.

The following figure shows the situation after Stage 1 has finished. This is your initial code that sets up a new Book entity class (left) with a new BookAuthor entity class (middle) which links the Book to an existing Author entity class (right).

The figure above shows the condition of the three entity classes after Stage 1 has finished, that is, your code has set up the entity classes to represent the new book you want to add to the database. This is the starting point before you call any EF Core methods.

The next figure shows the situation after the line context.Add(book) is executed. The changes are shown in bold, and with thick lines for the added navigational links.

You may be surprised how much happens when the Add method is executed (I was!). What it seems to be doing is getting the entities as close as possible to the state they will be in after SaveChanges is called. Here are the things that happen when the Add method is called.

It sets the State of the entity provided as a parameter to Added – in this example that is the Book entity. Then it looks at all entities linked to the entity provided as a parameter, either by navigational properties or by foreign key values. For each linked entity it does the following (NOTE: I don't know the exact order these are done in).

  • If the entity is not tracked, that is, its current State is Detached, it sets its State to Added. In this example, that is the BookAuthor entity – the Author's State isn't updated because that entity is tracked.
  • It fills in any foreign keys with the correct primary keys. If a linked primary key isn't yet available, it puts a unique negative number in the CurrentValue properties of the tracking data for the primary key and the foreign key. You can see that in the figure above.
  • It fills in any navigational properties that aren’t currently set up – shown as thick blue lines in the figure above.
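You can see those pseudo-keys yourself via the change tracker. This sketch assumes the Book entity from the listing above; after Add the key property is flagged as temporary until SaveChanges replaces it with the database-generated value:

```csharp
context.Add(book);

// Inspect the tracking data for the Book's primary key
var keyProp = context.Entry(book).Property(nameof(Book.BookId));
Console.WriteLine(keyProp.IsTemporary);   // true – only a pseudo-key so far
Console.WriteLine(keyProp.CurrentValue);  // a unique negative number

context.SaveChanges();                    // database sets the real key
Console.WriteLine(keyProp.IsTemporary);   // now false
```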

NOTE: The call of the Add method can take some time! In this example, the only entities to link to are set by your code, but Add's relational fixup stage can link to any tracked entity, so if you have lots of tracked entity classes in the current DbContext it can take a long time. There are ways you can control this, which I cover in chapter 14, EF Core performance tuning, in my book "Entity Framework Core in Action", second edition.

The final stage, stage 3, is what happens when the SaveChanges method is called, as shown in the next figure.

You saw in the "what happens in the SQL database" section that any columns that were set/changed by the database are copied back into the entity class so that the entity matches the database. In this example the Book's BookId and the BookAuthor's BookId were updated to have the key value created in the database. Also, now that all the entities involved in this database write match the database, their States are set to Unchanged.

Now that may have seemed a long explanation to something that “just works”, and many times you don’t need to know that. But when something doesn’t work correctly, or you want to do something complex like logging entity class changes, then knowing this is very useful.

What happens when EF Core updates the database?

The last example was about adding new entities to the database, but there was no update going on. So, in this section I am going to show what happens when you update something that is already in the database. The update relies on the normal query that I covered in the first article “what happens when EF Core reads from the database?”

The update is a simple one, just three lines, and it shows the three stages in the code: read, update, save.

var books = context.Books.ToList();                    //read
books.First().PublishedOn = new DateTime(2020, 1, 1);  //update
context.SaveChanges();                                 //save

The following figure shows the three stages.

As you can see, the type of query you use matters – the normal query loads the data and keeps a “tracking snapshot” of the data returned to the calling code. The returned entity classes are said to be “tracked”. If an entity class isn’t tracked, then you can’t update it.

NOTE: The Author entity class in the last section was also “tracked”. In that example the tracked state of the Author told EF Core that the Author was already in the database so that it wasn’t created again.

So, if you change any properties in the loaded, tracked entity classes, then when you call SaveChanges it compares ALL the tracked entity classes against their tracking snapshot. For each class it goes through all the properties that are mapped to the database, and any properties that link to other entity classes (known as navigational properties). This process, known as change tracking, picks up every change in the tracked entities, both in non-relational properties like Title and PublishedOn, and in navigational links, which are converted to changes to the foreign keys that link tables together.
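You can see this comparison directly via EF Core's change tracker API – a small sketch, using the Book App's entity and property names:

```csharp
var book = context.Books.First();
book.PublishedOn = new DateTime(2020, 1, 1);

var entry = context.Entry(book); // Entry() runs DetectChanges by default
// entry.State is now EntityState.Modified, and only the changed
// property is flagged as modified:
bool dateChanged  = entry.Property(nameof(Book.PublishedOn)).IsModified; // true
bool titleChanged = entry.Property(nameof(Book.Title)).IsModified;       // false
```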

In this simple example there are only four books, but in real applications you might have loaded lots of entity classes all linked to each other. At that point the comparison stage can take a while. Therefore, you should try to load only the entity classes that you need to change.

NOTE: There is an EF Core command called Update, but that is used in specific cases where you want every property/column updated. EF Core's change tracking is the default approach, as it only updates the properties/columns that have changed.
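To make that difference concrete, here is a hedged sketch of the two approaches (the bookFromClient variable is a hypothetical disconnected entity, e.g. deserialized from a WebAPI call):

```csharp
// Change tracking (default): read, modify, save –
// the SQL UPDATE only sets the changed column(s)
var book = context.Books.First();
book.PublishedOn = new DateTime(2020, 1, 1);
context.SaveChanges();

// Update method: typically used with a disconnected entity –
// EVERY mapped property is marked Modified and written to the database
context.Update(bookFromClient);
context.SaveChanges();
```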

Each update will create a SQL UPDATE command, and all these UPDATEs will be applied within a SQL transaction. Using a SQL transaction means that all the updates (and any other changes EF Core found) are applied as a group, and if any one part fails then none of the database changes in the transaction are applied. That isn't important in our simple example, but once you start changing relationships between tables it's important that they all work, or all fail.

What happens when EF Core deletes data in the database?

The last part of CRUD is delete, which in some ways is simple – you just call context.Remove(myClass) – and in other ways it's complex, e.g. what happens when you delete an entity class that another entity class relies on? I'm going to give you a quick answer to the first part, and a much longer answer to the second part.

The way to delete an entity class mapped to the database is to use the Remove method. Here is an example where I load a specific book and then delete it.

var book = context.Books
    .Single(p => p.Title == "Quantum Networking");
context.Remove(book);
context.SaveChanges();

The stages are:

  1. Load the book entity class that you want to delete. This gets all its data, but for a delete you only really need the entity class's primary key.
  2. The call to the Remove method sets the State of the book to Deleted. This information is stored in the tracking snapshot for this book.
  3. Finally, SaveChanges creates a SQL DELETE command that is sent to the database, along with any other database changes, inside a SQL transaction (see the update description for more on transactions).

That looks straightforward, but there is something going on here that is important, but not obvious from that code. It turns out that the book with the title "Quantum Networking" has some other entity classes (database: rows) linked to it – in this specific test case it has links to the following entity classes:

  • Two Reviews
  • One PriceOffer
  • One BookAuthor linking to its Author.

Now, the Reviews, PriceOffer and BookAuthor (but not the Author) entity classes are only relevant to this book – the term we use is that they are dependent on the Book entity class. So, if the Book is deleted, then these Reviews, PriceOffer, and any BookAuthor linking rows should be deleted too. If you don't delete them, then the database's links are incorrect, and a SQL database would throw an exception. So, why did this delete work?

What happens here is that the database relationships between the Books table and the three dependent tables have been set up with a delete rule called cascade delete. Here is an example of the SQL commands EF Core would produce to create the Review table.

CREATE TABLE [Review] (
    [ReviewId] int NOT NULL IDENTITY,
    [VoterName] nvarchar(max) NULL,
    [NumStars] int NOT NULL,
    [Comment] nvarchar(max) NULL,
    [BookId] int NOT NULL,
    CONSTRAINT [PK_Review] PRIMARY KEY ([ReviewId]),
    CONSTRAINT [FK_Review_Books_BookId] FOREIGN KEY ([BookId]) 
         REFERENCES [Books] ([BookId]) ON DELETE CASCADE
);

The highlighted part is the constraint (think of it as a rule) that says the review is linked to a row in the Books table via its BookId column. At the end of that constraint you will see the words ON DELETE CASCADE. That tells the database that if the book it is linked to is deleted, then this review should be deleted too. That means the delete of the Book is allowed, as all the dependent rows are deleted too.

That is very helpful, but maybe you want to change the delete rules for some reason? In chapter four of my book I add some business logic to allow a user to buy a book. I decided I didn't want to allow a book to be deleted if it existed in a customer's order. To do this I added some EF Core configuration inside the DbContext to change the delete behaviour to Restrict – see below.

public class EfCoreContext : DbContext
{
    private readonly Guid _userId;                                   

    public EfCoreContext(DbContextOptions<EfCoreContext> options)                         
        : base(options)
    { }

    public DbSet<Book> Books { get; set; }
    //… other DbSet<T>s removed

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //… other configurations removed 

        modelBuilder.Entity<LineItem>()
            .HasOne(p => p.ChosenBook) 
            .WithMany()
            .OnDelete(DeleteBehavior.Restrict);
    }
}
Once this change to the configuration is migrated to the database, the ON DELETE CASCADE setting is removed – the SQL constraint changes to NO ACTION. This means that if you try to delete a book that is in a customer's Order (which uses a LineItem table to hold each item in an order), the database returns an error, which EF Core turns into an exception.

This gives you a good idea of what is going on, but there is quite a bit more that I haven't covered here (though I do cover it in my book). Here are some delete features I have left out:

  • You can have required relationships (dependent) and optional relationships and EF Core uses different rules for each type.
  • EF Core contains DeleteBehaviors that brings some of the work that the database would do into EF Core. This is useful to avoid problems when your entity classes’ relationships are circular – some databases throw an error if they find a circular delete loop.
  • You can delete an entity class by providing the Remove method with a new, empty class with just the primary key set. That can be useful when working with a UI/WebAPI that only returns the primary key.
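The last bullet can be sketched like this (it assumes the entity's key property is settable; the bookId variable stands for a value returned from the UI/WebAPI):

```csharp
// No need to query the row first – a stub with just the key is enough
var stub = new Book { BookId = bookId };
context.Remove(stub);  // sets the stub's State to Deleted
context.SaveChanges(); // sends a DELETE using just the primary key
```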


So, I have covered the Create, Update and Delete part of the CRUD, with the previous article handling the Read part.

As you have seen, creating new data in the database using EF Core is easy to use but complex inside. You don't (usually) need to know what happens inside EF Core or the database, but having some idea is going to help you take full advantage of EF Core's cleverness.

Updates are also easy – just change a property or properties in the entity classes you read in, and when you call SaveChanges EF Core will find the changed data and build database commands to update the database to match. This works for non-relational properties, like the book Title, and for navigational properties, where you can change a relationship.

Finally, we looked at a delete. Again, easy to use, but a lot can be happening underneath. Also, look out for the next article, where I talk about what is called "soft delete". This is where you set a flag and EF Core won't see that entity class anymore – it's still in the database, but it's hidden. Many of my clients have used this because it helps if their users inadvertently delete something – they can un-(soft) delete it.

I hope you found this useful and look out for more articles in this series.

Happy coding.

EF Core In depth – what happens when EF Core reads from the database?

Last Updated: January 28, 2021 | Created: May 31, 2020

This article gives an “under the hood” view of what happens when EF Core reads in data from a database. I look at two types of database read: a normal query and a query that contains the AsNoTracking method in it. I also show how a bit of experimenting on my part solved a performance problem that one of my client’s had.

I do assume you know EF Core, but I start with a look at using EF Core to make sure we have the basics covered before I dive into the depths of EF Core. But this is a “deep dive” so be ready for lots of technical detail, hopefully described in a way you can understand.

This article is part of a “EF Core In depth” series. Here is the current list of articles in this series:

Other older articles in this series are

This "EF Core In depth" series is inspired by what I found while updating my book "Entity Framework Core in Action" to cover EF Core 5. I have also added a LOT of new content from my experiences of working with EF Core on client applications over the last 2½ years.

NOTE: There is a companion GitHub repo at https://github.com/JonPSmith/EfCoreinAction-SecondEdition. This has a simple e-commerce site called Book App that you can run. Also, there are unit tests that go with the content in this article – look for unit test classes whose names start with "Ch01_", "Ch02_", etc.

TL;DR – summary

  • EF Core has two ways to read data from the database (known as a query): a normal LINQ query and a LINQ query that contains the method AsNoTracking.
  • Both types of query return classes (referred to as entity classes) with links to any other entity classes (known as navigational properties) loaded at the same time. But how and what they are linked to is different between the two types of queries.
  • The normal query also takes a copy of any data it reads in inside the application’s DbContext – the entity classes are said to be tracked. This allows the loaded entity classes to take part in commands to update the database.
  • This normal query also has some sophisticated code called relational fixup which fills in any links between the entity classes read in, and any other tracked entities.
  • The AsNoTracking query doesn't take a copy, so it isn't tracked – this means it's faster than a normal query. This also means it won't be considered for database writes.
  • Finally, I show a little-known feature of EF Core's normal query as an example of how clever it is in linking up relationships via navigational properties.

Setting the scene – the basics of EF Core reading from the database

TIP: If you already know EF Core then you can skip this section – it’s just an example of how you read a database.

In my book I have created a small web application that sells books – think super-simple Amazon. In this introduction I am going to describe the database structure and then give you a simple example of writing to that database.

a. The classes/tables I’m going to work with

My Book App as I call it starts out in chapter 2 with the following five tables shown in the figure below. I chose this because a) the data/concepts are easy to understand because of sites like Amazon etc. and b) it has one of each of the basic relationships that can exist between tables.

These tables are mapped to classes with similar names, e.g. Book, BookAuthor, Author, with properties with the same name as the columns shown in the tables. I'm not going to show the classes because of space, but you can see these classes here in my GitHub repo.

b. A look at what you need to read this database via EF Core

For EF Core to read from the database you need five parts:

  1. A database server, such as SQL Server, Sqlite, PostgreSQL…
  2. An existing database with data in it.
  3. A class, or classes, to map to your database – I refer to these as entity classes.
  4. A class which inherits EF Core’s DbContext class, which contains the setup/configuration of EF Core
  5. Finally, the commands to read from the database.

The unit test code below comes from the EfCoreinAction-SecondEdition GitHub repo and shows a simple example of reading in a set of four Books, with their BookAuthor and Author entity classes, from an existing database.

The example database contains four books, where the first two books have the same author, Martin Fowler.

[Fact]
public void TestBookCountAuthorsOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();
    //code to set up the database with four books, two with the same Author
    using (var context = new EfCoreContext(options))
    {
        //ATTEMPT
        var books = context.Books
            .Include(r => r.AuthorsLink)
                .ThenInclude(r => r.Author)
            .ToList();
        //VERIFY
        books.Count.ShouldEqual(4);
        books.SelectMany(x => x.AuthorsLink.Select(y => y.Author))
            .Distinct().Count().ShouldEqual(3);
    }
}

Now, if we link the unit test code to the list of five parts, it goes like this:

  1. A database server – Line 5: I have chosen a Sqlite database server, and in this case the SqliteInMemory.CreateOptions method, which comes from my EfCore.TestSupport NuGet package, sets up a new, in-memory database (in-memory databases are great for unit testing, as you can set up a new, empty database just for that test – see chapter 17 of my book for more).
  2. An existing database with data – Line 6: I deal with writing to the database in the next article; for now just assume there is a database with four books, two of which have the same author.
  3. A class, or classes – not shown here, but the classes are found here; there is a Book entity class, with relationships to an Author entity class via a many-to-many linking entity class called BookAuthor.
  4. A class that inherits DbContext – Line 7: the EfCoreContext class inherits the DbContext class and configures the links from the classes to the database (you can see this class here in my GitHub repo).
  5. Commands to read from the database – Lines 10 to 13 – this is a query:
    1. Line 10 – the EfCoreContext instance called context gives you access to the database, and adding Books says you want to access the Books table.
    2. Line 11 – the Include is known as eager loading and tells EF Core that when it loads a Book it should also load all the BookAuthor entity classes that are linked to that book.
    3. Line 12 – the ThenInclude is part of the eager loading and tells EF Core that when it loads a BookAuthor it should also load the Author entity class that is linked to that BookAuthor.

The result of all of this is a set of books, with normal properties, like the Title of the Book, filled in, and the navigational properties that link the entity classes, like the AuthorsLink property in the Book, filled in with a reference to the correct instance of the entity class it links to. And the last few lines after the //VERIFY comment are some simple checks that the four books have, between them, three distinct authors.

This example is known as a query, one of the four types of database accesses, which are known as CRUD (Create, Read, Update, and Delete). I cover Create and Update in the next article.

How EF Core represents data when reading from the database

When you query a database, EF Core goes through various steps to convert the data returned by the database into entity classes with navigational links filled in. In this section we will look at those steps for two types of queries – a normal query (i.e. without AsNoTracking, also known as a read-write query) and a query with the AsNoTracking method added (known as a read-only query).

But first we look at the initial part which takes your LINQ command, converts it to the relevant commands for the database type you are using, and gets the data back. This is common to the two types of query we are going to look at. See the following figure for this first part.

There is some very complex code that converts your LINQ into database commands, but there really isn’t a lot to say other than if your LINQ can’t be translated you will get an exception from EF Core with a message that contains ‘could not be translated’. Also, when the data is coming back features like Value Converters may adapt the data.
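As a hypothetical example of a query that fails to translate: calling a local C# method inside a Where clause can't be converted to SQL, so EF Core throws:

```csharp
// A local method EF Core cannot translate into SQL
bool PublishedRecently(DateTime date)
    => date > new DateTime(2020, 1, 1);

// Throws InvalidOperationException with a message containing
// "could not be translated"
var broken = context.Books
    .Where(b => PublishedRecently(b.PublishedOn))
    .ToList();

// Inlining the comparison lets EF Core translate it to SQL
var works = context.Books
    .Where(b => b.PublishedOn > new DateTime(2020, 1, 1))
    .ToList();
```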

NOTE: In chapter 6 of my book I cover some of the more complex LINQ commands and what you should do to help EF Core to translate to database commands.

This section has shown the first part of the query, where your LINQ is turned into database commands and returns all the correct values. Now we look at the second part of the query, where EF Core takes the returned values, turns them into instances of the entity classes, and fills in any navigational properties. There are two types of queries to look at:

  1. A normal query (read-write query)
  2. An AsNoTracking query, which has the AsNoTracking method added (read-only query).

1. Normal query – a read-write query

A normal query reads in the data in such a way that the data can be edited, which is why I refer to it as a read-write query. It doesn't automatically update data (see the next article for how to write to the database), but unless your query is read-write you won't be able to update the data you have just read in.

The example I gave you in the introduction uses a normal query that reads in the four example books with the links to their authors. Here is the query code part of that example.

var books = context.Books
    .Include(r => r.AuthorsLink)
        .ThenInclude(r => r.Author)
    .ToList();

EF Core then goes through three steps to convert the returned values into entity classes with navigational properties filled in. The figure below shows the three steps and the resulting entity classes with their navigational links.

Let's look at the three steps:

  1. Create classes and fill in data. This takes the values that came back from the database and fills in the non-navigational (known as scalar) properties, fields, etc. In the Book entity class this would be properties like BookId (the Book's primary key), Title, etc. – see bottom left, light blue rectangles.
          There can be a lot of other issues here, such as how EF Core uses constructors, backing fields, shadow properties, adapting data, and client-side calculations, to name but a few. Chapters 2, 6 and 7 of my book cover these issues.
  2. Relational fixup, with Identity Resolution. The first step will have filled in the primary keys and foreign keys, which define how the data is connected. EF Core then uses these keys to set up the navigational properties between the entity classes (shown as thick blue lines in the figure).
            The second stage is called Identity Resolution, which makes sure there is only one instance of a tracked entity class with the same type and the same primary key. That's why you only get one Author for the first two books – the Author of both books is the same.
    This Identity Resolution linking feature goes beyond the entity classes just read in by the query: it looks at every tracked entity in the DbContext and fills in any navigational properties. This is a powerful feature, but if you have lots of tracked entities it can take some time – that's why the AsNoTracking query exists, to be quicker.
  3. Creating a tracking snapshot. The tracking snapshot is a copy of the entity classes that are passed back to the caller, plus other things like a link to each entity class that it shadows – each entity is said to be tracked, which means it can be used in database writes.
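You can see Identity Resolution at work in the example data – a small sketch (it relies on the test data above, where the first two books share an author):

```csharp
var books = context.Books
    .Include(b => b.AuthorsLink)
        .ThenInclude(ba => ba.Author)
    .ToList();

// Identity Resolution: one instance per database row, so the shared
// author is the SAME object instance in both books
var sameInstance = ReferenceEquals(
    books[0].AuthorsLink.First().Author,
    books[1].AuthorsLink.First().Author); // true in a normal query
```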

2. AsNoTracking query – read-only query

An AsNoTracking query is a read-only query. That means anything you read in won’t be looked at when the SaveChanges method is called. The AsNoTracking option is there because it makes the query perform better. I cover this and other differences from a normal query in the next section.

Following the example in the introduction, I alter the query code to add the AsNoTracking method (see line 2 below).

var books = context.Books
    .AsNoTracking()
    .Include(r => r.AuthorsLink)
        .ThenInclude(r => r.Author)
    .ToList();

The LINQ query goes through two of the three steps shown in the normal query figure above. The step that is left out is step 3, the tracking snapshot, and the relational fixup step is slightly different. The following figure shows the steps for an AsNoTracking query.

Let's look at the three steps:

  1. Create classes and fill in data (same as the normal query). This takes the values that came back from the database and fills in the non-navigational (known as scalar) properties, fields, etc. In the Book entity class this would be properties like BookId (the Book's primary key), Title, etc. – see bottom left, light blue rectangles.
  2. Relational fixup, different from the normal query. The first step will have filled in the primary keys and foreign keys, which define how the data is connected. EF Core can use these to fill in the navigational properties (shown as thick blue lines) between the entity classes in the query, but it does NOT look outside the query at tracked entities.
    The AsNoTracking method also turns off the Identity Resolution stage to make the query quicker (Identity Resolution can take a long time). This is why you end up with four Authors instead of the three a normal query returns.
  3. Creating a tracking snapshot. (NOT RUN)
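The difference shows up if you compare the Author instances – a sketch using the same test data (first two books share an author):

```csharp
var books = context.Books
    .AsNoTracking()
    .Include(b => b.AuthorsLink)
        .ThenInclude(ba => ba.Author)
    .ToList();

// No Identity Resolution: the shared author comes back as two
// separate instances, one per row read
var sameInstance = ReferenceEquals(
    books[0].AuthorsLink.First().Author,
    books[1].AuthorsLink.First().Author); // false in an AsNoTracking query
```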

c. AsNoTrackingWithIdentityResolution

EF Core 5 added a new method called AsNoTrackingWithIdentityResolution. This makes the returned data not tracked, but it does do Identity Resolution.
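Usage is the same as AsNoTracking – a sketch:

```csharp
var books = context.Books
    .AsNoTrackingWithIdentityResolution() // EF Core 5 and above
    .Include(b => b.AuthorsLink)
        .ThenInclude(ba => ba.Author)
    .ToList();
// Not tracked (SaveChanges ignores these entities), but a shared
// author is a single instance, as in a normal query
```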

d. Differences between normal and AsNoTracking queries

Now let’s compare the two query types and highlight the differences.

  1. The AsNoTracking query performs better. The main reason for having the AsNoTracking feature is performance. The AsNoTracking query is:
    1. Slightly faster and uses slightly less memory, because it doesn't have to create the tracking snapshot. The main performance improvement comes from not executing the Identity Resolution stage.
    2. Not having a tracking snapshot of the queried data also improves the performance of SaveChanges, because it doesn't have to inspect a tracking snapshot for changes.
  2. The AsNoTracking query's relational fixup only links entities in the query. In the normal query I already said that Identity Resolution links both to entities in the query AND to entities that are currently tracked. But the AsNoTracking query only fills in the navigational properties between entities in the query.
  3. The AsNoTracking query doesn't always represent the database relationships. Another difference in relational fixup between the two types of queries is that the AsNoTracking query uses a quicker fixup without Identity Resolution. This can produce multiple instances for the same row in the database – see the blue Author entities and comment in the bottom right of the previous figure. That difference doesn't matter if you are just showing the data to a user, but if you have business logic then the multiple instances don't correctly reflect the structure of the data and could cause problems.

Useful relational fixup feature with hierarchical data

The relational fixup step is quite clever, especially in a normal query. This allows all sorts of clever things in a normal query and I wanted to show you how I used relational fixup to solve a performance problem I had in a client’s project.

I worked for a company where a lot of their data was hierarchical, that is, a series of linked entity classes with an indeterminate depth. The problem was I had to parse the whole hierarchy before I could display it. I initially did this by eager loading the first two levels and then using explicit loading for the deeper levels. It worked, but the performance was very slow, and the database was overloaded with lots of single database accesses.

This got me thinking: if the normal query's relational fixup is so clever, could it help me improve the performance of the query? – and it could! Let me give you an example using employees in a company. The figure below shows a possible hierarchical structure of a company we want to load.

NOTE: You can see the Employee class here, but the basic idea is it has a Manager navigational property (single), which links to their boss (or null for the top person), and a WorksForMe navigational property (collection), which has all the employees that work for this employee (can be none). It also has employee info like their Name and what department(s) they work for.

You could use .Include(x => x.WorksForMe).ThenInclude(x => x.WorksForMe)… and so on, but it turns out that a single .Include(x => x.WorksForMe) is enough, as the relational fixup can work out the rest! That is surprising, but very useful.

For instance, if I wanted to select all the people that work in Development (each Employee has a property with the name WhatTheyDo with a type Role which has the department(s) they work in) I could write this code.

var devDept = context.Employees                         
    .Include(x => x.WorksForMe)                        
    .Where(x => x.WhatTheyDo.HasFlag(Roles.Development))
    .ToList();

You can add the AsNoTrackingWithIdentityResolution method to the query to obtain an untracked version of the result, but the query won't work if you use the AsNoTracking method instead.

This creates one query that loads all the employees who work in Development, and the relational fixup fills in the WorksForMe navigational property (collection) and the Manager navigational property (single) on all the returned employees. This improves both the time the query takes and reduces the load on the database server by sending only one query (compared with my original approach that used explicit loading).

NOTE: You do need to work out which relationship to Include. In this case I have a Manager navigational property (single) and a WorksForMe navigational property (collection). It turns out that including the WorksForMe property fills in both the WorksForMe collection and the Manager property. But including the Manager navigational property means that the WorksForMe collection is only created if there are entities to link to; otherwise it is null. I don't know why – that's why I test everything to see what works.


You have seen two types of queries, which I called a) a normal, read-write query, and b) an AsNoTracking, read-only query. For each query type I showed you what EF Core does "under the hood" and the structure of the data read in. And the differences in how they work show their strengths and weaknesses.

The AsNoTracking query is the obvious solution for read-only queries, as it's faster than the normal, read-write query. But you should keep in mind the limitations of its relational fixup, which can create multiple instances of a class where the database only has one row.

The normal, read-write query is the solution for loading tracked entities, which means you can use them in Create, Update and Delete database changes. The normal, read-write query does take more time and memory, but it has some useful features, such as linking automatically to other tracked instances of entity classes.

I hope you have found this article useful. You can find a much longer and more detailed version of this in chapters 1 to 6 of my book Entity Framework Core in Action, second edition.

Happy coding.

Domain-Driven Design and Entity Framework Core – two years on

Last Updated: July 31, 2020 | Created: April 4, 2020

This article is about my experiences of applying a Domain-Driven Design (DDD) approach when working with Entity Framework Core (EF Core). I have now used DDD and my supporting libraries for two years on my own projects and client projects. Here are the bigger client projects where I used a DDD approach:

  • A six-month engagement where I architected the backend of a SaaS system using ASP.NET Core/EF Core. My design used a lot of DDD concepts and my DDD libraries.
  • A four-month project to design and build an adapter between two security systems – this had complex business logic.
  • A six-month engagement on an already started ASP.NET Core/EF Core application to take it to its first release. The project used a different DDD approach to the one I usually use, which taught me a lot.

This article looks at what I have learnt along the way.

TL;DR – summary

  • I really like DDD because I know exactly where the code for a given function is, and because DDD "locks down" the data, that code is the only implementation of that function. That sentence encapsulates the primary reasons I love DDD.
  • I have found that a DDD approach still works with projects where the specification isn't nailed down, or changes as the project progresses – mainly because DDD functions are easy to find, test, and refactor.
  • But using a DDD approach does require more code to be written. The code is 'better', but using a DDD approach can slow down development. That isn't what I, or my clients, want.
  • To offset the extra code of DDD I have built two libraries – EfCore.GenericServices for calling DDD methods in the classes and EfCore.GenericBizRunner for running business logic.
  • I love my EfCore.GenericServices and use it for lots of situations. It makes working with DDD-styled classes really easy. Maybe the best library I have built so far.
  • I find that business logic ranges from something super-simple up to something super-complex. I have found that I use three different approaches depending on the type and complexity of the business logic.
  • I have found my EfCore.GenericBizRunner library is useful but a bit 'heavy', so I tend to use it only if the business logic is complicated.

Setting the scene – my approach to making DDD easier to use

I have used a DDD approach for many years, but it wasn't until EF Core came out that I felt I could build properly DDD-styled classes (known as domain entities in DDD; I use the term entity classes in C#). Once I had that I was full-on with DDD, but I found that came at a cost – it took me longer to write a DDD application than my previous approach using my EF6.x library called GenericServices.

As I wrote DDD code I looked for the repetitive code in using DDD. With my experience of writing the original GenericServices library I knew where to look, and I came out with a library called EfCore.GenericServices. This uses EF Core and supports non-DDD classes in a similar way to the original GenericServices library, by using AutoMapper's object-to-object mapping. But the extra bit is its ability to work with EF Core entity classes by providing an object-to-method-call mapping for DDD-styled classes. (I also created an EfCore.GenericBizRunner library for handling business logic, but that works the same for non-DDD and DDD approaches.)

The difference these libraries, especially EfCore.GenericServices, have made to my speed of development is massive. I would say I am back up to the speed of development I had with the non-DDD approach, but now my code is much easier to find, fix, and refactor.

NOTE: All my examples come from an ASP.NET Core application I built to go with my book, "Entity Framework Core in Action". This is an example e-commerce site that "sells" books (think super-simple Amazon). You can see a running version of this application at http://efcoreinaction.com/ and the code can be found in this GitHub repo.

Why does DDD take longer to write, and how can my library help?

Let’s really look at the extra code that DDD needs to work. In the diagram below I have a trivial requirement to update the publication date of a book.

You can immediately see that the DDD code is longer, by about 9 lines. Now you might say 9 lines isn't much, but in a real application you have hundreds, if not thousands, of different actions like this, and that builds up. Also, some of it is repetitious (and boring!), and I don't like writing repetitious code.
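For context, the DDD side of that comparison looks something like this – a minimal sketch (the method name and class shape are illustrative; the real Book class in my book has more to it):

```csharp
public class Book
{
    public int BookId { get; private set; }
    public string Title { get; private set; }
    public DateTime PublishedOn { get; private set; }

    // DDD "locks down" the data: the setters are private, so this
    // method is the only way to change the publication date
    public void UpdatePublishedOn(DateTime publishedOn)
    {
        PublishedOn = publishedOn;
    }
}
```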

My analysis showed that the process of calling a DDD method had a standard pattern:

  1. Input the data via a DTO/ViewModel
  2. Load the entity class via its primary key
  3. Call the DDD method
  4. Call SaveChanges
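The four steps above can be sketched in code. This is not the library’s actual implementation – just a minimal, self-contained illustration of the load/call/save pattern, using a dictionary as a stand-in for EF Core’s DbContext and a made-up `GenericUpdater` helper:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a DbSet: a keyed store of entities (illustration only)
public class FakeDbSet<T>
{
    private readonly Dictionary<int, T> _rows = new Dictionary<int, T>();
    public void Add(int key, T entity) => _rows[key] = entity;
    public T Find(int key) => _rows[key];
}

// A DDD-styled entity: read-only property, updated only via a method
public class Book
{
    public DateTime PublishedOn { get; private set; }
    public void UpdatePublishedOn(DateTime newDate) => PublishedOn = newDate;
}

public static class GenericUpdater
{
    // Steps 2 to 4 of the pattern, captured once instead of repeated
    // in every ASP.NET Core action
    public static T Update<T>(FakeDbSet<T> set, int key, Action<T> callDddMethod)
    {
        var entity = set.Find(key); // 2. load the entity via its primary key
        callDddMethod(entity);      // 3. call the DDD method
        // 4. a real implementation would call context.SaveChanges() here
        return entity;
    }
}
```

The caller then shrinks to one line per action, e.g. `GenericUpdater.Update(books, dto.BookId, b => b.UpdatePublishedOn(dto.NewPublishedDate))`, which is the repetition the library removes.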

So, I isolated that pattern and built a library to make it easier. Let’s now compare the UpdateDate process again, but this time using my EfCore.GenericServices to help with the DDD side – see diagram below.

Now the DDD code is shorter than the non-DDD code, and all the repetitive code has gone! You can see that the call in the ASP.NET Core action has changed, but it’s the same length. The only extra line not shown here is that you need to add ILinkToEntity<Book> to the DateDto class. ILinkToEntity<T> is an empty interface which tells EfCore.GenericServices which entity class the DTO is linked to.

Also, EfCore.GenericServices has code to handle a lot of edge cases, like what happens if the entity class isn’t found, or what happens if the data in the DTO doesn’t pass some validation checks, etc. Because it’s a library it’s worth adding all these extra features, which take out other code you might otherwise have needed to write.

The pros and cons of putting EF Core code in your entity classes

However, there is an issue with EfCore.GenericServices that I needed to handle – I can load the main entity class, but some actions work on EF Core navigational properties (basically the links to other tables), so how do I handle that? – see the EF Core docs where the term navigation properties is defined.

As an example of accessing navigational properties I want to add a Review to a Book (think Amazon reviews). The DDD approach says that a Review is dependent on the Book, so any changes to the Reviews collection should be done via a method in the Book entity class (the term for the Book/Reviews relationship in DDD is Root and Aggregates). The question is: we have the Book loaded, but we don’t have the collection of Reviews loaded, so how do we handle this?

In the first version of EfCore.GenericServices I gave the responsibility for handling the Reviews to the entity class method. This required the method to have access to the application’s DbContext, and here is one simple example of a method that adds a new Review to a Book.

public void AddReview(int numStars, string comment, string voterName, 
    DbContext context) 
{
    context.Entry(this)
        .Collection(c => c.Reviews).Load();
    _reviews.Add(new Review(numStars, comment, voterName));                            
}

NOTE: I’m not explaining why I use a backing field, _reviews, in this example. I suggest you have a look at my article “Creating Domain-Driven Design entity classes with Entity Framework Core” for why I do that.

That works, but some people don’t like having the DbContext accessible inside an entity class. For instance, one of my client’s projects used a “clean architecture” approach with DDD. That means the entity classes have no external references, so they didn’t know anything about EF Core or its DbContext.

Early in 2020 I realised I could change the EfCore.GenericServices library to load related navigational properties by providing an IncludeThen attribute which defines which navigational property(s) to load. The IncludeThen attribute is added to the DTO whose properties match the method’s parameters (see this example in one of my articles). This means I can write code in the DDD method that doesn’t need access to the application’s DbContext, as shown below.

public void AddReviewWithInclude(int numStars, string comment, string voterName)
{
    if (_reviews == null)
        throw new InvalidOperationException(
             "The Reviews collection must be loaded");
    _reviews.Add(new Review(numStars, comment, voterName));
}

Now, you might think that I would use this approach all the time, but it turns out there are some advantages to giving the DbContext to the method, as it has more control. For instance, here is another version of the AddReview method which has better performance, especially if there are lots of reviews on a book.

public void AddReview(int numStars, string comment, string voterName, 
    DbContext context = null)
{
    if (_reviews != null)    
    {
        _reviews.Add(new Review(numStars, comment, voterName));   
    }
    else if (context == null)
    {
        throw new ArgumentNullException(nameof(context), 
            "You must provide a context if the Reviews collection isn't valid.");
    }
    else if (context.Entry(this).IsKeySet)  
    {
        context.Add(new Review(numStars, comment, voterName, BookId));
    }
    else
    {
        throw new InvalidOperationException("Could not add a new review.");  
    }
}

This code is longer, mainly because it handles the situation where reviews are already loaded and does some checks to make it more secure. But the main point is that it doesn’t need to load the existing reviews to add a new review – it just adds a single review. That is MUCH faster, especially if you have hundreds of reviews.

Also, it’s not possible to think of all the things you might do and build them into a library. Having the ability to access the application’s DbContext means I have a “get out of jail” card if I need to do something that EfCore.GenericServices doesn’t handle. Therefore, I’m glad that feature is there.

But, over the last few years I have concluded that I should minimise the amount of database access code in the entity class methods. That’s because the entity class and its methods start to become a God Object, with way too much going on inside it. So, nowadays if I do need complex database work then I do it outside the entity class, either as a service or as business logic.

To sum up, there are pros and cons to allowing the DbContext to be injected into the method call. Personally, I will be using the IncludeThen version because it needs less code, but if I find a performance issue or something unusual, then I have the ability to fix the problem by adding specific EF Core code inside the entity class method.

Business logic, from the simple to the complex

Back in 2016 I wrote an article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”, and chapter 4 of my book “Entity Framework Core in Action” describes the same approach. Lots of people really liked the approach, but I fear it is overkill for some of the simpler business logic. This section gives a more nuanced description of what I do in real applications.

In the UK we have a saying, “don’t use a sledgehammer to crack a nut”, or as a software principle, KISS (Keep It Simple, Stupid). From experience working on medium-sized web apps I find there is a range of business rules:

  1. Validation checks, e.g. checking a property is in a range, which can be done by validation attributes.
  2. Super-simple business rules, e.g. doing validation checks via code, for validations that can’t be done by validation attributes.
  3. Business logic that uses multiple entity classes, e.g. building a customer order for some books.
  4. Business logic that is a challenge to write, e.g. my pricing engine example.
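As a concrete example of type 1, range and required checks can be expressed with the standard validation attributes from System.ComponentModel.DataAnnotations – the same checks ASP.NET Core runs during model binding. The DTO name and limits here are invented for illustration, and the small helper just runs the attributes manually:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Hypothetical DTO: type 1 business rules expressed as attributes
public class AddPromotionDto
{
    [Range(0.01, 1000, ErrorMessage = "The price must be between 0.01 and 1000.")]
    public decimal ActualPrice { get; set; }

    [Required(AllowEmptyStrings = false)]
    [MaxLength(200)]
    public string PromotionalText { get; set; }
}

public static class DtoValidator
{
    // Runs the validation attributes on a DTO and returns any failures
    public static List<ValidationResult> Validate(object dto)
    {
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(dto,
            new ValidationContext(dto), results, validateAllProperties: true);
        return results;
    }
}
```

In an ASP.NET Core action you would normally just check ModelState.IsValid rather than calling the Validator yourself.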

Now I will describe what I do, especially on client work where time is money.

Business logic types 1 and 2 – different types of validation

My experience is that the first two can be handled via my GenericServices library. That’s because the library can:

  1. Validate the data in any entity class that is being created or updated (this is optional, as the validation is often done in the front-end).
  2. Look for methods that return either void or IStatusGeneric. The IStatusGeneric interface allows the method to return a successful status, or a status with error messages.
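To show the idea behind a status return, here is a minimal status class of my own invention – the real IStatusGeneric lives in the GenericServices.StatusGeneric NuGet package and has more features, so treat this as a sketch of the pattern, not the library’s code:

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal version of the status-return pattern: a method either
// succeeds, or hands back one or more error messages
public class SimpleStatus
{
    private readonly List<string> _errors = new List<string>();

    public bool IsValid => !_errors.Any();
    public IReadOnlyList<string> Errors => _errors;
    public string Message { get; set; } = "Success";

    // Returns this so errors can be added fluently before returning
    public SimpleStatus AddError(string errorMessage)
    {
        _errors.Add(errorMessage);
        return this;
    }
}
```

An entity method can then build one of these, add errors for any failed rule, and return it – which is exactly the shape of the AddPromotion example shown below.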

The code below shows an example of doing a test and returning a status. This example is taken from the https://github.com/JonPSmith/EfCore.GenericServices repo. It uses a small NuGet package called GenericServices.StatusGeneric to supply the IStatusGeneric that all my libraries use.

public IStatusGeneric AddPromotion(decimal actualPrice, string promotionalText)                  
{
    var status = new StatusGenericHandler();
    if (string.IsNullOrWhiteSpace(promotionalText))
    {
        status.AddError("You must provide some text to go with the promotion.", nameof(PromotionalText));
        return status;
    }

    ActualPrice = actualPrice;  
    PromotionalText = promotionalText;

    status.Message = $"The book's new price is ${actualPrice:F}.";

    return status; 
}

Business logic type 3 – working over multiple entity classes

For this business type I tend to just create a class/method to do the job. I combine the business logic and the EF Core database accesses in the same code, because it’s quick. The downside of this approach is you have business logic mixed with database accesses, which is NOT what DDD says you should do. However, the code is in one place and DRY (only one version of this business logic exists), so if the code starts to get complex, then I can always take it up to business logic type 4.

Practically, I put the business logic in a class and register it with the dependency injection service. If there are several different business features linked to one entity/area I typically put a method for each feature, but all in one class. I also have a NuGet status library, which all my libraries use, so it’s easy for each feature to return a status if I need to.

The fact that unit testing with a real database is easy with EF Core means it’s quite possible to test your business logic.

NOTE: Some people don’t think it’s right to unit test with a real database, but I find it works for me. If you don’t like unit testing your business logic with real databases, then use the next approach, which makes it really easy to mock the database accesses.

Business logic type 4 – a challenge to write

For this business type it’s correct to apply a strong DDD approach, as shown in my article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”. That means I separate the business logic from the database access code by creating a specific repository class for just that business logic. It does take more code/time to do, but the advantages are:

  • Your business logic works on a series of in-memory classes. I find that makes writing the code much easier, as you’re not having to think about the database side at the same time.
  • If the database classes aren’t a good fit for the business logic you can create your own business-only classes. You then handle the mapping of the business-only classes to the database classes in the repository part.
  • It’s very easy to mock the database, because the business logic uses a repository pattern to handle the database accesses.

I generally use my EfCore.GenericBizRunner library with this complex type of business logic, but it can be used for business type 3 too. The library is helpful because it can adapt the input and output of the business logic, which helps to handle the mismatch between the business logic level and the front end – a bit like a mini DDD anti-corruption layer (see the article “Wrapping your business logic with anti-corruption layers”).

Summary diagram

This is a fairly long article which covers both CRUD (Create, Read, Update and Delete) functions and business logic functions. DDD doesn’t really use the terms “CRUD” or “business logic”. In DDD everything is a domain problem, which is solved by calling appropriately-named method(s) in the entity classes.

However, I still find the terms “CRUD” and “business logic” useful to categorise the functions I need to code. Here is a diagram where I try to map the complexity of the code to the ways I work.


I hope this article will help people who are starting to use DDD. Obviously, this is my approach and experience of using DDD, and there is no right answer. Someone else might use a very different approach to DDD, but hopefully we all agree that Eric Evans’s Domain-Driven Design book has been one of the key books in making all of us think more about the business (domain) needs rather than the technology.

Nowadays we build really complex applications really quickly because of the millions of libraries and the documentation that is so easy to access. But as Eric Evans said in his book, “When the domain is complex, this is a difficult task, calling for the concentrated effort of talented and skilled people”. That means we need to up our game as developers if we are going to build applications that a) work, b) perform well, and c) don’t become an unmanageable “ball of mud”.

Happy coding.

Some of the other articles I have written about Domain-Driven Design:



Improving Domain-Driven Design updates in EfCore.GenericServices

Last Updated: July 31, 2020 | Created: January 27, 2020

I love giving talks, because it makes me think about what I am talking about! The talk I gave to the Developer South Coast meetup about EF Core and Domain-Driven Design (DDD) gave me an idea for a new feature in my EfCore.GenericServices library. This article explains what this new feature is, and why people who use DDD with EF Core might find it useful.

TL;DR; – summary

  • EfCore.GenericServices is a library designed to make CRUD (Create, Read, Update, Delete) actions on an EF Core database easier to write. It works with standard classes and DDD-styled classes.
  • In a new release of EfCore.GenericServices I added a new feature to help when writing DDD methods to update relationships – known in DDD as Root and Aggregate.
  • The new approach has two advantages over the existing method in EfCore.GenericServices:
    • The update method inside your DDD class is shorter and simpler
    • The new approach removes any reference to EF Core, which works with architectural approaches that isolate the entity class from the database.

NOTE: The EfCore.GenericServices is an open-source (MIT licence) library available at https://github.com/JonPSmith/EfCore.GenericServices and on NuGet. There is an article covering all aspects of the library (apart from the 3.1.0 improvement) via this link.

Setting the scene – updating a DDD-styled class

If you are not familiar with DDD-styled classes then you might find it useful to read my article “Creating Domain-Driven Design entity classes with Entity Framework Core” first, especially the part about updating properties in a DDD-styled class.

DDD covers a very wide range of recommendations, but for this article I’m focusing on how you might update an entity class. DDD says we should not simply update various properties, but instead create methods with meaningful names to do any updates. Plus, it says we should make the class properties read-only so that the methods are the only way to do any update.

Here are two examples of the different approaches. NOTE: the dto (also known as a ViewModel) is a simple class holding the data from the user/system. It contains the key of the Book and the new publish date.

Normal (non-DDD)

var book = context.Find<Book>(dto.BookId);
book.PublishedOn = dto.NewPublishedDate;        
context.SaveChanges();

DDD style

var book = context.Find<Book>(dto.BookId);
book.UpdatePublishedOn(dto.NewPublishedDate); //the DDD method makes the update
context.SaveChanges();

The advantage of DDD is that the code is in one place, which makes it easy to find and refactor. The disadvantage is you need to write more code, which is why I created the EfCore.GenericServices library – to reduce the amount of code I have to write. Having started using a DDD approach I looked at my create/update code and I noticed a common set of steps:

  1. Load the class we want to update from the database.
  2. Call the correct method (update) or constructor (create).
  3. Call SaveChanges to update the database.

Having seen this pattern, I decided I could automate all of these steps, so I created the library EfCore.GenericServices. As well as automating create/update, the library also handles Read and Delete – all four are known as CRUD (Create, Read, Update, Delete).

The EfCore.GenericServices library offers these features for both normal classes and DDD-styled classes, but the clever bit is around calling DDD methods. For standard classes the library relies on AutoMapper with its object-to-object copying capability, but for DDD classes the library contains an object-to-method-call mapping capability. This part of the library maps properties in a DTO/ViewModel to the parameters of methods/constructors in the DDD class, and then calls that method/constructor (you can read how I map a class to a method in this GenericServices documentation section).

The idea of EfCore.GenericServices is to reduce as much as possible the code the developer has to write. The aim is to remove one of the disadvantages of DDD, i.e. that a DDD approach typically requires more code than a non-DDD approach.

UpdateAndSave of relationships is difficult

In the example above I updated a property, PublishedOn, in my Book class. That works fine, but if I want to Create/Update/Delete (CUD) an associated class (known as an Aggregate in DDD) then it gets more complicated. Typically, the update of an aggregate is done via the Root entity, e.g. a Book class might have a collection of Reviews – DDD says the methods to add/update/delete an Aggregate (Review) should be on the Root (Book).

The problem with this in EF Core is that (typically) you need to read in the current relationships before you can apply any CUD action. For example, for my Book class with a collection of Reviews, adding a new review requires loading the existing Reviews collection before we can add the new Review.

Previously, with the EfCore.GenericServices library, the update method in the entity class had to load the relationship(s). This worked by providing the EF Core DbContext as a parameter to the update method, which allowed the method to use explicit loading (or other approaches) to load the relationship. In our Book/Reviews example that meant loading the Reviews collection inside the Book class.

This works, but some people don’t like having database commands in the DDD classes. In fact my current client follows the Clean Architecture approach and has the classes that are mapped to the database at the lowest level, with all the EF Core code one level up. This means my approach doesn’t work for them.

New Feature – IncludeThen attribute!

I have just released version 3.1.0 of the EfCore.GenericServices library with a second way to do updates. This uses an IncludeThen attribute to tell the UpdateAndSave method to pre-load the relationships set by the developer in the attribute. This removes the need for the update method to load the relationship(s), which has two benefits: a) less code for the developer to write, and b) it removes the requirement for the entity class to interact with EF Core.

The new feature revolves around an IncludeThen attribute which allows the developer to define a set of relationship(s) they want loaded before the method in the DDD class is called. I give two examples of how that works below.

1. Simple example with just one .Include

The IncludeThen attribute is added to the DTO/ViewModel you send to the library’s UpdateAndSave method. In this IncludeThen attribute the developer lists the relationship(s) they want loaded before the update method in the DDD class is called. The example below will include the Reviews collection before calling the access method inside the Book class.

[IncludeThen(nameof(Book.Reviews))]
public class AddReviewWithIncludeDto : ILinkToEntity<Book>
{
    public int BookId { get; set; }
    public string VoterName { get; set; }
    public int NumStars { get; set; }
    public string Comment { get; set; }
}

The lines to point out are:

  • Line 1: The IncludeThen attribute takes a string, so I could have used “Reviews”, but I typically use the nameof operator so that the code still works if I rename the relationship property.
  • Line 2: The ILinkToEntity<Book> tells EfCore.GenericServices that the main class is Book.
  • Line 4: This provides the primary key of the book data that we want to load.
  • Lines 5 to 7: These properties should match, in type and name, the parameters of the access method we want to call in the DDD-styled class.

This example translates into the following query:

var book = context.Set<Book>()
       .Include(x => x.Reviews)
       .SingleOrDefault(x => x.BookId == dto.BookId);

This means the access method to add a new Review is shorter and simpler than the previous approach. Here is my implementation of the AddReview method when using the IncludeThen attribute.

public void AddReviewWithInclude(int numStars, string comment, string voterName)
{
    if (_reviews == null)
        //Check that the IncludeThen attribute set up the correct relationship
        throw new InvalidOperationException("Reviews collection must not be null");
    _reviews.Add(new Review(numStars, comment, voterName));
}

NOTE: I like shorter/simpler code, but the original version of the AddReview method adds a single Review to the database without loading the Reviews, so it is quicker than the Include version if there are lots of Reviews. Therefore, there is room for both approaches.

2. Example with .Include and .ThenInclude

The IncludeThen attribute is not limited to a single .Include: it has a second parameter, params string[] thenIncludeNames, to define ThenIncludes to be loaded too. There is an example below (this only shows one .ThenInclude, but you can have more).

[IncludeThen(nameof(Book.AuthorsLink), nameof(BookAuthor.Author))]
public class AddNewAuthorToBookUsingIncludesDto : ILinkToEntity<Book>
{
    public int BookId { get; set; }
    public Author AddThisAuthor { get; set; }
    public byte Order { get; set; }
}

This would translate into the following query:

var book = context.Set<Book>()
       .Include(x => x.AuthorsLink).ThenInclude(x => x.Author)
       .SingleOrDefault(x => x.BookId == dto.BookId);

You can also have multiple IncludeThen attributes on a class, e.g.

[IncludeThen(nameof(Book.Reviews))]
[IncludeThen(nameof(Book.AuthorsLink), nameof(BookAuthor.Author))]
public class AnotherDto : ILinkToEntity<Book>
{
    //… rest of code left out    
}

NOTE: You can get a more detailed description of the IncludeThen attribute on this page in the GenericServices documentation.

Updating to EfCore.GenericServices 3.1.0

If you are already using EfCore.GenericServices, then updating to version 3.1.0 shouldn’t cause any problems, but you should be aware that the status types (e.g. IStatusGeneric) now come from the NuGet library GenericServices.StatusGeneric. All the status code is now in a separate library that all my libraries use.

This has two benefits:

  1. You don’t need to include EfCore.GenericServices in the assemblies that hold the classes EF Core maps to the database; instead you can use the GenericServices.StatusGeneric library. If you are using an architecture like Clean Architecture, where you don’t want EF Core included, then swapping from EfCore.GenericServices (which includes EF Core) to the GenericServices.StatusGeneric library removes any EF Core references.
  2. This allows EfCore.GenericServices to work with my EfCore.GenericEventRunner library. Errors from the EfCore.GenericEventRunner library can be passed up to EfCore.GenericServices.


The idea I had while explaining DDD was pretty easy to implement in EfCore.GenericServices (the documentation took twice as long to update!). The change only applies to updates that need access to relationships, but if you are following DDD’s Root/Aggregate approach there can be quite a few of these.

Using the IncludeThen feature typically means that the access method is simpler and shorter, which makes me (and you!) more efficient. And developers using an architecture or approach where the entity classes have no access to, or knowledge of, EF Core (like my current client) can now use this library.

So, thank you to the Developer South Coast Meetup people for approaching me to give a talk, and for the insightful questions from the audience.

Happy coding.

A technique for building high-performance databases with EF Core

Last Updated: August 13, 2020 | Created: December 22, 2019

As the writer of the book “Entity Framework Core in Action” I get asked to build, or fix, applications that use Entity Framework Core (EF Core) so that they are “fast”. Typically, “fast” means the database queries (reads) should return quickly, which in turn improves the scalability of the database.

Over the years I have worked on a lot of databases, and I have found several ways to improve database accesses. In this article I describe a new technique I found that uses an event-driven technique to update cached values in the actual SQL database. For me this approach is robust, fairly easy to add to an existing database, and can make reads quite fast. You might want to bookmark this article in case your boss comes up late in a project and says “the application isn’t fast enough!” – this might be just the approach you need.

The other articles in this series are:

TL;DR; – summary

  • This article describes a way to improve the performance of a database query when using EF Core. For the example application the performance gain is significant.
  • The technique adds new cache values to the existing database and updates them via an event-driven approach provided by my EfCore.GenericEventRunner library.
  • It is one of the simplest ways to add/update cache values, especially if you want to apply it to an existing SQL database.
  • The article is very long because I describe both the “why” and the “how” of this approach.
  • There is an example application in my GitHub repo, EfCoreSqlAndCosmos, that you can run yourself.

Setting the scene – picking the best way to performance tune

There are some things you can do to improve a database query using EF Core – mainly it’s about writing LINQ queries that translate into efficient SQL database queries. But with big or complex databases this can only take you so far. At that point you need to think about altering the database to make some parts of the database query easier to do.

The well-known way to speed things up is to use some form of cache, i.e. a part of the query that takes a long time is pre-computed and stored so that the query can use this cached value instead. In my example I’m going to pre-compute the average review votes for a book (think Amazon’s star rating) – see this section for the performance improvements this provides. But the cached value(s) could be anything – for one of my clients it was pre-calculating the total pricing of a large and complex project.

Using cached values can make significant improvements to performance (see the example later), but there are some big downsides. Caching data is notoriously difficult to get right. The typical problem is that the cache doesn’t get updated when the original data changes. This means the cache value returns old (“stale”) data when it shouldn’t. See this recent tweet from Nick Craver, who works at Stack Overflow – his comment is “Only cache if you find you need to”, and he goes on to say “It absolutely has downsides in memory, confusion, complexity, races, etc. It’s not free.”

Therefore, I am always looking for caching designs that are simple and robust, i.e. you can be sure that the cache gets updated when the data changes. One approach I really like is using a two-database CQRS database design, which is good, but not simple (I have already written about this: see this article on performance gain, and another article on a new design using Cosmos DB). I needed a simpler solution for a client that could be added to their existing databases, which is where this new event-driven approach comes from. The rest of the article covers adding cached values to an example application and the performance gains that gave me.

Example implementation – the book app

The example application I am using is my “book app”, which is a super-simple book-selling application. I used this in my “Entity Framework Core in Action” book, so I have lots of performance data for this app. In this application I cache two sets of data:

  1. A string holding the list of authors’ names, e.g. “Erich Gamma, John Vlissides, Richard Helm, Ralph Johnson” (those names are from the famous “Design Patterns: Elements of Reusable Object-Oriented Software” book). This speeds up the display as it saves looking up each author to get their name.
  2. The Reviews information, which is the number of reviews and the average star rating across those reviews. This speeds up both the sorting/filtering of books by their average star rating and also the display of the book, because the count and average are already calculated.

Here is a diagram showing the display of one book, with the parts we are going to cache to improve the performance of the book app.

I found there are two ways you can update a calculated cache value. They are:

  1. Complete Revaluation: You can recompute the cached value from the database, e.g. ReviewsCount = Book.Reviews.Count();
  2. Delta Update: You can update the cached value by adding the change (delta) to the existing cached value, e.g. ReviewsCount = ReviewsCount + 1;

The Complete Revaluation approach is the most obvious and works for everything, but as you will see it has some issues when accessing the database. The Delta Update approach is quicker and good for mathematical data, but if you miss something in your implementation then you can get the wrong answer.
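The two approaches can be compared with the book app’s star-rating cache. This standalone snippet (no database involved, and not the app’s actual code) checks that a delta update of the count/average pair gives the same answer as recomputing from all the reviews:

```csharp
using System.Collections.Generic;
using System.Linq;

public class ReviewCache
{
    public int ReviewsCount { get; private set; }
    public double ReviewsAverageVotes { get; private set; }

    // Delta Update: adjust the cached values from the change alone,
    // without touching the underlying review rows
    public void AddReviewDelta(int numStars)
    {
        var totalStars = ReviewsAverageVotes * ReviewsCount + numStars;
        ReviewsCount += 1;
        ReviewsAverageVotes = totalStars / ReviewsCount;
    }

    // Complete Revaluation: recompute from all the underlying data
    public void RecalculateFrom(IEnumerable<int> allStars)
    {
        var list = allStars.ToList();
        ReviewsCount = list.Count;
        ReviewsAverageVotes = list.Count == 0 ? 0 : list.Average();
    }
}
```

The delta version does O(1) work per new review, while the revaluation version reads every review – which is exactly why the delta approach wins on performance but needs a revaluation-based checker as a backstop.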

NOTE: With the Delta Update approach I recommend building a service that calculates the values using the Complete Revaluation approach and checks/updates any cached values if there is a problem. You can run it when the system is lightly loaded to a) ensure the cache values are up to date and b) spot any problems in your Delta Update code.

I will describe both of these approaches, starting with the list of authors’ names, which uses the Complete Revaluation approach.

Authors string: set by Complete Revaluation approach

I used the Complete Revaluation approach to create the comma-delimited string of authors’ names. Here is a diagram giving you an overview of how it works.

Creating an event when an individual Author’s Name changes is described in this section of my previous article, so I’m going to focus first on the event handler.

NOTE: You should read the article “EfCore.GenericEventRunner: an event-driven library that works with EF Core” to understand my code as I am using the EfCore.GenericEventRunner library in my example code.

Here is the event handler code that recomputes the string of authors for each book. It’s a bit more complicated than you might think, because the changed name hasn’t yet been written to the database.

public class AuthorNameUpdatedHandler :
    IBeforeSaveEventHandler<AuthorNameUpdatedEvent>
{
    private readonly SqlEventsDbContext _context;

    public AuthorNameUpdatedHandler(SqlEventsDbContext context)
    {
        _context = context;
    }

    public IStatusGeneric Handle(EntityEvents callingEntity, 
        AuthorNameUpdatedEvent domainEvent)
    {
        foreach (var bookWithEvents in _context.BookAuthors
            .Where(x => x.AuthorId == domainEvent.ChangedAuthor.AuthorId)
            .Select(x => x.Book))
        {
            var allAuthorsInOrder = _context.Set<BookWithEvents>()
                .Where(x => x.BookId == bookWithEvents.BookId)
                .Select(x => x.AuthorsLink.OrderBy(y => y.Order)
                    .Select(y => y.Author).ToList()).Single();

            var newAuthorsOrdered = string.Join(", ", allAuthorsInOrder.Select(x =>
                x.AuthorId == domainEvent.ChangedAuthor.AuthorId
                    ? domainEvent.ChangedAuthor.Name 
                    : x.Name));

            bookWithEvents.AuthorsOrdered = newAuthorsOrdered;
        }

        return null;
    }
}

The lines to point out in the code are:

  • Lines 14 to 16: The author may have worked on multiple books, so we need to update each book’s AuthorsOrdered string. Note that the domainEvent contains an instance of the Author class whose Name has been changed.
  • Lines 18 to 21: Because the author’s new name that created this event isn’t in the database yet, we can’t just read the current Author’s name from the database. I have therefore read in all the Author classes, in the correct order, first…
  • Then in lines 23 to 26 I go through them and, when it comes to the Author that has been changed, substitute the new Name string for the existing database Name string.

This last point shows that we need to be careful about accessing the database, because the events are run just before SaveChanges and therefore some data hasn’t been saved.

Review cached values: set by the Delta Update approach

For the ReviewsCount and ReviewsAverageVotes I used the Delta Update approach, which works well with mathematical changes. Here is a diagram showing how the “add a new review” event works:

As you will see it is much quicker to calculate and doesn’t need to access the database, which also makes the code simpler. Here is the “add new review” event handler.

public class ReviewAddedHandler :
    IBeforeSaveEventHandler<BookReviewAddedEvent>
{
    public IStatusGeneric Handle(EntityEvents callingEntity, 
        BookReviewAddedEvent domainEvent)
    {
        var totalStars = Math.Round(
            domainEvent.Book.ReviewsAverageVotes * 
            domainEvent.Book.ReviewsCount)
            + domainEvent.NumStars;
        var numReviews = domainEvent.Book.ReviewsCount + 1;

        //The Book class provides this Action so that only
        //the event handler can set the two cached values
        domainEvent.UpdateReviewCachedValues(numReviews,
            totalStars / numReviews);

        return null;
    }
}

The points to note in the code are:

  • totalStars: I get the total stars applied to this book by multiplying the average star rating by the number of reviews (simple maths). I then add the delta change, which is the star rating from the new Review.
  • numReviews: I add 1 to the ReviewsCount because this is an “Add new Review” event.
  • Setting the cache values: the Book class provides an Action that can be called by the events to set the ReviewsCount and ReviewsAverageVotes. This is a nice way to control what the event handler can do within the Book class.
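The arithmetic in that handler is easy to sanity-check on its own. Here is a minimal sketch of the Delta Update calculation in Python (the class and function names are mine, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class ReviewCache:
    count: int = 0
    average: float = 0.0

def add_review(cache: ReviewCache, num_stars: int) -> ReviewCache:
    """Delta Update: derive the new cached values from the old ones
    plus the single change, without reloading all the reviews."""
    total_stars = round(cache.average * cache.count) + num_stars
    new_count = cache.count + 1
    return ReviewCache(new_count, total_stars / new_count)

# Two 4-star reviews, then add a 1-star review: average becomes (4+4+1)/3 = 3.0
cache = ReviewCache(count=2, average=4.0)
cache = add_review(cache, 1)
print(cache.count, cache.average)  # 3 3.0
```

The key point is that the new values are computed purely from the old cached values plus the one new review, which is why no database access is needed.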

Building code to check/update the cache values

As I said earlier, it is a good idea to back up the Delta Update approach with code that will recalculate the cached values using the Complete Revaluation approach. If you are adding either approach to an existing database, then you will need a tool to set up the cache values for the existing data anyway. And with a little bit more work you can use the same tool to catch any updates that are going wrong, with logging/feedback so that you can try to track down the software problem.

I always build a tool like this if I add cache values to a database. You can see my HardResetCacheService class that uses the Complete Revaluation approach to check and reset as necessary any cache values. It’s not super-fast, but you can run it when you know the system is normally lightly loaded, or manually if you think there is something wrong. Hopefully you won’t use it a lot, but if you do need it you will be very happy it’s there!
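A check/reset tool of this sort boils down to: recompute each cache value using the Complete Revaluation approach, compare it with what is stored, and overwrite (and report) any mismatch. Here is a sketch of that idea in Python (the names and data shape are mine for illustration; the real HardResetCacheService works on the EF Core entities):

```python
def check_and_reset_cache(books: list[dict]) -> list[int]:
    """Recompute each book's review cache from its actual reviews
    (Complete Revaluation) and fix any stored value that disagrees.
    Returns the ids of the books that had to be corrected."""
    fixed = []
    for book in books:
        stars = book["review_stars"]          # the source data
        true_count = len(stars)
        true_average = (sum(stars) / true_count) if true_count else 0.0
        if (book["reviews_count"], book["reviews_average"]) != (true_count, true_average):
            book["reviews_count"] = true_count
            book["reviews_average"] = true_average
            fixed.append(book["id"])          # log/report the bad entry
    return fixed

books = [
    {"id": 1, "review_stars": [5, 3], "reviews_count": 2, "reviews_average": 4.0},
    {"id": 2, "review_stars": [4], "reviews_count": 0, "reviews_average": 0.0},  # stale
]
fixed = check_and_reset_cache(books)
print(fixed)  # [2]
```

Returning the list of corrected entries gives you the logging/feedback hook mentioned above: a non-empty list means a cache update went wrong somewhere.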

Making the cache design robust

There are two parts to making the design robust, which means the cache values are always correct. As I said at the start, making sure the cache values don’t return old (“stale”) or incorrect data is a big challenge. In this design there are two things to cover:

  1. Making sure a change in the underlying data is reflected into the cache values
  2. Handling multiple updates happening in parallel.

Underlying data change always updates the cache values

The event-driven system I am using ensures that a change in any data that affects a cache value can be captured and sent to the appropriate event handler to update the cache property. Assuming you haven’t got a bug in your system, this will work.

The other part of the event-driven design is that the original data and the cache values are stored in the same transaction. This means that if anything goes wrong then neither change is saved (see more on the design of the event-driven approach to see how this works).

Handling multiple updates happening in parallel

We now need to talk about multiple updates happening in parallel, which brings in all sorts of interesting things. Say two people added a new review to the book at exactly the same time. If we don’t do anything to handle this correctly, the cache update from one of those reviews could be lost. This is known as a concurrency issue.
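The classic lost update is easy to reproduce: both updates read the same starting values, each computes its own new cache values, and the second write silently overwrites the first. A tiny Python sketch of the failure mode (names are mine, for illustration):

```python
def add_review_naive(snapshot: tuple[int, float], num_stars: int) -> tuple[int, float]:
    """Compute new (count, average) from a snapshot read earlier."""
    count, average = snapshot
    total = round(average * count) + num_stars
    return count + 1, total / (count + 1)

book = (2, 4.0)            # stored cache: 2 reviews, average 4.0
snapshot_a = book          # user A reads
snapshot_b = book          # user B reads at the same time

book = add_review_naive(snapshot_a, 1)   # A writes its result
book = add_review_naive(snapshot_b, 5)   # B overwrites A's write

# B's write was based on the stale snapshot, so A's 1-star review is lost:
print(book[0])  # 3  (should be 4)
```

This is exactly the scenario the concurrency handling below has to detect and repair.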

This is the part that took the most time to think through. I spent days thinking about all the different concurrency issues that could cause a problem, and then even more days coming up with the best way to handle those concurrency issues.

I considered doing the cache update inside a transaction, but the isolation level needed for totally accurate cache updating required ‘locking’ a lot of data. Even using direct SQL commands to calculate and update the cache isn’t safe (see this fascinating SO question/answer entitled “Is a single SQL Server statement atomic and consistent?”).

I found the best way to handle concurrency issues is to use EF Core’s concurrency tools to throw a DbUpdateConcurrencyException and then work out the correct cache value. This is probably the most complex part of the design, and it starts with the try/catch of exceptions in my EfCore.GenericEventRunner library. Here is a diagram to show you what happens if two reviews are added at the same time.

Now let’s look at the code I need to handle these types of concurrency issue.

Adding handling exceptions from SaveChanges/SaveChangesAsync

First I needed to add a way to capture exceptions when EF Core calls SaveChanges or SaveChangesAsync. I already have a pattern for doing this in my other libraries (EfCore.GenericServices and EfCore.GenericBizRunner). This allows you to add an exception handler to catch database exceptions.

Up until now this feature has been used for turning database errors into error messages that are a) user-friendly and b) don’t disclose anything about your system (see this section from my article “EF Core – validating data and catching SQL errors”). But now I needed a way to handle the DbUpdateConcurrencyException, where my code fixes the problem that caused the concurrency exception and then calls SaveChanges/SaveChangesAsync again.

To do that I added the same exception handler capability into my EfCore.GenericEventRunner library, but enhanced it for handling concurrency issues. Previously it returned either null (exception not handled, so rethrow) or a “bad status” (containing user-friendly error messages to show the user). I added a third option: return a “good status” (i.e. no errors), which means try the call to SaveChanges/SaveChangesAsync again.

This “good status” is what I use when I fix the problems with the cache values.  Here is the code in my EfCore.GenericEventRunner library that surrounds its calling of the base SaveChanges.

do
{
    try
    {
        context.SaveChanges();
        break; //This breaks out of the do/while
    }
    catch (Exception e)
    {
        var exceptionStatus = _config.SaveChangesExceptionHandler?
            .Invoke(e, context);
        if (exceptionStatus == null)
            throw; //no handler, or handler couldn’t handle this exception
        status.CombineStatuses(exceptionStatus);
    }
} while (status.IsValid);

The points to note in the code are:

  • The do/while loop: the call to SaveChanges is inside a do/while loop. This is needed because, if the SaveChangesExceptionHandler fixes a concurrency problem, SaveChanges needs to be called again to store the corrected data. And because it is possible that another concurrency issue happens on this second SaveChanges, the SaveChangesExceptionHandler may be called again.
  • The break: if the call to the base SaveChanges is successful, it exits the do/while as all is OK.
  • The throw: if there is no handler, or the handler can’t manage this exception, the exception is rethrown.
  • The while: the loop goes back and calls SaveChanges again; if there is another exception the whole process is repeated.

Using the exception handler to fix cache concurrency issues

Now that there is a way to capture an exception coming from SaveChanges/SaveChangesAsync in my EfCore.GenericEventRunner library, we can use this to capture concurrency issues around the cache values.

The first thing I need to do is tell EF Core to throw a DbUpdateConcurrencyException if it detects a concurrency issue (see previous diagram). To do this I marked the three properties with the ConcurrencyCheck attribute, as shown below.

public class BookWithEvents : EntityEvents
{
    //… other code left out

    [ConcurrencyCheck]
    public string AuthorsOrdered { get; set; }

    [ConcurrencyCheck]
    public int ReviewsCount { get; set; }

    [ConcurrencyCheck]
    public double ReviewsAverageVotes { get; set; }
    //... other code left out
}
Then I created a method called HandleCacheValuesConcurrency, which I registered with the GenericEventRunner on startup (see this documentation on how to do that, or the code in the example application). I’m not going to show my SaveChangesExceptionHandler method due to space, but you can find it here. What I do want to show you are the two parts that handle the fixing of the AuthorsOrdered string and the two Review cache values.

1. Complete Revaluation example: fixing AuthorOrdered concurrency issue

Here is the method I call from inside my HandleCacheValuesConcurrency method to handle any AuthorsOrdered concurrency issue. Its job is to work out if there was a concurrency issue with the AuthorsOrdered string and, if there was, to recalculate the AuthorsOrdered value. Here is the code:

public void CheckFixAuthorOrdered(BookWithEvents bookThatCausedConcurrency, 
    BookWithEvents bookBeingWrittenOut)
{
    var previousAuthorsOrdered = (string)_entry
        .Property(nameof(BookWithEvents.AuthorsOrdered)).OriginalValue;

    if (previousAuthorsOrdered != bookThatCausedConcurrency.AuthorsOrdered)
    {
        var allAuthorsIdsInOrder = _context.Set<BookWithEvents>()
            .Where(x => x.BookId == bookBeingWrittenOut.BookId)
            .Select(x => x.AuthorsLink.OrderBy(y => y.Order)
                .Select(y => y.AuthorId)).ToList()
            .Single();

        //Find looks in the tracked entities first, then the database
        var namesInOrder = allAuthorsIdsInOrder.Select(x => 
            _context.Find<AuthorWithEvents>(x).Name);
        var newAuthorsOrdered = namesInOrder.FormAuthorOrderedString();

        _entry.Property(nameof(BookWithEvents.AuthorsOrdered))
             .CurrentValue = newAuthorsOrdered;
        _entry.Property(nameof(BookWithEvents.AuthorsOrdered))
             .OriginalValue = bookThatCausedConcurrency.AuthorsOrdered;
    }
}

I’m not going to explain all the lines in that code (see the actual source code, which has comments), but I do want to point out that I get all the Authors’ Names using the EF Core Find method.

I use EF Core’s Find method because it works in a special way: Find a) first looks for the entity you are asking for in the tracked entities in the current DbContext instance, and if that fails to find anything then b) it looks in the database. I need this Find feature because I know at least one Author’s name has been updated (which kicked off the update of the AuthorsOrdered string) in this DbContext instance, but hasn’t yet been written to the database – that will only happen when the SaveChanges method is successful.
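That Find behaviour can be pictured as a two-level lookup: first the DbContext’s set of tracked entities, then the database. Here is a rough Python sketch of the lookup order (the class is a stand-in of my own, not EF Core’s real implementation):

```python
class FakeContext:
    """Mimics the lookup order of EF Core's Find: tracked entities
    in the current context win over what is stored in the database."""
    def __init__(self, database: dict):
        self.database = database   # committed data
        self.tracked = {}          # loaded/changed but not yet saved

    def find(self, key):
        if key in self.tracked:        # a) check tracked entities first
            return self.tracked[key]
        return self.database.get(key)  # b) fall back to the database

ctx = FakeContext(database={1: "Dino Esposito"})
ctx.tracked[1] = "Dino E. Esposito"    # renamed, but SaveChanges not yet called
print(ctx.find(1))  # Dino E. Esposito - the pending (unsaved) name wins
```

This is why Find returns the author’s new name even though the database still holds the old one.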

If you are using the Complete Revaluation approach then you also need to consider whether the database has everything you need – it most likely doesn’t, and you will need to look in the tracked entities in the current DbContext instance to find the data you need to fix the concurrency issue.

2. Delta Update example: fixing the Review cache values

Here is the method I call from inside my SaveChangesExceptionHandler to handle any Review cache value concurrency issue, i.e. the ReviewsCount and/or the ReviewsAverageVotes. Its job is to work out if there was a concurrency issue with these two cache values and, if there was, to recalculate them. Here is the code:

public void CheckFixReviewCacheValues(
    BookWithEvents bookThatCausedConcurrency, 
    BookWithEvents bookBeingWrittenOut)
{
    var previousCount = (int)_entry
        .Property(nameof(BookWithEvents.ReviewsCount)).OriginalValue;
    var previousAverageVotes = (double)_entry
        .Property(nameof(BookWithEvents.ReviewsAverageVotes)).OriginalValue;

    if (previousCount != bookThatCausedConcurrency.ReviewsCount ||
        previousAverageVotes != bookThatCausedConcurrency.ReviewsAverageVotes)
    {
        var previousTotalStars = Math.Round(previousAverageVotes * previousCount);
        var countChange = bookBeingWrittenOut.ReviewsCount - previousCount;
        var starsChange = Math.Round(bookBeingWrittenOut.ReviewsAverageVotes *
             bookBeingWrittenOut.ReviewsCount) - previousTotalStars;

        var newCount = bookThatCausedConcurrency.ReviewsCount + countChange;
        var totalStars = Math.Round(bookThatCausedConcurrency.ReviewsAverageVotes *
             bookThatCausedConcurrency.ReviewsCount) + starsChange;

        _entry.Property(nameof(BookWithEvents.ReviewsCount))
             .CurrentValue = newCount;
        _entry.Property(nameof(BookWithEvents.ReviewsAverageVotes))
             .CurrentValue = totalStars / newCount;

        _entry.Property(nameof(BookWithEvents.ReviewsCount))
             .OriginalValue = bookThatCausedConcurrency.ReviewsCount;
        _entry.Property(nameof(BookWithEvents.ReviewsAverageVotes))
             .OriginalValue = bookThatCausedConcurrency.ReviewsAverageVotes;
    }
}

Like the last concurrency method, I’m not going to explain all the lines in that code (see the source code here, which has comments), but I will talk about the differences from the AuthorsOrdered example.

The code doesn’t need to access the database as it can reverse the cache values from a) the book that caused the concurrency exception and b) the current book that was trying to update the database. From these two sources the method can a) extract the two updates and b) combine the two updates into one, which is equivalent to what would have happened if the two updates didn’t ‘collide’.

This approach follows the Delta Update approach, which allows it to fix the problem without needing to recalculate the two values by loading all the Reviews again. To my mind this is quicker, which makes it less prone to hitting another concurrency issue while you are fixing these cache values.
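The “reverse and combine” step is plain arithmetic: recover the delta my update was trying to apply (relative to the original values), then apply that delta on top of the values the competing update has already written. A Python sketch of that merge (function and variable names are mine, for illustration):

```python
def merge_review_updates(original: tuple[int, float],
                         winner: tuple[int, float],
                         mine: tuple[int, float]) -> tuple[int, float]:
    """Combine two review-cache updates that both started from `original`.
    `winner` is what the other update wrote; `mine` is what I tried to write.
    Returns (count, average) as if the updates had run one after the other."""
    orig_count, orig_avg = original
    win_count, win_avg = winner
    my_count, my_avg = mine

    orig_stars = round(orig_avg * orig_count)
    my_count_delta = my_count - orig_count                 # extract my change...
    my_stars_delta = round(my_avg * my_count) - orig_stars

    new_count = win_count + my_count_delta                 # ...and re-apply it
    new_stars = round(win_avg * win_count) + my_stars_delta
    return new_count, new_stars / new_count

# Book starts with 2 reviews averaging 3.0 (6 stars).
# The other update added a 5-star review; mine added a 1-star review.
merged = merge_review_updates((2, 3.0), (3, 11 / 3), (3, 7 / 3))
print(merged)  # (4, 3.0) - i.e. (6 + 5 + 1) / 4
```

Because the merge only needs the three (count, average) pairs, no Reviews have to be loaded from the database.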

Weighing up the improvements against effort and added complexity

I always look at a new approach and measure its success based on the gains, in this case the performance improvement, against the effort needed to achieve that performance improvement. I also look at the complexity that this new approach adds to an application, as more complexity adds to the long-term support of the application.

Performance improvements

In terms of improving the performance this is a great result. One of the key queries that I expect users of my book app to use is sort and/or filter by votes. I run this with the first 100 books being returned. I measure the performance using the Chrome browser’s developer (F12) Network page in milliseconds running on my local PC, taking the average of about 10 consecutive accesses. For comparison the viewing of the Home page, which only has text and some Razor functions, takes about 8 ms.

The chart below shows that “sort by votes” is about 15 times quicker and the “sort/filter” version is about 8 times faster. That is a very good result.

The other thing to note is the small improvement of test 1, sort by publication date. This is due to the cached AuthorsOrdered string, which removes the many-to-many join of the authors’ names for each book.

The authors’ string was a big problem in EF Core 2, with a difference of about 3 to 1 (230 ms for EF Core 2, 80 ms for the cached-value version). That’s because EF Core 3 is quicker than earlier versions of EF Core, as it combines the main book query with the many-to-many join of the authors’ names. This shows that the AuthorsOrdered cached value maybe isn’t worth keeping – the extra complexity doesn’t give a good gain in performance.

Development effort

It certainly took me a lot of time, about 40 to 50 hours, to build this example application, but that includes the setup of the new approach and all its associated parts. There was also a lot of thinking/research time to find the best way through. Next time it would be quicker.

In actual fact the first usage of this approach was for one of my clients, and I keep a detailed timesheet for all of my client work. That says it took 11 hours to add a Total Price cached value using the Delta Update approach to an existing database. I think (and I think the client does too) that 11 hours is a good price for the feature/performance gain it provided.

Added Complexity

I built a cached values version in the chapter on performance tuning in my book “Entity Framework Core in Action” (see this article I wrote that summarises that chapter). But in that case, I added the setting of the cached values into the existing business logic which made things much more complicated. This new version is much less ‘intrusive’, i.e. the cache update code is separated from the existing business logic which makes it much easier to refactor the code.

With this event-driven approach you only have to add minimal code in your classes (i.e. call an AddEvent method whenever certain events happen). Then all the complex code to update the cached values is in specific event handlers and concurrency methods. This separation makes this approach much nicer than my original version because the cache code isn’t mixed in with all of your other business logic.


I am very pleased with this new event-driven approach to improving database queries performance. I have done a lot of work on database performance, both for my book and for multiple clients, and this approach is the easiest so far. This new approach is fairly straightforward to apply to an existing database, and it keeps the cache update code separate from the rest of the business logic.

It took me ages to research and write this article – maybe 20 hours on top of the 40/50 hours for writing the code, which is very long for me. But I learnt a lot while looking for the best way to handle simultaneous updates of the same cache values – things like SQL transaction isolation levels, whether a single SQL command is atomic (it isn’t) and what to do inside a concurrency exception in EF Core. I feel quite confident to use this approach in a client’s application, in fact I am already using this approach with my current client to good effect.

The approach I covered isn’t super-simple, but I hope I have described it well enough that you can understand it and use it yourself. Please do look at the example code, which I added to my open-source EfCoreSqlAndCosmos project, and at my open-source EfCore.GenericEventRunner, which is a key part of my design but can also be useful in other situations too (see this article for more on that).

I gave you two ways to compute the cached values: Complete Revaluation and Delta Update. Which of these you use will depend on the type of data/query behind the cached value. I quite like the Delta Update as it’s so fast (which means there is minimal performance loss on the write side of your application), but some data just doesn’t fit that way of working, especially strings.

All the best with your developer journey.

EfCore.GenericEventRunner: an event-driven library that works with EF Core

Last Updated: January 27, 2021 | Created: December 1, 2019

In the first article I described an event-driven architecture that works with Entity Framework Core (EF Core). In this article I go into the details of how to use the EfCore.GenericEventRunner library that implements this event-driven design, covering the specific details of why and how to use it.

NOTE: The EfCore.GenericEventRunner is an open-source (MIT licence) NuGet library designed to work with EF Core 3 and above. You can also find the code in this GitHub repo.

The articles in this series are:

TL;DR; – summary

  • This article describes how to use the EfCore.GenericEventRunner library, available on NuGet  and on GitHub.
  • EfCore.GenericEventRunner adds a specific event-driven system to EF Core. See this article for a description of this event-driven design.
  • I break up the description into five sections
    • Code to allow your EF Core classes to send events
    • How to build event handlers
    • How the events are run when SaveChanges/SaveChangesAsync is called.
    • How to register your event handlers and GenericEventRunner itself.
    • How to unit test an application which uses GenericEventRunner.

Update: Version 2.0.0 – new features

1. Supports async event handlers

You can now define async Before/After event handlers. These only work when you call SaveChangesAsync.

NOTE: If you want to have a sync and an async version of the same event handler, you can, if you follow a simple naming rule – give your two event handlers the same name, but with “Async” on the end of the async one. GenericEventRunner will run the async version if SaveChangesAsync is called and won’t run the sync version.

2. Now supports clean code architecture

There is a very small (one class, one enum, and one interface) library called EfCore.GenericEventRunner.DomainParts that you use in the domain project. This contains the class you need to inherit in an entity class to create events.

Overview of EfCore.GenericEventRunner library

I’m going to go through the four parts of the EfCore.GenericEventRunner library (plus something on unit testing) to demonstrate how to use this library. I start with a diagram which will give you an idea of how you might use GenericEventRunner. Then I will dive into the four parts.

NOTE: If you haven’t read the first article, then I recommend you read/skim it – it will make it easier to understand what I am trying to do here.

In the diagram the blue rectangles are classes mapped to the database, with the events shown in light color at the bottom. The orange rounded rectangle is an event handler.

Here are the four parts of the library, plus a section on unit testing:

  1. ForEntities: This has the code that allows a class to contain and create events.
  2. ForHandlers: This contains the interfaces for building handlers.
  3. ForDbContext: The DbContextWithEvents<T> which contains the overriding of the SaveChanges/ SaveChangesAsync.
  4. The code for registering your event handlers and GenericEventRunner’s EventsRunner.
  5. How to unit test an application which uses GenericEventRunner (and logging).

NOTE: The code in this article is taken from the code in the EfCore.GenericEventRunner repo used to test the library. I suggest you look at that code and the unit tests to see how it works.

1. ForEntities: code for your entity classes (see DataLayer in GenericEventRunner repo)

For this example, I am going to show you how I built the “1. Create new Order” (LHS of the last diagram). The purpose of this event is to query the stock part to a) check there is enough stock to cover this order, and b) allocate some stock ready for this order.

The first thing I needed is an “allocate” event. An event is a class that implements the IDomainEvent interface. Here is my “allocate” event.

public class AllocateProductEvent : IDomainEvent
{
    public AllocateProductEvent(string productName, int numOrdered)
    {
        ProductName = productName;
        NumOrdered = numOrdered;
    }

    public string ProductName { get; }
    public int NumOrdered { get; }
}

In this case this event is sent from a new order which hasn’t been saved to the database. Therefore, I have to send the ProductName (which in my system is a unique key) and the number ordered, because it’s not (yet) in the main database. Even if the data is in the database, I recommend sending the data in the event, as a) it saves a database access and b) it reduces the likelihood of concurrency issues (I’ll talk more about concurrency issues in the next article).

The next thing is to add that event to the Order class. To be able to do that the Order class must inherit the abstract class EntityEvents, e.g.

public class Order : EntityEvents
{
    //… rest of class left out
}

The EntityEvents class provides an AddEvent method which allows you to add a new event to your entity. It also stores the events for the Event Runner to look at when SaveChanges is called. (Note that the events aren’t saved to the database – they only hang around as long as the class instance exists.)

Below is the Order constructor, with the focus on adding the AllocateProductEvent.

public Order(string userId, DateTime expectedDispatchDate,
    ICollection<BasketItemDto> orderLines)
{
    //… some other code removed

    TotalPriceNoTax = 0;
    foreach (var basketItem in orderLines)
    {
        TotalPriceNoTax += basketItem.ProductPrice * basketItem.NumOrdered;
        AddEvent(new AllocateProductEvent(
             basketItem.ProductName, basketItem.NumOrdered));
    }
}
If you don’t use DDD, then the typical way to create an event is to catch the setting of a property. Here is an example of doing that taken from the first article.

private string _county;
public string County
{
    get => _county;
    private set
    {
        if (value != _county)
            AddEvent(new LocationChangeEvent(value));
        _county = value;
    }
}
This works because the property County is changed into an EF Core backing field, and the name of the column in the table is unchanged. But because it’s now a backing field, EF Core 3 will (by default) read/write the field, not the property, which is good because otherwise loading the entity could cause an event.

NOTE: EF Core 3 default action is to read/write the field, but before EF Core 3 the default was to set via the property, which would have generated an event.

Types of events

When it comes to adding an event there are three separate lists: one for Before events, one for During events, and one for After events. The names give you a clue to when the handler is run: the Before events run before SaveChanges is called, the During events run in a transaction in which SaveChanges is called, and the After events are run after SaveChanges is called. See the diagram below for a picture of this.

NOTE: As well as the three event types, GenericEventRunner allows you to run some code just after the DetectChanges method. This is useful for detecting changes and doing something to an entity, e.g. adding a LastUpdated DateTime to every entity class that has been changed.

I cover the three types of events in the next section, but I can say that BeforeSave events are by far the most used type, so that is the default for the AddEvent method. If you want to send an event to be run after SaveChanges, then you need to add a second parameter with the type, e.g. AddEvent(…, EventToSend.AfterSave).
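The overall shape can be sketched with a tiny toy runner: the entity collects events, and the runner drains the Before events, “saves”, then drains the After events. This Python sketch is a deliberately simplified model of the flow (the names mirror the library, but the real API has statuses, DI, generics, etc.):

```python
class EntityEvents:
    """Toy version of the base class: holds events until SaveChanges."""
    def __init__(self):
        self.before_events, self.after_events = [], []

    def add_event(self, event, when="BeforeSave"):   # BeforeSave is the default
        (self.before_events if when == "BeforeSave" else self.after_events).append(event)

def save_changes(entities, handlers, log):
    for e in entities:                       # 1. run Before handlers first
        while e.before_events:
            handlers[type(e.before_events.pop(0)).__name__](log)
    log.append("SaveChanges")                # 2. write everything in one go
    for e in entities:                       # 3. After handlers run last
        while e.after_events:
            handlers[type(e.after_events.pop(0)).__name__](log)

class AllocateProductEvent: pass
class OrderDispatchedEvent: pass

order = EntityEvents()
order.add_event(AllocateProductEvent())                    # BeforeSave (default)
order.add_event(OrderDispatchedEvent(), when="AfterSave")  # AfterSave
log = []
save_changes([order], {
    "AllocateProductEvent": lambda l: l.append("allocate stock"),
    "OrderDispatchedEvent": lambda l: l.append("tell dispatch"),
}, log)
print(log)  # ['allocate stock', 'SaveChanges', 'tell dispatch']
```

The point of the sketch is the ordering: anything that must be saved with the data runs before the save; notifications run after it.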

2. ForHandlers: Building the event handlers

You need to create the event handlers to handle the events that the entity classes sent out. There are two types of event handler IBeforeSaveEventHandler<TEvent> and IAfterSaveEventHandler<TEvent>. Let me explain why I have the two types.

Before events and handlers

The BeforeSave events and handlers are all about the database. The idea is that the BeforeSave handlers can change the entity classes in the database, and those changes are saved with the original data that your normal (non-event-triggered) code set up. As I explained in the first article, saving the original data and any data changed by the event together in one transaction is safe, as the data can’t get out of step.

Typically, a BeforeSave event will be triggered when something changes, or an event happens. The handler then either does some calculation, maybe accessing the database and returns a result to be saved in the calling entity and/or it might create, update or remove (delete) some other entity classes. The data changes applied by the normal code and the data changes applied by the event handler are saved together.

BeforeSave event handlers also have two extra features:

1. Firstly, they can return an (optional) IStatusGeneric status, which can send back errors. If it returns null or a status with no errors then the SaveChanges will be called.

Here is an example of a BeforeSave event handler which was called by the AllocateProductEvent you saw before. This checks that there is enough stock to accept this order. If it returns a status with any errors, then that stops SaveChanges/ SaveChangesAsync from being called.

public class AllocateProductHandler : IBeforeSaveEventHandler<AllocateProductEvent>
{
    private readonly ExampleDbContext _context;

    public AllocateProductHandler(ExampleDbContext context)
    {
        _context = context;
    }

    public IStatusGeneric Handle(EntityEvents callingEntity,
          AllocateProductEvent domainEvent)
    {
        var status = new StatusGenericHandler();
        var stock = _context.Find<ProductStock>(domainEvent.ProductName);
        //… test to check it was found OK removed 

        if (stock.NumInStock < domainEvent.NumOrdered)
            return status.AddError(
                $"I could not accept this order because there wasn't enough {domainEvent.ProductName} in stock.");

        stock.NumAllocated += domainEvent.NumOrdered;
        return status;
    }
}

The key part of the code is the stock check: if there isn’t enough stock, the handler adds an error to the status and returns it immediately. This stops SaveChanges from being called.

The default situation is the first BeforeSave event handler that returns an error will stop immediately. If you want all the BeforeSave events to continue, say to get all the possible error messages, then you can set the StopOnFirstBeforeHandlerThatHasAnError property to false in the GenericEventRunnerConfig class provided at setup time (see 4. ForSetup: Registering service on config).

If the returned status has errors, then all the events are cleared and SaveChanges/Async isn’t called (see section “3. ForDbContext” for how these errors are returned to the application).

NOTE: Only a few of your BeforeSave handlers will need a status, so you can return null as a quick way to say there are no errors (or, more precisely, that the handler is not looking for errors). You can also return a status with no errors and set the status’s success Message string, which means that Message will be returned at the top level (assuming a later BeforeSave handler doesn’t overwrite it).

2. Secondly, BeforeSave handlers can raise more events, directly or indirectly. For instance, say an event handler changed a property that raised another event – we need to pick up that new event too. For that reason, the BeforeSave handler runner keeps looping around checking for new events until there are no more.

NOTE: There is a property in the GenericEventRunnerConfig class called MaxTimesToLookForBeforeEvents that stops circular events, e.g. an event calls something that raises the same event again, which would loop forever. If the BeforeSave handler runner loops around more than the MaxTimesToLookForBeforeEvents value (default 6) it throws an exception. See section “4. For setup” on how to change the GenericEventRunner’s configuration.
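That looping-with-a-limit behaviour can be sketched like this in Python (my own simplified model, not the library’s code): keep running Before handlers while they keep producing new events, and give up after a fixed number of passes so a circular event chain can’t loop forever.

```python
MAX_LOOPS = 6  # mirrors the default MaxTimesToLookForBeforeEvents

def run_before_handlers(pending, handlers):
    """Run handlers until no new events appear, or raise after MAX_LOOPS."""
    loops = 0
    while pending:
        loops += 1
        if loops > MAX_LOOPS:
            raise RuntimeError("Possible circular event chain detected")
        events, pending = pending, []
        for event in events:
            # a handler may itself raise new events; collect them for the next pass
            pending.extend(handlers[event]())
    return loops

handlers = {
    "PriceChanged": lambda: ["RecalcTotal"],  # raises a follow-on event
    "RecalcTotal": lambda: [],                # raises nothing further
}
passes = run_before_handlers(["PriceChanged"], handlers)
print(passes)  # 2

circular = {"A": lambda: ["A"]}  # event A re-raises itself -> exception
```

The limit turns an infinite loop bug into a clear exception at SaveChanges time.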

During events and handlers

DuringSave events are used for synchronizing your database update with another system that is separate from the database – if the other system fails, then your database update fails too. This works by creating a transaction on your EF Core database and then calling SaveChanges; if the SaveChanges was successful it then calls the other system. The Commit to the database is only called if both the SaveChanges and the other system’s call work.
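The transactional shape of a During event can be sketched as: stage the database changes, call the other system, and only commit if both succeed. A Python sketch of that idea (the dictionary “database” and function names are mine, purely to illustrate the commit/rollback flow):

```python
def save_with_during_event(db, changes, call_other_system):
    """Commit the database changes only if the other system also succeeds."""
    staged = dict(db, **changes)          # 'transaction': changes not yet visible
    try:
        call_other_system(staged)         # e.g. update Cosmos DB
    except Exception:
        return db                         # rollback: original data untouched
    return staged                         # commit: both updates succeeded

db = {"book1": "old title"}

# Other system succeeds -> the change is committed
db_ok = save_with_during_event(db, {"book1": "new title"}, lambda d: None)
print(db_ok["book1"])  # new title

# Other system fails -> the change is rolled back
def failing(_): raise IOError("other system down")
db_fail = save_with_during_event(db, {"book1": "new title"}, failing)
print(db_fail["book1"])  # old title
```

Either both systems see the update or neither does, which is exactly the guarantee a During event is for.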

As an example, in my book “Entity Framework Core in Action, second edition” I use a During event to update a Cosmos DB database whenever the information about a book has changed. (I show this approach in an older article here – I didn’t have the GenericEventRunner then).

The need for During events is rare, but when you want to make sure your database is only updated if another system confirms it is OK to go ahead, then During events are the robust way to achieve this.

NOTE: Jimmy Bogard wrote an excellent series called “Life Beyond Distributed Transactions: An Apostate’s Implementation”. In the 8th article in the series he talks about using a transaction in a SQL database to ensure the second update is done before exiting. Jimmy is very clear that too many people ignore these errors – as he says in his tweet, “hope is not a strategy”!

Here is an example of a DuringSave event handler that updates the Cosmos DB when a book’s information changes. This is an async handler because Cosmos DB only works reliably with async calls.

public class BookChangeHandlerAsync 
    : IDuringSaveEventHandlerAsync<BookChangedEvent> 
{
    private readonly IBookToCosmosBookService _service; 

    public BookChangeHandlerAsync(
        IBookToCosmosBookService service)
    {
        _service = service;
    }

    public async Task<IStatusGeneric> HandleAsync( 
        object callingEntity, BookChangedEvent domainEvent, 
        Guid uniqueKey)
    {
        var bookId = ((Book)callingEntity).BookId; 
        switch (domainEvent.BookChangeType) 
        {
            case BookChangeTypes.Added:
                await _service.AddCosmosBookAsync(bookId);
                break;
            //... the update and delete versions are left out
        }

        return null; 
    }
}

The During event handler returns an IStatusGeneric result like the Before event handlers – if it returns null or a status with no errors, then the Commit will be called; otherwise the transaction is rolled back, so the database is not updated.
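For the error case, a handler can build a status and add an error to it. Here is a hedged sketch using the StatusGenericHandler class from the StatusGeneric library that these statuses come from; the updateSucceeded flag and the error text are made up for illustration:

```csharp
//Returning an error status from a During (or Before) event handler:
//any error in the returned status stops the Commit/database update.
IStatusGeneric ReportOutcome(bool updateSucceeded)
{
    var status = new StatusGenericHandler();
    if (!updateSucceeded)
        status.AddError("Could not update the Cosmos DB copy of the book.");
    return status; //null, or a status with no errors, means "OK to commit"
}
```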

After events and handlers

AfterSave events are there to do things once SaveChanges is successful and you know the data is OK. Typical uses are clearing a cache because certain data has changed, or using SignalR to update a screen with the changed data. Unlike the BeforeSave events, the event runner only looks at the events in the entity classes once, so AfterSave event handlers can’t trigger new events.

Here is an example of an AfterSaveEventHandler that would send an internal message to the dispatch department once an Order is successfully placed in the database.

public class OrderReadyToDispatchAfterHandler :
    IAfterSaveEventHandler<OrderReadyToDispatchEvent>
{
    public void Handle(EntityEvents callingEntity,
        OrderReadyToDispatchEvent domainEvent)
    {
        //Send message to dispatch that order has been checked and is ready to go
    }
}

AfterSave event handlers aren’t “safe” like the BeforeSave events, in that if they fail the database update has already been done and can’t be undone. Therefore, you want to make sure your AfterSave event handlers aren’t going to cause exceptions. They also shouldn’t update the database (that’s the job of the BeforeSave event handlers).

AfterSave event handlers also don’t return any status, so you can’t know if they worked or not (see section “4. ForSetup” for one way to check that an AfterSave event handler ran).

3. ForDbContext: Overriding EF Core’s base SaveChanges/SaveChangesAsync

To make this all work, GenericEventRunner needs to override the base SaveChanges/SaveChangesAsync methods. The GenericEventRunner library provides a class called DbContextWithEvents<T>, which contains overrides for SaveChanges/SaveChangesAsync and two extra versions called SaveChangesWithStatus/SaveChangesWithStatusAsync that return a status. Here is my ExampleDbContext that I use for unit testing GenericEventRunner.

public class ExampleDbContext
    : DbContextWithEvents<ExampleDbContext>
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<LineItem> LineItems { get; set; }
    public DbSet<ProductStock> ProductStocks { get; set; }
    public DbSet<TaxRate> TaxRates { get; set; }

    public ExampleDbContext(DbContextOptions<ExampleDbContext> options,
        IEventsRunner eventRunner = null)
        : base(options, eventRunner)
    { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<ProductStock>().HasKey(x => x.ProductName);
    }
}

Line 2 is the only change in your DbContext. Instead of inheriting DbContext, you inherit GenericEventRunner’s DbContextWithEvents<T>, where T is your DbContext class. This overrides SaveChanges/SaveChangesAsync and adds some other methods, plus the IStatusGeneric<int> StatusFromLastSaveChanges property.

For people who are already overriding SaveChanges, you can still layer the DbContextWithEvents<T> class on top of your SaveChanges method, which GenericEventRunner will override and call at the appropriate time. If you want to customise your DbContext, then the methods used in the DbContextWithEvents<T> class are public, so you can use them directly. This allows you to reconfigure the GenericEventRunner SaveChanges/SaveChangesAsync to suit your system.

What happens if BeforeSave or DuringSave event handlers send back an error?

As I said earlier, if the Before or During event handlers return an error, the database is not updated, but you most likely want to get the error messages, which are designed to be shown to the user. I expect most developers to call SaveChanges/SaveChangesAsync, so GenericEventRunner throws a GenericEventRunnerStatusException if the combined statuses of all the BeforeSave handlers contain any errors. You can then get the errors in two ways:

  • The Message property of the GenericEventRunnerStatusException contains a string starting with an overall message and then each error (separated by the Environment.NewLine characters). This returns just the error text, not the full ValidationResult.
  • For a more detailed error response you can access the IStatusGeneric<int> StatusFromLastSaveChanges property in the DbContext. This provides you with access to the Errors list, where each error has an ErrorResult of type ValidationResult, where you can specify the exact property that caused a problem.

NOTE: The IStatusGeneric<int> StatusFromLastSaveChanges property will be null if SaveChanges hasn’t yet been called.

The alternative is to call the SaveChangesWithStatus/SaveChangesWithStatusAsync methods directly. That way you can get the status directly, without having to use a try/catch. This makes getting the status easier, but if you have a lot of existing code that already calls SaveChanges/SaveChangesAsync, then it’s most likely best to stay with SaveChanges/SaveChangesAsync and capture the exception where you need to.
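Here is a sketch of the two approaches side by side (hedged – the error-display code is just for illustration, and your application would report the errors to the user instead):

```csharp
//Option 1: call SaveChanges and catch the exception thrown when
//any BeforeSave handler returns an error.
try
{
    context.SaveChanges();
}
catch (GenericEventRunnerStatusException)
{
    //The detailed errors, each with a ValidationResult, are here
    var status = context.StatusFromLastSaveChanges;
    foreach (var error in status.Errors)
        Console.WriteLine(error.ToString());
}

//Option 2: call the status-returning version - no try/catch needed
var saveStatus = context.SaveChangesWithStatus();
if (saveStatus.HasErrors)
    Console.WriteLine(saveStatus.GetAllErrors());
```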

What is the state of the current DbContext when there are exceptions?

We need to consider what state the DbContext is in when there are exceptions. Here is the list:

  • Exceptions before SaveChanges is called (other than GenericEventRunnerStatusException): In this state there may be pending changes in the DbContext, and any events are still there. Therefore, you need to be very careful if you want to call SaveChanges again (Note: this isn’t much different from what happens if you don’t have events – you don’t really know what state the DbContext is in after an exception, and you should not try to call SaveChanges).
  • Exceptions during SaveChanges, e.g. DbUpdateConcurrencyException. If you get an exception during SaveChanges itself, then it’s something about the database. The DbContext will have all the data ready to retry the SaveChanges, if you can “fix” the problem. If you call SaveChanges again (after fixing it) and it succeeds, then all the BeforeSave events have been cleared (because they have already been applied to the DbContext), but any AfterSave events are still there and will run.
  • Exceptions after SaveChanges was called. If there aren’t any During events, then the database is up to date. But if there are During events, then whether the database was updated depends on whether the During events were successful.
  • Exceptions in During events. Any exceptions within the transaction will stop the database update.

4. ForSetup: Registering your event handlers

The final stage is to register all your event handlers, and the EventsRunner from the GenericEventRunner library. This is done using the extension method called RegisterGenericEventRunner. There are two signatures for this method. Both need an array of assemblies that it needs to scan to find your BeforeSave/AfterSave event handlers, but one takes a first parameter of type IGenericEventRunnerConfig by which you can change the GenericEventRunner default configuration. Here is an example in ASP.NET Core without a config.

public void ConfigureServices(IServiceCollection services)
{
    //… other service registrations left out

    services.RegisterGenericEventRunner(
        Assembly.GetAssembly(typeof(OrderCreatedHandler)));
}

  • You can provide multiple assemblies to scan.
  • If you don’t provide any assemblies it will scan the calling assembly.
  • If its scan doesn’t find any AfterSave event handlers, then it sets the NotUsingAfterSaveHandlers config property to true (this saves time in SaveChanges/SaveChangesAsync).
  • See the documentation on registering the GenericEventRunner for more info.

NOTE: If you send an event that hasn’t got a registered handler, then you will get a GenericEventRunnerException at run time.

There are two ways to configure GenericEventRunner and the event handlers at startup.

  1. You can provide a GenericEventRunnerConfig class as the first parameter to RegisterGenericEventRunner. With this you can change the default settings of various parts of GenericEventRunner (see the config class for what features it controls).
  2. There is an EventHandlerConfig attribute which you can add to an event handler class. With this you can set the lifetime of the handler. The default is transient.

NOTE: The ability to change the lifetime of an event handler is there in case you need to communicate with an event handler in some way, e.g. to check that an AfterSave event handler has run properly. In this case you could set the event handler’s lifetime to “Scoped” and use DI to inject the same handler into your code. (This is advanced stuff – be careful!)
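As a sketch of option 1, you could build a GenericEventRunnerConfig and pass it as the first parameter to RegisterGenericEventRunner. I only set the one config property this article has mentioned, and OrderCreatedHandler stands in for one of your own handlers:

```csharp
//Change the GenericEventRunner defaults via a config class (option 1)
var config = new GenericEventRunnerConfig
{
    //Raise the circular-event limit from its default of 6
    MaxTimesToLookForBeforeEvents = 10
};

//The config goes first, followed by the assemblies to scan for handlers
services.RegisterGenericEventRunner(config,
    Assembly.GetAssembly(typeof(OrderCreatedHandler)));
```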

5. Unit Testing applications which use GenericEventRunner

I recommend unit testing your events system, because if you haven’t provided an event handler you will get a runtime exception. Setting up the system to test events is a little complex because GenericEventRunner uses dependency injection (DI). I have therefore built some code you might find useful in unit tests.

The class called SetupToTestEvents in GenericEventRunner’s Test assembly contains an extension method called CreateDbWithDiForHandlers that registers your event handlers and returns an instance of your DbContext, with the required EventsRunner, to use in your unit tests. Here is an example of how you would use it in a unit test.

[Fact]
public void TestOrderCreatedHandler()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<ExampleDbContext>();
    var context = options.CreateDbWithDiForHandlers
        <ExampleDbContext, OrderCreatedHandler>();

    context.Database.EnsureCreated();
    //… seeding of the ProductStocks data left out

    var itemDto = new BasketItemDto
    {
        ProductName = context.ProductStocks.OrderBy(x => x.NumInStock).First().ProductName,
        NumOrdered = 2,
        ProductPrice = 123
    };

    //ATTEMPT
    var order = new Order("test", DateTime.Now, new List<BasketItemDto> { itemDto });
    context.Add(order);
    context.SaveChanges();

    //VERIFY
    order.TotalPriceNoTax.ShouldEqual(2 * 123);
    order.GrandTotalPrice.ShouldEqual(order.TotalPriceNoTax * (1 + order.TaxRatePercent / 100));
    context.ProductStocks.OrderBy(x => x.NumInStock).First().NumAllocated.ShouldEqual(2);
}

The lines of code to highlight are:

  • Line 5: You create your database options. In this case I am using a method in my EfCore.TestSupport library to create an in-memory Sqlite database, but it could be any type of database.
  • Line 6 and 7: This is where I call the CreateDbWithDiForHandlers extension method which needs two types:
    • TContext: This is your DbContext class
    • THandler: This should be one of your event handlers. This is used to find the assembly that GenericEventRunner needs to scan for your event handlers, so that it can register them in DI. (It also registers any event handlers in the executing assembly – that allows you to add extra handlers for unit testing.)

The CreateDbWithDiForHandlers extension method has some useful optional parameters – have a look at the code to see what they provide.

NOTE: I didn’t include the SetupToTestEvents class in the EfCore.GenericEventRunner library because it uses code from my EfCore.TestSupport library. You will need to copy it by hand from the GitHub repo into your unit test assembly.


6. Logging what the event handlers are doing

The GenericEventRunner’s EventsRunner logs each event handler before it is run. The log message starts with a prefix:

  • First letter: ‘A’ for AfterSave event handlers and ‘B’ for BeforeSave event handlers
  • Then a number: this shows which loop it was run in, e.g. 1, 2, 3 etc. (remember, BeforeSave handlers can create new events, which need another loop around to find them). This is generally useful to see which events are fired when.

Here is an example from one of my GenericEventRunner unit tests. Notice that the last log message starts with “B2”, which means it must have been triggered by a change caused by one of the event handlers that ran in the first (i.e. “B1”) event loop.

"B1: About to run a BeforeSave event handler …OrderCreatedHandler."
"B1: About to run a BeforeSave event handler …AllocateProductHandler."
"B2: About to run a BeforeSave event handler …TaxRateChangedHandler."

Also, the unit test CreateDbWithDiForHandlers method allows you to capture logs, which can be useful when testing that event handlers run at the correct time.


Conclusion

Well done for getting here! It’s a long article, but I hope it told you “why” as well as “how” to use the EfCore.GenericEventRunner library. If you are thinking of using this library, I recommend inspecting/cloning the EfCore.GenericEventRunner GitHub repo and looking at the examples and the unit tests to see how it works.

While this library is new, I have been working on a similar system in my client’s application for some time. That means the features and approach of this library have been proven in the real world. In fact, the AfterSave events were added to help deal with some issues that cropped up in the client’s original implementation.

The third article in this series is “A technique for building high-performance databases with EF Core”, which uses this event library to improve the performance of my “book app”. I built a similar version in chapter 13 of my book “Entity Framework Core in Action”, but in my opinion the new version that uses events is much better. (You can see the original article from the book, called “Entity Framework Core performance tuning – a worked example”.)

Happy coding.