EF Core In depth – what happens when EF Core writes to the database?

Last Updated: June 13, 2020 | Created: June 13, 2020

This article is the second “under the hood” view of what happens when you use EF Core. The first was about reading from the database, and this article is about writing to the database. This covers the CUD part of the four CRUD (Create, Read, Update, and Delete) accesses.

I do assume you know EF Core, but I start with a look at using EF Core to make sure we have the basics covered before I dive into the depths of EF Core. But this is a “deep dive” so be ready for lots of technical detail, hopefully described in a way you can understand.

This article is part of a “EF Core In depth” series. Here is the current list of articles in this series:

This “EF Core In depth” series is inspired by what I found while updating my book “Entity Framework Core in Action” to cover EF Core 5. I have also added a LOT of new content from my experiences of working with EF Core on client applications over the last 2½ years.

NOTE: There is a companion GitHub repo at https://github.com/JonPSmith/EfCoreinAction-SecondEdition. There are unit tests that go with the content in this article – look for unit test classes whose names start with “Ch06_”.

TL;DR – summary

  • EF Core can create a new entry in the database with new or existing relationships. To do this it has to work out the right order to write out the classes so that it can set the foreign keys of any linked classes. This makes it easy for the developer to write out classes with complex links between them.
  • When you call the EF Core command Add to create a new entry, many things happen:
    • EF Core finds all the links from the added class to other classes. For each linked class it works out if it needs to create a new row in the database, or just link to an existing row in the database.
    • It also fills in any foreign key, either with the actual key of an existing row, or a pseudo-key for links to new classes.
  • EF Core can detect when you change a property in a class you read in from the database. It does this by holding a hidden copy of the class(es) read in. When you call SaveChanges it compares each class read in with its original values and only creates commands to change the specific class/property that was changed.
  • EF Core’s Remove method will delete the row in the database pointed to by the primary key value in the class you provide as a parameter. If the deleted class has relationships then the database can sort out what to do, but you can change the delete rules.

Setting the scene – the basics of EF Core writing to the database

TIP: If you already know EF Core then you can skip this section – it’s just an example of how you write to a database.

For the examples in my book I have created a small web application that sells books – think super-simple Amazon. In this introduction I am going to describe the database structure and then give you a simple example of writing to that database.

a. The classes/tables I’m going to work with

My Book App as I call it starts out in chapter 2 with the five tables shown in the figure below. I chose this because a) it’s easy to understand because of Amazon etc. and b) it has one of each of the basic relationships that can exist between tables.

These tables are mapped to classes with similar names, e.g. Book, BookAuthor, Author, with properties with the same name as the columns shown in the tables. I’m not going to show the classes because of space, but you can see these classes here in my GitHub repo.

b. A look at what you need to access this database via EF Core

For EF Core to write to the database you need 5 parts:

  1. A database server, such as SQL Server, Sqlite, PostgreSQL…
  2. A class, or classes, to map to your database – I refer to these as entity classes.
  3. A class which inherits EF Core’s DbContext class, which contains the setup/configuration of EF Core
  4. A way to create a database
  5. Finally, the commands to write to the database.

The unit test code below comes from the EfCoreinAction-SecondEdition GitHub repo and shows a simple example of writing out a set of books, with their authors, reviews etc. to a database.

[Fact]
public void TestWriteTestDataSqliteInMemoryOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();
    using (var context = new EfCoreContext(options))
    {
        context.Database.EnsureCreated();

        //ATTEMPT
        var book = new Book
        {   
            Title = "Test",  
            Reviews = new List<Review>() 
        };       
        book.Reviews.Add(new Review { NumStars = 5 }); 
        context.Add(book);   
        context.SaveChanges();  

        //VERIFY
        var bookWithReview = context.Books
            .Include(x => x.Reviews).Single();
        bookWithReview.Reviews.Count.ShouldEqual(1);
    }
}

Now, if we link the unit test code to the list of 5 parts, it goes like this:

  1. A database server – Line 5: I have chosen a Sqlite database server, and in this case the SqliteInMemory.CreateOptions method, which comes from my EfCore.TestSupport NuGet package, sets up a new, in-memory database (in-memory databases are great for unit testing as you can set up a new, empty database just for this test – see chapter 17 of my book for more).
  2. A class, or classes – not shown, but there is a Book entity class, with a relationship to a Review entity class.
  3. A class that inherits DbContext – Line 6: the EfCoreContext class inherits the DbContext class and configures the links from the classes to the database (you can see this class here in my GitHub repo).
  4. A way to create a database – Line 8: because it’s a new database I use this command to create the correct SQL tables, keys, indexes etc. The EnsureCreated method is used for unit tests, but for real applications you most likely will use EF Core migrations.
  5. Commands to write to the database – Lines 17 and 18
    1. Line 17: the Add method tells EF Core that a new book with its relationships (in this case, just a Review) needs to be written to the database.
    2. Line 18: the SaveChanges method creates new rows in the Books and Review tables in the database.

The last few lines after the //VERIFY comment are some simple checks that the book and its review have been written to the database.

In this example you added new entries (SQL command INSERT INTO) to the database, but EF Core will also handle updates and deletes to the database. The next section covers this create example and then moves on to other examples of Create, Update and Delete.

What happens when EF Core writes to the SQL database?

I’m going to start with creating a new Book entity class, with one, new Review entity class. I chose this as the simplest write which has a relationship. You saw this in the unit test above, but if you skipped that, here is the important part again:

var book = new Book              
{                                
    Title = "Test",              
    Reviews = new List<Review>() 
};                               
book.Reviews.Add(new Review { NumStars = 1 });
context.Add(book);               
context.SaveChanges();           

To add these two linked entities to the database EF Core has to

  1. Work out what order it should create these new rows – in this case it has to create the row in the Books table first so that it has the primary key of the Book.
  2. Copy any primary keys into the foreign key of any relationships – in this case it copies the Books row’s primary key, BookId, into the foreign key in the new Review row.
  3. Copy back any new data created in the database so that the entity classes properly represent the database – in this case it must copy back the BookId, updating the BookId property in both the Book and Review entity classes, and the ReviewId for the Review entity class.
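The copy-back in step 3 is easy to see in code. Here is a minimal sketch, reusing the Book and Review classes from the unit test above:

```csharp
context.Add(book);
context.SaveChanges();

// After SaveChanges the database-generated keys have been copied
// back, so the entity classes now match the database
var bookKey = book.BookId;                   // no longer 0
var reviewFk = book.Reviews.First().BookId;  // same value as bookKey
```

This is why you can read the primary key of a newly created entity straight after the SaveChanges call, without another query.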

So, let’s see the SQL from this create, as shown in the following listing

-- first database access
SET NOCOUNT ON; 
-- This inserts a new row into the Books table. 
-- The database generates the Book’s primary key                            
INSERT INTO [Books] ([Description], [Title], ...)
VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6);     

-- This returns the primary key, with checks to ensure the new row was added
SELECT [BookId] FROM [Books]                        
WHERE @@ROWCOUNT = 1 AND [BookId] = scope_identity();

-- second database access
SET NOCOUNT ON;
-- This inserts a new row into the Review table. 
-- The database generates the Review’s primary key
INSERT INTO [Review] ([BookId], [Comment], ...)
VALUES (@p7, @p8, @p9, @p10);

-- This returns the primary key, with checks to ensure the new row was added
SELECT [ReviewId] FROM [Review]
WHERE @@ROWCOUNT = 1 AND [ReviewId] = scope_identity();

The important point is that EF Core handles the problem of writing out the entity classes in the correct order so that it can fill in any foreign keys. This is a simple example, but for a client I had to build a very complex piece of data consisting of about 15 different entity classes – some new entity classes were added, some were updated and some removed – but with one call to SaveChanges EF Core worked out what to do, in the right order, to update the database. So, EF Core makes writing complex data to a database easy for the developer.

I mention this because I have seen EF Core code where the developer used multiple calls of the SaveChanges method to obtain the primary key from the first create to set the foreign key for the related entity. For instance:

var book = new Book              
{                                
    Title = "Test"
}; 
context.Add(book);               
context.SaveChanges();           
var review = new Review { BookId = book.BookId, NumStars = 1 };
context.Add(review);               
context.SaveChanges();           

That would have the same effect as the previous code, but it has a weakness – if the second SaveChanges fails, then you have a partially updated database, which might not be a problem in this case, but in my client’s case (which was a security system) that could be very bad indeed!

So, the take-away from this is – you don’t need to copy primary keys into foreign keys, because you can set up the navigational properties and EF Core will sort out the foreign keys for you. So, if you think you need to call SaveChanges twice, then it normally means you haven’t set up the right navigational properties to handle that case.
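If you really do need two SaveChanges calls (say, because some business logic must run between the two writes), you can make them atomic with an explicit transaction. A minimal sketch, reusing the Book and Review classes from the examples above:

```csharp
using (var transaction = context.Database.BeginTransaction())
{
    var book = new Book { Title = "Test" };
    context.Add(book);
    context.SaveChanges();

    var review = new Review { BookId = book.BookId, NumStars = 1 };
    context.Add(review);
    context.SaveChanges();

    // Both writes are committed together; if either SaveChanges
    // throws, the transaction is rolled back when it is disposed
    transaction.Commit();
}
```

This avoids the partially updated database problem, but setting up the navigational properties and calling SaveChanges once is still the simpler option.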

What happens in the DbContext when EF Core writes to the database?

In the last section you saw what EF Core does at the database end, but now you are going to look at what happens inside EF Core. Most of the time you don’t need to know this, but there are times when knowing this is very important – for instance, if you are capturing changes during a call to SaveChanges, then an entity’s State is only available before SaveChanges is called, but the primary key of a newly created entity is only available after the call to SaveChanges.

The example is a little more complex than the last one. In this example I want to show you the different way EF Core handles a new instance of an entity class compared with an instance of an entity that has been read from the database. The code in the listing below creates a new Book, but with an Author that is already in the database. The code has comments saying Stage 1, Stage 2 and Stage 3, and I then describe what happens after each stage using diagrams.

//STAGE1                                             
var author = context.Authors.First();                
var bookAuthor = new BookAuthor { Author = author }; 
var book = new Book                                  
{                                                    
    Title = "Test Book",                             
    AuthorsLink = new List<BookAuthor> { bookAuthor }
};                                                   

//STAGE2
context.Add(book);                                   

//STAGE3
context.SaveChanges();                               

The next three figures show you what is happening inside the entity classes and their tracked data at each stage. Each figure shows the following data at the end of its stage.

  • The State of each entity instance at each stage of the process (shown above each entity class).
  • The Book and BookAuthor classes are brown to show that these are new instances of the classes and need to be added to the database, while the Author entity class is blue to represent that instance was read from the database.
  • The primary and foreign keys with the current value in brackets. If a key is (0), then it hasn’t been set yet.
  • The navigational links are shown as connections from the navigational property to the appropriate entity class that it is linked to.
  • Changes between each stage are shown by bold text or thicker lines for the navigational links.

The following figure shows the situation after Stage 1 has finished. This is your initial code that sets up a new Book entity class (left) with a new BookAuthor entity class (middle) which links the Book to an existing Author entity class (right).

The figure above shows the condition of the three entity classes after Stage 1 has finished, that is, your code has set up the entity classes to represent the new book you want to add to the database. This is the starting point before you call any EF Core methods.

The next figure shows the situation after the line context.Add(book) is executed. The changes are shown in bold, and with thick lines for the added navigational links.

You may be surprised how much happened when the Add method was executed (I was!). What it seems to be doing is getting the entities as close as possible to the position that they will be in after SaveChanges is called. Here are the things that happen when the Add method is called.

It sets the State of the entity provided as a parameter to Added – in this example that is the Book entity. Then it looks at all entities linked to the entity provided as a parameter, either by navigational properties or by foreign key values. For each linked entity it does the following (NOTE: I don’t know the exact order they are done in).

  • If the entity is not tracked, that is, its current State is Detached, it sets its State to Added. In this example, that is the BookAuthor entity – the Author’s State isn’t updated because that entity is already tracked.
  • It fills in any foreign keys with the correct primary keys. If the linked primary key isn’t yet available, it puts a unique negative number in the CurrentValue property of the tracking data for both the primary key and the foreign key. You can see that in the figure above.
  • It fills in any navigational properties that aren’t currently set up – shown as thick blue lines in the figure above.
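You can see these State changes (and the temporary key values) yourself via the DbContext’s change tracker. A minimal sketch, using the same Stage 1 and Stage 2 code as above:

```csharp
context.Add(book);

// After Add, the new entities are tracked with the State Added,
// while the Author read from the database stays Unchanged
var bookState = context.Entry(book).State;      // EntityState.Added
var authorState = context.Entry(author).State;  // EntityState.Unchanged

// The primary key has been given a temporary value that will be
// replaced by the real, database-generated key during SaveChanges
var keyProperty = context.Entry(book).Property(nameof(Book.BookId));
var isTemporary = keyProperty.IsTemporary;      // true
```

Inspecting entities this way is very useful when debugging why a SaveChanges call didn’t do what you expected.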

NOTE: The call of the Add method can take some time! In this example, the only entities to link to are set up by your code, but Add’s relational fixup stage can link to any tracked entity, and if you have lots of tracked entity classes in the current DbContext it can take a long time. There are ways you can control this, which I cover in chapter 14, EF Core performance tuning, in my book “Entity Framework Core in Action” second edition.

The final stage, stage 3, is what happens when the SaveChanges method is called, as shown in the next figure.

You saw in the “what happens in the SQL database” section that any columns that were set/changed by the database are copied back into the entity class so that the entity matches the database. In this example the Book’s BookId and the BookAuthor’s BookId were updated to have the key value created in the database. Also, now that all the entities involved in this database write match the database, their States are set to Unchanged.

Now, that may have seemed a long explanation of something that “just works”, and many times you don’t need to know it. But when something doesn’t work correctly, or you want to do something complex like logging entity class changes, then knowing this is very useful.

What happens when EF Core updates the database?

The last example was about adding new entities to the database, but there was no update going on. So, in this section I am going to show what happens when you update something that is already in the database. The update relies on the normal query that I covered in the first article “what happens when EF Core reads from the database?”

The update is a simple one, just three lines, but it shows the three stages in the code: read, update, save.

var books = context.Books.ToList();
books.First().PublishedOn = new DateTime(2020, 1, 1);
context.SaveChanges();  

The following figure shows the three stages.

As you can see, the type of query you use matters – the normal query loads the data and keeps a “tracking snapshot” of the data returned to the calling code. The returned entity classes are said to be “tracked”. If an entity class isn’t tracked, then you can’t update it.

NOTE: The Author entity class in the last section was also “tracked”. In that example the tracked state of the Author told EF Core that the Author was already in the database so that it wasn’t created again.

So, if you change any properties in the loaded, tracked entity classes, then when you call SaveChanges it compares ALL the tracked entity classes against their tracking snapshots. For each class it goes through all the properties that are mapped to the database, and any properties that link to other entity classes (known as navigational properties). This process, known as change tracking, will pick up every change in the tracked entities, both for non-relational properties like Title and PublishedOn, and for navigational links, which are converted to changes to the foreign keys that link tables together.

In this simple example there are only four books, but in real applications you might have loaded lots of entity classes, all linked to each other. At that point the comparison stage can take a while. Therefore, you should try to load only the entity classes that you need to change.
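You can watch this comparison happen by asking the change tracker which properties it considers modified. A minimal sketch, assuming the same Books query as above:

```csharp
var book = context.Books.First();
book.PublishedOn = new DateTime(2020, 1, 1);

// DetectChanges is normally called for you inside SaveChanges;
// calling it here lets us inspect the result before saving
context.ChangeTracker.DetectChanges();

var entry = context.Entry(book);
var state = entry.State;            // EntityState.Modified
var changedColumns = entry.Properties
    .Where(p => p.IsModified)
    .Select(p => p.Metadata.Name)
    .ToList();                      // only "PublishedOn"
```

Because only PublishedOn is marked as modified, the SQL UPDATE that SaveChanges produces only sets that one column.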

NOTE: There is an EF Core command called Update, but that is used in specific cases where you want every property/column updated. EF Core’s change tracking is the default approach, as it only updates the properties/columns that have changed.

Each update will create a SQL UPDATE command, and all these UPDATEs will be applied within a SQL transaction. Using a SQL transaction means that all the updates (and any other changes EF Core found) are applied as a group, and if any one part fails then none of the database changes in the transaction are applied. That isn’t important in our simple example, but once you start changing relationships between tables then it’s important that they all work, or all fail.

What happens when EF Core deletes data in the database?

The last part of the CRUD is delete, which in some ways is simple – you just call context.Remove(myClass) – and in other ways it’s complex, e.g. what happens when you delete an entity class that another entity class relies on? I’m going to give you a quick answer to the first part, and a much longer answer to the second part.

The way to delete an entity class mapped to the database is to use the Remove method. Here is an example where I load a specific book and then delete it.

var book = context.Books
    .Single(p => p.Title == "Quantum Networking");
context.Remove(book); 
context.SaveChanges();

The stages are:

  1. Load the book entity class that you want to delete. This gets all its data, but for a delete you only really need the entity class’s primary key.
  2. The call to the Remove method sets the State of the book to Deleted. This information is stored in the tracking snapshot for this book.
  3. Finally, SaveChanges creates a SQL DELETE command that is sent to the database, along with any other database changes, and in a SQL transaction (see the update description for more on transactions).

That looks straightforward, but there is something going on here that is important, but not obvious from that code. It turns out that the book with the title of “Quantum Networking” has some other entity classes (database: rows) linked to it – in this specific test case it has links to the following entity classes:

  • Two Reviews
  • One PriceOffer
  • One BookAuthor linking to its Author.

Now, the Reviews, PriceOffer and BookAuthor (but not the Author) entity classes are only relevant to this book – the term we use is that they are dependent on the Book entity class. So, if the Book is deleted, then these Reviews, PriceOffer, and any BookAuthor linking rows should be deleted too. If you didn’t delete them, then the database’s links would be incorrect, and a SQL database would throw an exception. So, why did this delete work?

What happens here is the database relationships between the Books table and the three dependent tables have been set up with a Delete rule called cascade delete. Here is an example of the SQL commands EF Core would produce to create the Review table.

CREATE TABLE [Review] (
    [ReviewId] int NOT NULL IDENTITY,
    [VoterName] nvarchar(max) NULL,
    [NumStars] int NOT NULL,
    [Comment] nvarchar(max) NULL,
    [BookId] int NOT NULL,
    CONSTRAINT [PK_Review] PRIMARY KEY ([ReviewId]),
    CONSTRAINT [FK_Review_Books_BookId] FOREIGN KEY ([BookId]) 
         REFERENCES [Books] ([BookId]) ON DELETE CASCADE
);

The highlighted part is the constraint (think of it as a rule) that says the review is linked to a row in the Books table via its BookId column. On the end of that constraint you will see the words ON DELETE CASCADE. That tells the database that if the book it is linked to is deleted, then this review should be deleted too. That means the delete of the Book is allowed, as all the dependent rows are deleted too.

That is very helpful, but maybe you want to change the delete rules for some reason? In chapter four of my book I add some business logic to allow a user to buy a book. I decided I didn’t want to allow a book to be deleted if it existed in a customer’s order. To do this I added some EF Core configuration inside the DbContext to change the delete behaviour to Restrict – see below.

public class EfCoreContext : DbContext
{
    private readonly Guid _userId; //… set via a constructor parameter, removed for space

    public EfCoreContext(DbContextOptions<EfCoreContext> options)
        : base(options) {}

    public DbSet<Book> Books { get; set; }
    //… other DbSet<T>s removed

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //… other configurations removed

        modelBuilder.Entity<LineItem>()
            .HasOne(p => p.ChosenBook)
            .WithMany()
            .OnDelete(DeleteBehavior.Restrict);
    }
}

Once this change to the configuration is migrated to the database, the SQL ON DELETE CASCADE setting would be removed – the SQL constraint changes to a setting of NO ACTION. This means if you try to delete a book that is in a customer’s Order (which uses a LineItem table to hold each item in an order), then the database will return an error, which EF Core turns into an exception.
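With the Restrict behaviour in place, a delete that breaks the constraint surfaces as an exception from SaveChanges. A minimal sketch, assuming the book is referenced by a customer’s LineItem:

```csharp
var book = context.Books
    .Single(p => p.Title == "Quantum Networking");
context.Remove(book);
try
{
    context.SaveChanges();
}
catch (DbUpdateException e)
{
    // The database rejected the DELETE because a LineItem row still
    // references this book; e.InnerException holds the provider-specific
    // database error
}
```

Your business logic can catch this exception and, for instance, tell the user why the book can’t be deleted.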

This gives you a good idea of what is going on, but there is quite a bit more that I haven’t covered (but I do cover in my book). Here are some things about Delete that I haven’t covered.

  • You can have required relationships (dependent) and optional relationships and EF Core uses different rules for each type.
  • EF Core contains DeleteBehaviors that brings some of the work that the database would do into EF Core. This is useful to avoid problems when your entity classes’ relationships are circular – some databases throw an error if they find a circular delete loop.
  • You can delete an entity class by providing the Remove method with a new, empty class with just the primary key set. That can be useful when working with a UI/WebAPI that only returns the primary key.
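The last bullet – deleting via a stub class – looks like this. A minimal sketch, assuming the primary key value (here called bookId) came back from the UI/WebAPI:

```csharp
// No database read needed – Remove only uses the primary key,
// so a stub instance with just the key set is enough
var bookToDelete = new Book { BookId = bookId };
context.Remove(bookToDelete);
context.SaveChanges();
```

This saves a database round trip compared with loading the book first, at the cost of not being able to check the book’s data before deleting it.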

Conclusion

So, I have covered the Create, Update and Delete part of the CRUD, with the previous article handling the Read part.

As you have seen, the creation of new data in the database using EF Core is easy to use, but complex inside. You don’t (usually) need to know what happens inside EF Core or the database, but having some idea is going to help you take full advantage of EF Core’s cleverness.

Updates are also easy – just change a property or properties in the entity classes you read in, and when you call SaveChanges EF Core will find the changed data and build database commands to update the database to match. This works for non-relational properties, like the book Title, and for navigational properties, where you could change a relationship.

Finally, we looked at a delete. Again, easy to use, but a lot can be happening underneath. Also, look out for the next article where I talk about what is called “Soft Delete”. This is where you set a flag and EF Core won’t see that entity class anymore – it’s still in the database, but it’s hidden. Many of my clients have used this, as it helps if their users inadvertently delete something because they can un-(soft) delete it.

I hope you found this useful and look out for more articles in this series.

Happy coding.

EF Core In depth – what happens when EF Core reads from the database?

Last Updated: June 1, 2020 | Created: May 31, 2020

This article gives an “under the hood” view of what happens when EF Core reads in data from a database. I look at two types of database read: a normal query and a query that contains the AsNoTracking method in it. I also show how a bit of experimenting on my part solved a performance problem that one of my clients had.

I do assume you know EF Core, but I start with a look at using EF Core to make sure we have the basics covered before I dive into the depths of EF Core. But this is a “deep dive” so be ready for lots of technical detail, hopefully described in a way you can understand.

This article is part of a “EF Core In depth” series. Here is the current list of articles in this series:

  • EF Core In depth – what happens when EF Core reads from the database? (this article)
  • EF Core In depth – what happens when EF Core writes to the database? (coming soon)

This “EF Core In depth” series is inspired by what I found while updating my book “Entity Framework Core in Action” to cover EF Core 5. I have also added a LOT of new content from my experiences of working with EF Core on client applications over the last 2½ years.

NOTE: There is a companion GitHub repo at https://github.com/JonPSmith/EfCoreinAction-SecondEdition. This has a simple e-commerce site called Book App that you can run. Also, there are unit tests that go with the content in this article – look for unit test classes whose names start with “Ch01_”, “Ch02_” etc.

TL;DR – summary

  • EF Core has two ways to read data from the database (known as a query): a normal LINQ query and a LINQ query that contains the method AsNoTracking.
  • Both types of query return classes (referred to as entity classes) with links to any other entity classes (known as navigational properties) loaded at the same time. But how and what they are linked to is different between the two types of queries.
  • The normal query also takes a copy, inside the application’s DbContext, of any data it reads in – the entity classes are said to be tracked. This allows the loaded entity classes to take part in commands to update the database.
  • This normal query also has some sophisticated code called relational fixup which fills in any links between the entity classes read in, and any other tracked entities.
  • The AsNoTracking query doesn’t take a copy, so it isn’t tracked – this means it’s faster than a normal query. It also means the loaded entities won’t be considered for database writes.
  • Finally, I show a little-known feature of EF Core’s normal query as an example of how clever it is in linking up relationships via navigational properties.

Setting the scene – the basics of EF Core reading from the database

TIP: If you already know EF Core then you can skip this section – it’s just an example of how you read a database.

In my book I have created a small web application that sells books – think super-simple Amazon. In this introduction I am going to describe the database structure and then give you a simple example of reading from that database.

a. The classes/tables I’m going to work with

My Book App as I call it starts out in chapter 2 with the following five tables shown in the figure below. I chose this because a) the data/concepts are easy to understand because of sites like Amazon etc. and b) it has one of each of the basic relationships that can exist between tables.

These tables are mapped to classes with similar names, e.g. Book, BookAuthor, Author, with properties with the same name as the columns shown in the tables. I’m not going to show the classes because of space, but you can see these classes here in my GitHub repo.

b. A look at what you need to read this database via EF Core

For EF Core to read from the database you need 5 parts:

  1. A database server, such as SQL Server, Sqlite, PostgreSQL…
  2. An existing database with data in it.
  3. A class, or classes, to map to your database – I refer to these as entity classes.
  4. A class which inherits EF Core’s DbContext class, which contains the setup/configuration of EF Core
  5. Finally, the commands to read from the database.

The unit test code below comes from the EfCoreinAction-SecondEdition GitHub repo and shows a simple example of reading in a set of four Books, with their BookAuthor and Authors entity classes from an existing database.

The example database contains four books, where the first two books have the same author, Martin Fowler.

[Fact]
public void TestBookCountAuthorsOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();
    //code to set up the database with four books, two with the same Author
    using (var context = new EfCoreContext(options))
    {
        //ATTEMPT
        var books = context.Books
            .Include(r => r.AuthorsLink)
                .ThenInclude(r => r.Author)
            .ToList();

        //VERIFY
        books.Count.ShouldEqual(4);
        books.SelectMany(x => x.AuthorsLink.Select(y => y.Author))
            .Distinct().Count().ShouldEqual(3);
    }
}

Now, if we link the unit test code to the list of 5 parts, it goes like this:

  1. A database server – Line 5: I have chosen a Sqlite database server, and in this case the SqliteInMemory.CreateOptions method, which comes from my EfCore.TestSupport NuGet package, sets up a new, in-memory database (in-memory databases are great for unit testing as you can set up a new, empty database just for this test – see chapter 17 of my book for more).
  2. An existing database with data – Line 6: I deal with writing to the database in the next article, for now just assume there is a database with four books, two of which have the same author.
  3. A class, or classes – not shown, but there is a Book entity class, with a relationship to an Author entity class, via a many-to-many linking entity class called BookAuthor (you can find these classes in my GitHub repo).
  4. A class that inherits DbContext – Line 7: the EfCoreContext class inherits the DbContext class and configures the links from the classes to the database (you can see this class here in my GitHub repo).
  5. Commands to read from the database – Lines 10 to 13 – this is a query:
    1. Line 10 – the EfCoreContext instance called context gives you access to the database, and adding Books says you want to access the Books table.
    2. Line 11 – the Include is known as eager loading and tells EF Core that when it loads a Book it should also load all the BookAuthor entity classes that are linked to that book.
    3. Line 12 – the ThenInclude is part of the eager loading and tells EF Core that when it loads a BookAuthor it should also load the Author entity class that is linked to that BookAuthor.

The result of all of this is a set of books, with normal properties, like the Title of the Book, filled in, and the navigational properties that link the entity classes, like the AuthorsLink property in the Book, filled in with a reference to the correct instance of the entity class it links to. And the last few lines after the //VERIFY comment are some simple checks that the four books have, between them, three distinct authors.

This example is known as a query, which is one of the four types of database accesses, known as CRUD (Create, Read, Update, and Delete). I cover the Create and Update in the next article.

How EF Core represents data when reading from the database

When you query a database EF Core goes through various steps to convert the data returned by the database into entity classes with navigational links filled in. In this section we will look at those steps for two types of queries – a normal query (i.e. without AsNoTracking, also known as a read-write query) and a query with the AsNoTracking method added (known as a read-only query).

But first we look at the initial part which takes your LINQ command, converts it to the relevant commands for the database type you are using, and gets the data back. This is common to the two types of query we are going to look at. See the following figure for this first part.

There is some very complex code that converts your LINQ into database commands, but there really isn’t a lot to say other than if your LINQ can’t be translated you will get an exception from EF Core with a message that contains ‘could not be translated’. Also, when the data is coming back features like Value Converters may adapt the data.

NOTE: In chapter 6 of my book I cover some of the more complex LINQ commands and what you should do to help EF Core to translate to database commands.

This section has shown the first part of the query, where your LINQ is turned into database commands and the correct values are returned. Now we look at the second part of the query, where EF Core takes the returned values, turns them into instances of the entity classes and fills in any navigational properties. There are two types of queries which we will look at.

  1. A normal query (read-write query)
  2. An AsNoTracking query, which has the AsNoTracking method added (read-only query).

1. Normal query – a read-write query

A normal query reads in the data in such a way that the data can be edited, which is why I refer to it as a read-write query. It doesn’t automatically update the data (see the next article for how to write to the database), but unless your query is read-write you won’t be able to update the data you have just read in.

The example I gave you in the introduction does a normal query that reads in the four example books with the links to their authors. Here is the query code part of that example

var books = context.Books
    .Include(r => r.AuthorsLink)
        .ThenInclude(r => r.Author)
    .ToList();

Then EF Core goes through three steps to convert those values back into entity classes with navigational properties filled in. The figure below shows the three steps and the resulting entity classes with their navigational links.

Let’s look at the three steps:

  1. Create classes and fill in data. This takes the values that came back from the database and fills in the non-navigational (known as scalar) properties, fields, etc. In the Book entity class this would be properties like BookId (the Book’s primary key), Title, etc. – see bottom left, light blue rectangles.
          There can be a lot of other issues around here, such as how EF Core uses constructors, backing fields, shadow properties, adapting data, client-side calculations, to name but a few. Chapters 2, 6 and 7 cover these issues.
  2. Relational fixup. The first step will have filled in the primary keys and foreign keys, which define how the data is connected to each other. EF Core then uses these keys to set up the navigational properties between the entity classes (shown as thick blue lines in the figure).
            This Relational fixup’s linking feature goes beyond the entity classes just read in by the query, it looks at every tracked entity in the DbContext and fills in any navigational properties. This is a powerful feature, but if you have lots of tracked entities then it can take some time – that’s why the AsNoTracking query exists, to be quicker.
  3. Creating a tracking snapshot. The tracking snapshot is a copy of the entity classes that are passed back to the user, plus other things such as a link to each entity class that it shadows. An entity with a tracking snapshot is said to be tracked, which means it can be used in database writes.
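EF Core’s real change tracking is far more sophisticated, but the idea behind the tracking snapshot can be sketched with a toy example. All the names below are illustrative, not EF Core’s internals: on “read” we keep a hidden copy of each entity, and a DetectChanges-style step diffs each tracked entity against its snapshot.

```csharp
using System;
using System.Collections.Generic;

// A toy illustration of the tracking-snapshot idea (NOT EF Core's real code)
public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
}

public class ToyTracker
{
    // Maps each tracked entity to a hidden copy of its values at read time
    private readonly Dictionary<Book, Book> _snapshots = new();

    public Book Track(Book fromDatabase)
    {
        _snapshots[fromDatabase] = new Book
            { BookId = fromDatabase.BookId, Title = fromDatabase.Title };
        return fromDatabase;
    }

    // Returns the names of the properties that differ from the snapshot
    public List<string> DetectChanges(Book entity)
    {
        var changed = new List<string>();
        var snapshot = _snapshots[entity];
        if (entity.Title != snapshot.Title) changed.Add(nameof(Book.Title));
        return changed;
    }
}
```

When SaveChanges runs, EF Core does the equivalent of this diff across every tracked entity, and only builds database commands for the properties that actually changed.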

2. AsNoTracking query – read-only query

An AsNoTracking query is a read-only query. That means anything you read in won’t be looked at when the SaveChanges method is called. The AsNoTracking option is there because it makes the query perform better. I cover this and other differences from a normal query in the next section.

Following the example in the introduction, I alter the query code to add the AsNoTracking method (see the second line of the code below).

var books = context.Books
    .AsNoTracking()
    .Include(r => r.AuthorsLink)
        .ThenInclude(r => r.Author)
    .ToList();

The LINQ query goes through two of the three steps shown in the normal query figure above. Step 3, the tracking snapshot, is left out, and the relational fixup step is slightly different. The following figure shows the steps for an AsNoTracking query.

Let’s look at the three steps:

  1. Create classes and fill in data. (same as normal query) This takes the values that came back from the database and fills in the non-navigational (known as scalar) properties, fields, etc. In the Book entity class this would be properties like BookId (the Book’s primary key), Title, etc. – see bottom left, light blue rectangles.
  2. Relational fixup. (different from normal query) The first step will have filled in the primary keys and foreign keys, which define how the data is connected to each other. EF Core can use those keys to fill in the navigational properties (shown as thick blue lines) between the entity classes, but it does NOT look outside the query at tracked entities.
  3. Creating a tracking snapshot. (NOT RUN)

3. Differences between normal and AsNoTracking queries

Now let’s compare the two query types and highlight the differences.

  1. The AsNoTracking query performs better. The main reason for having the AsNoTracking feature is performance. The AsNoTracking query is:
    1. Slightly faster and uses slightly less memory, because it doesn’t have to create the tracking snapshot.
    2. Better for the performance of SaveChanges, because without a tracking snapshot of the queried data SaveChanges doesn’t have to inspect those entities for changes.
    3. Slightly faster, because the relational fixup doesn’t do what is called identity resolution. This is why you can get two author instances with the same data in them.
  2. The AsNoTracking query relational fixup only links entities in the query. In the normal query I already said that the relational fixup links both to entities in the query AND to entities that are currently tracked. But the AsNoTracking query only fills in the navigational properties between entities in the query.
  3. The AsNoTracking query doesn’t always represent the database relationships. Another difference in relational fixup between the two types of queries is that the AsNoTracking query uses a quicker fixup without identity resolution. This can produce multiple instances for the same row in the database – see the blue Author entities and comment in the bottom right of the previous figure. That difference doesn’t matter if you are just showing the data to a user, but if you have business logic then the multiple instances don’t correctly reflect the structure of the data and could cause problems.
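The effect of identity resolution can be sketched in plain C#. This is only a simulation of the idea, with my own illustrative names, not EF Core’s code: without an identity map every returned row becomes a new instance, while with one each primary key maps to exactly one instance.

```csharp
using System;
using System.Collections.Generic;

public class Author
{
    public int AuthorId { get; set; }
    public string Name { get; set; }
}

public static class Materializer
{
    // Without identity resolution: every returned row becomes a new instance,
    // so the same author appearing twice gives two separate objects
    public static List<Author> NoIdentityResolution(
        IEnumerable<(int Id, string Name)> rows)
    {
        var result = new List<Author>();
        foreach (var row in rows)
            result.Add(new Author { AuthorId = row.Id, Name = row.Name });
        return result;
    }

    // With identity resolution: a map keyed on the primary key ensures each
    // database row becomes exactly one instance, however often it appears
    public static List<Author> WithIdentityResolution(
        IEnumerable<(int Id, string Name)> rows)
    {
        var identityMap = new Dictionary<int, Author>();
        var result = new List<Author>();
        foreach (var row in rows)
        {
            if (!identityMap.TryGetValue(row.Id, out var author))
            {
                author = new Author { AuthorId = row.Id, Name = row.Name };
                identityMap[row.Id] = author;
            }
            result.Add(author);
        }
        return result;
    }
}
```

As a side note, EF Core 5 adds an AsNoTrackingWithIdentityResolution method, which gives you a no-tracking query that still applies identity resolution.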

Useful relational fixup feature with hierarchical data

The relational fixup step is quite clever, especially in a normal query. It allows all sorts of useful tricks, and I want to show you how I used relational fixup to solve a performance problem I had in a client’s project.

I worked for a company where a lot of their data was hierarchical, that is, data that has a series of linked entity classes with an indeterminate depth. The problem was I had to parse the whole hierarchy before I could display it. I initially did this by eager loading the first two levels and then using explicit loading for the deeper levels. It worked, but the performance was very slow, and the database was overloaded with lots of single database accesses.

This got me thinking: if the normal query’s relational fixup is so clever, could it help me improve the performance of the query? – and it could! Let me give you an example using employees in a company. The figure below shows a possible hierarchical structure of a company we want to load.

NOTE: You can see the Employee class here, but the basic idea is that it has a Manager navigational property (single), which links to the employee’s boss (or null if they are the top person), and a WorksForMe navigational property (collection), which holds all the employees that work for this employee (can be none). It also has employee information such as their Name and what department(s) they work for.

You could use .Include(x => x.WorksForMe).ThenInclude(x => x.WorksForMe)… and so on, but it turns out that a single .Include(x => x.WorksForMe) is enough, as the relational fixup can work out the rest! That is surprising, but very useful.

For instance, if I wanted to select all the people that work in Development (each Employee has a property called WhatTheyDo, of type Roles, which holds the department(s) they work in) I could write this code:

var devDept = context.Employees
    .Include(x => x.WorksForMe)
    .Where(x => x.WhatTheyDo.HasFlag(Roles.Development))
    .ToList();

This creates one query that loads all the employees who work in Development, and the relational fixup fills in the WorksForMe navigational property (collection) and the Manager navigational property (single) on all the returned employees. This improves the time the query takes and reduces the load on the database server, because only one query is sent (compared with my original approach that used explicit loading).

NOTE: You do need to work out which relationship to Include. In this case I have a Manager navigational property (single) and a WorksForMe navigational property (collection). It turns out that including the WorksForMe property fills in both the WorksForMe collection and the Manager property. But including the Manager navigational property means that the WorksForMe collection is only created if there are entities to link to, otherwise it is null. I don’t know why – that’s why I write tests to check what actually works.
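The key-based linking that relational fixup does can be sketched in plain C#. This toy version (my own code, not EF Core’s) shows why a single Include is enough: once every loaded employee’s foreign key is matched against the loaded primary keys, both navigational properties get filled in, whatever the depth of the hierarchy.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Employee
{
    public int EmployeeId { get; set; }
    public string Name { get; set; }
    public int? ManagerId { get; set; }                  // foreign key (null = top person)
    public Employee Manager { get; set; }                // single navigation
    public List<Employee> WorksForMe { get; } = new();   // collection navigation
}

public static class RelationalFixup
{
    // Mimics what relational fixup does after .Include(x => x.WorksForMe):
    // use the primary/foreign keys of everything loaded to wire up BOTH
    // navigational properties, however deep the hierarchy goes
    public static void LinkByKeys(IReadOnlyList<Employee> loaded)
    {
        var byId = loaded.ToDictionary(e => e.EmployeeId);
        foreach (var employee in loaded)
        {
            if (employee.ManagerId != null &&
                byId.TryGetValue(employee.ManagerId.Value, out var boss))
            {
                employee.Manager = boss;
                boss.WorksForMe.Add(employee);
            }
        }
    }
}
```

Given a flat list of CEO → manager → junior rows, one pass of LinkByKeys recreates the whole tree, which is essentially what EF Core does for you after the single-Include query above.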

Conclusion

You have seen two types of queries, which I called a) a normal, read-write query, and b) an AsNoTracking, read-only query. For each query type I showed you what EF Core does “under the hood” and the structure of the data read in. And the differences in how they work shows their strengths and weaknesses.

The AsNoTracking query is the obvious solution for read-only queries, as it’s faster than the normal, read-write query. But you should keep in mind its limitations around relational fixup, which can create multiple instances of classes where the database only has one row.

The normal, read-write query is the solution for loading tracked entities, which means you can use them in Create, Update and Delete database changes. The normal, read-write query does use more time and memory, but it has some useful features, such as linking automatically to other tracked instances of entity classes.

I hope you have found this article useful. You can find a much longer and more detailed version of this in chapters 1 to 6 of my book Entity Framework Core in Action, second edition.

Happy coding.

Domain-Driven Design and Entity Framework Core – two years on

Last Updated: April 7, 2020 | Created: April 4, 2020

This article is about my experiences of applying a Domain-Driven Design (DDD) approach when working with Entity Framework Core (EF Core). I have now used DDD and my supporting libraries for two years on my own projects and client projects. Here are the bigger client projects where I used a DDD approach:

  • A six-month engagement where I architected the backend of a SaaS system using ASP.NET Core/EF Core. My design used a lot of DDD concepts and my DDD libraries.
  • A four-month project to design and build an adapter between two security systems – this had complex business logic.
  • A six-month engagement on an already started ASP.NET Core/EF Core application to take it to its first release. The project used a different DDD approach to the one I usually use, which taught me a lot.

This article looks at what I have learnt along the way.

TL;DR – summary

  • I really like DDD because I know exactly where the code for a given function is and, because DDD “locks down” the data, that code is the only implementation of that function. That sentence encapsulates the primary reasons I love DDD.
  • I have found that a DDD approach still works with projects where the specification isn’t nailed down, or changes as the project progresses. Mainly because DDD functions are easy to find, test, and refactor.
  • But using a DDD approach does require more code to be written. The code is ‘better’, but using a DDD approach can slow down development. That isn’t what I, or my clients, want.
  • To offset the extra code of DDD I have built two libraries – EfCore.GenericServices for calling DDD methods in the classes and EfCore.GenericBizRunner for running business logic.
  • I love my EfCore.GenericServices and use it for lots of situations. It makes working with DDD-styled classes really easy. Maybe the best library I have built so far.
  • I find that business logic ranges from the super-simple up to the super-complex. I have found that I use three different approaches depending on the type and complexity of the business logic.
  • I have found my EfCore.GenericBizRunner library is useful but a bit ‘heavy’, so I tend to only use it if the business logic is complicated.

Setting the scene – my approach to making DDD easier to use

I have used a DDD approach for many years, but it wasn’t until EF Core came out that I felt I could build properly DDD-styled classes (known as domain entities in DDD; I use the term entity classes in C#). Once I had that I was full on with DDD, but I found that came at a cost – it took me longer to write a DDD application than my previous approach using my EF6.x library called GenericServices.

As I wrote DDD code I looked for the repetitive code in using DDD. With my experience of writing the original GenericServices library I knew where to look, and I came up with a library called EfCore.GenericServices. This used EF Core and supported non-DDD classes in a similar way to the original GenericServices library, by using AutoMapper’s object-to-object mapping. But the extra bit was its ability to work with EF Core entity classes by providing an object-to-method-call mapping for DDD-styled classes (I also created an EfCore.GenericBizRunner library for handling business logic, but that works the same for non-DDD and DDD approaches).

The difference these libraries, especially EfCore.GenericServices, has made to my speed of development is massive. I would say I am back up to the speed of development I was with the non-DDD approach, but now my code is much easier to find, fix, and refactor.

NOTE: all my examples come from an ASP.NET Core application I built to go with my book, “Entity Framework Core in Action”. This is an example e-commerce site that “sells” books (think super-simple Amazon). You can see a running version of this application at http://efcoreinaction.com/ and the code can be found in this GitHub repo.

Why does DDD take longer to write, and how can my library help?

Let’s really look at the extra code that DDD needs to work. In the diagram below I have a trivial requirement to update the publication date of a book.

You can immediately see that the DDD code is longer, by about 9 lines. Now you might say 9 lines isn’t much, but in a real application you have hundreds, if not thousands, of different actions like this, and that builds up. Also, some of it is repetitious (and boring!), and I don’t like writing repetitious code.

My analysis showed that the process of calling a DDD method had a standard pattern:

  1. Input the data via a DTO/ViewModel
  2. Load the entity class via its primary key
  3. Call the DDD method
  4. Call SaveChanges

So, I isolated that pattern and built a library to make it easier. Let’s now compare the UpdateDate process again, but this time using my EfCore.GenericServices to help with the DDD side – see the diagram below.

Now the DDD code is shorter than the non-DDD code, and all the repetitive code has gone! You can see that the call in the ASP.NET Core action has changed, but it’s the same length. The only extra line not shown here is that you need to add ILinkToEntity<Book> to the DateDto class. ILinkToEntity<T> is an empty interface which tells EfCore.GenericServices which entity class the DTO is linked to.

Also, EfCore.GenericServices has code to handle a lot of edge cases, like what happens if the entity class isn’t found, or if the data in the DTO doesn’t pass some validation checks, etc. Because it’s a library it’s worth adding all these extra features, which removes other code you might otherwise have needed to write.

The pros and cons of putting EF Core code in your entity classes

However, there is an issue with EfCore.GenericServices that I needed to handle – it can load the main entity class, but some actions work on EF Core navigational properties (basically the links to other tables), so how do I handle that? (See the EF Core docs, which define the term navigation properties.)

As an example of accessing navigational properties I want to add a Review to a Book (think Amazon reviews). The DDD approach says that a Review is dependent on the Book, so any changes to the Reviews collection should be done via a method in the Book entity class (the term for the Book/Reviews relationship in DDD is Root and Aggregates). The question is: we have the Book loaded, but we don’t have the collection of Reviews loaded, so how do we handle this?

In the first version of EfCore.GenericServices I gave the responsibility for handling the Reviews to the entity class method. This required the method to have access to the application’s DbContext, and here is one simple example of a method to add a new Review to a Book.

public void AddReview(int numStars, string comment, string voterName,
    DbContext context)
{
    // Explicitly load the existing Reviews collection before adding to it
    context.Entry(this)
        .Collection(c => c.Reviews).Load();
    _reviews.Add(new Review(numStars, comment, voterName));
}

NOTE: I’m not explaining why I use a backing field, _reviews, in this example. I suggest you have a look at my article “Creating Domain-Driven Design entity classes with Entity Framework Core” for why I do that.

That works, but some people don’t like having the DbContext accessible inside an entity class. For instance, one of my clients’ projects used a “clean architecture” approach with DDD. That means the entity classes have no external references, so they know nothing about EF Core or its DbContext.

Early in 2020 I realised I could change the EfCore.GenericServices library to load related navigational properties by providing an IncludeThen attribute, which defines which navigational property(s) to load. The IncludeThen is added to the DTO, which has properties that match the method’s parameters (see this example in one of my articles). This means I can write code in the DDD method that doesn’t need access to the application’s DbContext, as shown below.

public void AddReviewWithInclude(int numStars, string comment, string voterName)
{
    if (_reviews == null)
        throw new InvalidOperationException(
             "The Reviews collection must be loaded");
    _reviews.Add(new Review(numStars, comment, voterName));
}

Now, you might think that I would use this approach all the time, but it turns out there are some advantages to giving the DbContext to the method, as it has more control. For instance, here is another version of the AddReview method which has better performance, especially if there are lots of reviews on a book.

public void AddReview(int numStars, string comment, string voterName,
    DbContext context = null)
{
    if (_reviews != null)
    {
        // The Reviews collection is already loaded, so just add to it
        _reviews.Add(new Review(numStars, comment, voterName));
    }
    else if (context == null)
    {
        throw new ArgumentNullException(nameof(context),
            "You must provide a context if the Reviews collection isn't valid.");
    }
    else if (context.Entry(this).IsKeySet)
    {
        // The Book is already in the database, so add the Review directly,
        // using the Book's primary key as the Review's foreign key
        context.Add(new Review(numStars, comment, voterName, BookId));
    }
    else
    {
        // The Book hasn't been written to the database yet
        throw new InvalidOperationException("Could not add a new review.");
    }
}

This code is longer, mainly because it handles the situation where reviews are already loaded and does some checks to make it more secure. But the main point is that it doesn’t need to load the existing reviews to add a new review – it just adds a single review. That is MUCH faster, especially if you have hundreds of reviews.

Also, it’s not possible to think of all the things you might do and build them into a library. Having the ability to access the application’s DbContext means I have a “get out of jail” card if I need to do something that EfCore.GenericServices doesn’t handle. Therefore, I’m glad that feature is there.

But over the last few years I have concluded that I should minimise the amount of database access code in the entity class methods. That’s because the entity class and its methods start to become a God Object, with way too much going on inside it. So, nowadays if I do need complex database work I do it outside the entity class, either as a service or as business logic.

To sum up, there are pros and cons to allowing the DbContext to be injected into the method call. Personally, I will be using the IncludeThen version because it means less coding, but if I find there is a performance issue or something unusual, then I have the ability to fix the problem by adding specific EF Core code inside the entity class method.

Business logic, from the simple to the complex

Back in 2016 I wrote an article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”, and in chapter 4 of my book “Entity Framework Core in Action” I described the same approach. Lots of people really liked the approach, but I fear it is overkill for some of the simpler business logic. This section gives a more nuanced description of what I do in real applications.

In the UK we have a saying, “don’t use a sledgehammer to crack a nut”, or as a software principle, KISS (Keep It Simple, Stupid). From experience working on medium-sized web apps I find there is a range of business rules:

  1. Validation checks, e.g. check a property is in a range, which can be done by validation attributes.
  2. Super-simple business rules, e.g. doing validation checks via code, for validations that can’t be done by validation attributes.
  3. Business logic that uses multiple entity classes, e.g. building a customer order for some books.
  4. Business logic that is a challenge to write, e.g. my pricing engine example.

Now I will describe what I do, especially on client work where time is money.

Business logic types 1 and 2 – different types of validation

My experience is that the first two can be handled by my EfCore.GenericServices library, because that library can:

  1. Validate the data in any entity class that is being created or updated (this is optional, as the validation is often done in the front-end).
  2. Call methods that return either void or IStatusGeneric. The IStatusGeneric interface allows the method to return a successful status, or a status with error messages.

The code below shows an example of doing a test and returning a status. This example is taken from the https://github.com/JonPSmith/EfCore.GenericServices repo. It uses a small NuGet package called GenericServices.StatusGeneric to supply the IStatusGeneric that all my libraries use.

public IStatusGeneric AddPromotion(decimal actualPrice, string promotionalText)                  
{
    var status = new StatusGenericHandler();
    if (string.IsNullOrWhiteSpace(promotionalText))
    {
        status.AddError("You must provide some text to go with the promotion.", nameof(PromotionalText));
        return status;
    }

    ActualPrice = actualPrice;  
    PromotionalText = promotionalText;

    status.Message = $"The book's new price is ${actualPrice:F}.";

    return status; 
}
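If you want to see the shape of the status pattern without the NuGet package, here is a minimal sketch. The member names mirror the example above, but this is my illustration of the idea, not the real GenericServices.StatusGeneric code, which has many more features.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal sketch of the status pattern described in the article -
// illustrative only, NOT the real GenericServices.StatusGeneric code
public interface IStatusGeneric
{
    IReadOnlyList<string> Errors { get; }
    bool IsValid { get; }
    string Message { get; set; }
}

public class StatusGenericHandler : IStatusGeneric
{
    private readonly List<string> _errors = new();

    public IReadOnlyList<string> Errors => _errors;
    public bool IsValid => !_errors.Any();
    public string Message { get; set; } = "Success";

    public IStatusGeneric AddError(string errorMessage, params string[] propertyNames)
    {
        // In a fuller version the property names would feed ASP.NET Core's
        // model validation; here we just record the error message
        _errors.Add(errorMessage);
        return this;
    }
}
```

The caller of a method like AddPromotion simply checks status.IsValid and either continues to SaveChanges or shows status.Errors to the user.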

Business logic type 3 – working over multiple entity classes

For this business type I tend to just create a class/method to do the job. I combine the business logic and the EF Core database accesses in the same code, because it’s quick. The downside of this approach is you have business logic mixed with database accesses, which is NOT what DDD says you should do. However, the code is in one place and DRY (only one version of this business logic exists), so if the code starts to get complex I can always move it up to business logic type 4.

Practically, I put the business logic in a class and register it with the dependency injection service. If there are several different business features linked to one entity/area I would typically put a method for each function, but all in one class. I also have a NuGet status library, which all my libraries use, so it’s easy for each function to return a status if I need to.

The fact that unit testing with a real database is easy with EF Core means it’s quite possible to test your business logic.

NOTE: Some people don’t think it’s right to unit test with a real database, but I find it works for me. If you don’t like unit testing your business logic with real databases, then use the next approach, which makes it really easy to mock the database accesses.

Business logic type 4 – a challenge to write

For this business type it’s appropriate to apply a strong DDD approach, as shown in my article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”. That means I separate the business logic from the database access code by creating a specific repository class for just that business logic. It does take more code/time to do, but the advantages are:

  • Your business logic works on a series of in-memory classes. I find that makes writing the code much easier, as you’re not having to think about the database side at the same time.
  • If the database classes aren’t a good fit for the business logic you can create your own business-only classes. Then you handle the mapping of the business-only classes to the database classes in the repository part.
  • It is very easy to mock the database, because the business logic uses a repository pattern to handle the database accesses.
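To make the separation concrete, here is a minimal sketch of the business-logic-specific repository idea. All the names are illustrative (not taken from my book’s pricing engine): the business class works only on in-memory classes, and the repository interface hides every database access, so a unit test can swap in a stub.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of separating business logic from database access
public class LineItem
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public interface IPlaceOrderDbAccess
{
    // The repository hides ALL database access behind in-memory classes
    List<LineItem> LoadItems(IEnumerable<int> bookIds);
    void AddOrder(decimal totalPrice);
}

public class PlaceOrderAction
{
    private readonly IPlaceOrderDbAccess _dbAccess;
    public PlaceOrderAction(IPlaceOrderDbAccess dbAccess) => _dbAccess = dbAccess;

    // Pure in-memory business logic - easy to write and easy to unit test
    public decimal Action(IEnumerable<int> bookIds)
    {
        var items = _dbAccess.LoadItems(bookIds);
        var total = items.Sum(x => x.Price * x.Quantity);
        _dbAccess.AddOrder(total);
        return total;
    }
}

// A stub repository for unit tests - this is what makes mocking so easy
public class StubPlaceOrderDbAccess : IPlaceOrderDbAccess
{
    public List<LineItem> ItemsToReturn { get; set; } = new();
    public decimal? SavedTotal { get; private set; }
    public List<LineItem> LoadItems(IEnumerable<int> bookIds) => ItemsToReturn;
    public void AddOrder(decimal totalPrice) => SavedTotal = totalPrice;
}
```

In the real application the production implementation of IPlaceOrderDbAccess would contain the EF Core code, kept entirely out of the business logic.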

I generally use my EfCore.GenericBizRunner library with this complex type of business logic, but it can be used for business type 3 too. The library is helpful because it can adapt the input and output of the business logic, which helps to handle the mismatch between the business logic level and the front end – a bit like a mini DDD anti-corruption layer (see the article “Wrapping your business logic with anti-corruption layers”).

Summary diagram

This is a fairly long article which covers both CRUD (Create, Read, Update and Delete) functions and business logic functions. DDD doesn’t really use the terms “CRUD” or “business logic”. In DDD everything is a domain problem, which is solved by calling appropriately-named method(s) in the entity classes.

However, I still find the terms “CRUD” and “business logic” useful to categorise the functions I need to code. Here is a diagram where I try to map the complexity of the code to the ways I work.

Conclusion

I hope this article will help people who are starting to use DDD. Obviously, this is my approach and experience of using DDD, and there is no right answer. Someone else might use a very different approach to DDD, but hopefully we all agree that Eric Evans’s Domain-Driven Design book has been one of the key books in making all of us think more about the business (domain) needs rather than the technology.

Nowadays we build really complex applications really quickly because of the millions of libraries and the documentation that is so easy to access. But as Eric Evans said in his book, “When the domain is complex, this is a difficult task, calling for the concentrated effort of talented and skilled people”. That means we need to up our game as developers if we are going to build applications that a) work, b) perform well, and c) don’t become an unmanageable “ball of mud”.

Happy coding.

Some of the other articles I have written about Domain-Driven Design:

EfCore.GenericServices

EfCore.GenericBizRunner

Improving Domain-Driven Design updates in EfCore.GenericServices

Last Updated: January 28, 2020 | Created: January 27, 2020

I love giving talks, because it makes me think about what I am talking about! The talk I gave to the Developer South Coast meetup about EF Core and Domain-Driven Design (DDD) gave me an idea for a new feature in my EfCore.GenericServices library. This article explains what that new feature is, and why people who use DDD with EF Core might find it useful.

TL;DR – summary

  • EfCore.GenericServices is a library designed to make CRUD (Create, Read, Update, Delete) actions on an EF Core database easier to write. It works with standard classes and DDD-styled classes.
  • In a new release of EfCore.GenericServices I added a new feature to help when writing DDD methods to update relationships – known in DDD as Root and Aggregate.
  • The new approach has two advantages over the existing method in EfCore.GenericServices
    • The update method inside your DDD class is shorter and simpler
    • The new approach removes any reference to EF Core, which works with architectural approaches that isolate the entity class from the database.

NOTE: EfCore.GenericServices is an open-source (MIT licence) library available at https://github.com/JonPSmith/EfCore.GenericServices and on NuGet. There is an article covering all aspects of the library (apart from the 3.1.0 improvement) via this link.

Setting the scene – updating a DDD-styled class

If you are not familiar with DDD-styled classes then you might find it useful to read my article “Creating Domain-Driven Design entity classes with Entity Framework Core” first, especially the part about updating properties in a DDD-styled class.

DDD covers a very wide range of recommendations, but for this article I’m focusing on how you might update an entity class. DDD says we should not simply update various properties, but instead create methods with a meaningful name to do any updates. Plus, it says we should make the class properties read-only so that the methods are the only way to do any update.

Here are two examples of the different approaches: NOTE: the dto (also known as a ViewModel) is a simple class holding the data from the user/system. It contains the key of the Book and the new publish date.

Normal (non-DDD)

var book = context.Find<Book>(dto.BookId);
book.PublishedOn = dto.NewPublishedDate;        
context.SaveChanges();

DDD style

var book = context.Find<Book>(dto.BookId);
book.ChangePublishedDate(dto.NewPublishedDate);        
context.SaveChanges();

The advantage of DDD is that the code is in one place, which makes it easy to find and refactor. The disadvantage is that you need to write more code, which is why I created the EfCore.GenericServices library to reduce the amount of code I have to write. Having started using a DDD approach I looked at my create/update code and noticed a common set of steps:

  1. Load the class we want to update from the database.
  2. Call the correct method (update) or constructor (create).
  3. Call SaveChanges to update the database.

Having seen this pattern, I decided I could automate all of these steps, so I created the library EfCore.GenericServices. As well as automating create/update, the library also handles Read and Delete – all four are known as CRUD (Create, Read, Update, Delete).

The EfCore.GenericServices library offers these features for both normal classes and DDD-styled classes, but the clever bit is around calling DDD methods. For standard classes my library relies on AutoMapper, with its object-to-object copying capability, but for DDD classes the library contains an object-to-method call mapping capability. This part of the library maps properties in a DTO/ViewModel to the parameters of methods/constructors in the DDD class, and then calls that method/constructor (you can read how I map a class to a method in this GenericServices documentation section).

The idea of EfCore.GenericServices is to reduce as much as possible the code the developer has to write. The aim is to remove one of the disadvantages of DDD, i.e. that using DDD typically requires more code than a non-DDD approach.
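To make the object-to-method call mapping idea concrete, here is a minimal, hand-rolled sketch – this is NOT the library’s actual code, and the DTO, method, and matching rules are purely illustrative. It matches a DTO’s properties to a method’s parameters by name (case-insensitive) and type via reflection, then invokes the method:

```csharp
using System;
using System.Linq;

// Stand-in for a DDD entity's update method and its backing state.
DateTime publishedOn = default;
void ChangePublishedDate(DateTime newPublishedDate) => publishedOn = newPublishedDate;

// Hypothetical DTO holding the user's input (an anonymous type for brevity).
var dto = new { NewPublishedDate = new DateTime(2024, 1, 1) };

// Map the DTO's properties onto the method's parameters by name and type,
// then invoke it -- the core idea of "object-to-method call mapping".
Delegate target = ChangePublishedDate;
var args = target.Method.GetParameters()
    .Select(p => dto.GetType().GetProperties()
        .First(prop => string.Equals(prop.Name, p.Name, StringComparison.OrdinalIgnoreCase)
                       && prop.PropertyType == p.ParameterType)
        .GetValue(dto))
    .ToArray();
target.DynamicInvoke(args);

Console.WriteLine($"{publishedOn:yyyy-MM-dd}");  // 2024-01-01
```

The real library does much more (constructor matching, error statuses, EF Core integration), but the core mapping step looks like this.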

UpdateAndSave of relationships is difficult

In the example above I updated a property, PublishedOn, in my Book class. That works fine, but if I want to Create/Update/Delete (CUD) an associated class (known as an Aggregate in DDD) then it gets more complicated. Typically, the update of an aggregate is done via the Root entity, e.g. a Book class might have a collection of Reviews – DDD says the method to add/update/delete an Aggregate (Review) should be done via the Root (Book).

The problem with this in EF Core is that (typically) you need to read in the current relationships before you can apply any CUD action. For example, for my Book class with a collection of Reviews, adding a new Review would require loading the existing Reviews collection first.

Previously, in the EfCore.GenericServices library, the update method in the entity class had to load the relationship(s). This worked by providing the EF Core DbContext as a parameter to the update method, which allowed the method to use explicit loading (or other approaches) to load the relationship. In our Book/Reviews example, that meant loading the Reviews collection inside the Book class.

This works, but some people don’t like having database commands in their DDD classes. In fact, my current client follows the Clean Architecture approach and has the classes that are mapped to the database at the lowest level, with all the EF Core code one level up. This means my previous approach doesn’t work for them.

New Feature – IncludeThen attribute!

I have just released version 3.1.0 of the EfCore.GenericServices library with a second way to do updates. This uses an IncludeThen attribute to tell the UpdateAndSave method to pre-load the relationships the developer sets in the attribute. This removes the need for the update method to load the relationship(s), which has two benefits: a) less code for the developer to write, and b) no requirement for the entity class to interact with EF Core.

The new feature revolves around an IncludeThen attribute which allows the developer to define a set of relationship(s) they want loaded before the method in the DDD class is called. I give two examples of how that works below.

1. Simple example with just one .Include

The IncludeThen attribute is added to the DTO/ViewModel you send to the library’s UpdateAndSave method. In this IncludeThen attribute the developer lists the relationship(s) they want loaded before the update method in the DDD class is called. The example below will include the Reviews collection before calling the access method inside the Book class.

[IncludeThen(nameof(Book.Reviews))]
public class AddReviewWithIncludeDto : ILinkToEntity<Book>
{    
    public int BookId { get; set; }
    public string VoterName { get; set; }
    public int NumStars { get; set; }
    public string Comment { get; set; }
}

The lines to point out are:

  • Line 1: The IncludeThen attribute takes a string, so I could have used “Reviews”, but I typically use the nameof operator so that if I rename the relationship property the code will still work.
  • Line 2: The ILinkToEntity<Book> tells EfCore.GenericServices that the main class is Book.
  • Line 4: This provides the primary key of the book data that we want to load.
  • Lines 5 to 7: These properties should match, in Type and name, the parameters of the access method in the DDD-styled class we want to call.

This example translates into the following query:

var book = context.Set<Book>()
       .Include(x => x.Reviews)
       .SingleOrDefault(x => x.BookId == dto.BookId);

This means the access method to add a new Review is shorter and simpler than the previous approach. Here is my implementation of the AddReviewWithInclude method, which relies on the IncludeThen attribute having already loaded the Reviews collection.

public void AddReviewWithInclude(int numStars, string comment, string voterName)
{
    if (_reviews == null)
        //Check that the IncludeThen attribute loaded the correct relationship. 
        throw new InvalidOperationException("Reviews collection must not be null");
    _reviews.Add(new Review(numStars, comment, voterName));
}

NOTE: I like shorter/simpler code, but the original version of the AddReview method adds a single Review to the database without loading the Reviews collection, so it is quicker than the Include version if there are lots of Reviews. Therefore, there is room for both approaches.

2. Example with .Include and .ThenInclude

The IncludeThen attribute is not limited to a single .Include: it has a second parameter, params string[] thenIncludeNames, to define the ThenIncludes to be loaded too. There is an example below (this only shows one .ThenInclude, but you can have more).

[IncludeThen(nameof(Book.AuthorsLink), nameof(BookAuthor.Author))]
public class AddNewAuthorToBookUsingIncludesDto : ILinkToEntity<Book>
{
    [HiddenInput]
    public int BookId { get; set; }
    public Author AddThisAuthor { get; set; }
    public byte Order { get; set; }
}

This would translate into the following query:

var book = context.Set<Book>()
       .Include(x => x.AuthorsLink).ThenInclude(x => x.Author)
       .SingleOrDefault(x => x.BookId == dto.BookId);

You can also have multiple IncludeThen attributes on a class, e.g.

[IncludeThen(nameof(Book.Reviews))]
[IncludeThen(nameof(Book.AuthorsLink), nameof(BookAuthor.Author))]
public class AnotherDto : ILinkToEntity<Book>
{
    //… rest of code left out    

NOTE: You can find a more detailed description of the IncludeThen attribute on this page in the GenericServices documentation.

Updating to EfCore.GenericServices 3.1.0

If you are already using EfCore.GenericServices, then updating to version 3.1.0 shouldn’t cause any problems, but you should be aware that the status classes (e.g. IGenericStatus) now come from the NuGet library GenericServices.StatusGeneric. All the status code is now in a separate library that all my libraries use.

This has two benefits:

  1. You don’t need to include EfCore.GenericServices in the assemblies that hold the classes EF Core maps to the database; instead you can use the GenericServices.StatusGeneric library. If you are using an architecture like Clean Architecture, where you don’t want EF Core included, then swapping from EfCore.GenericServices (which references EF Core) to the GenericServices.StatusGeneric library removes any EF Core references.
  2. This allows EfCore.GenericServices to work with my EfCore.GenericEventRunner library. Errors from the EfCore.GenericEventRunner library can be passed up to EfCore.GenericServices.

Conclusion

The idea I had while explaining DDD was pretty easy to implement in EfCore.GenericServices (the documentation took twice as long to update!). The change only applies to updates that need access to relationships, but if you are following DDD’s Root/Aggregate approach there can be quite a few of these.

Using the IncludeThen feature typically means that the access method is simpler and shorter, which makes me (and you!) more efficient. And developers with an architecture or approach where the entity classes have no access/knowledge of EF Core (like my current client) can now use this library.

So, thank you to the Developer South Coast Meetup people for approaching me to give a talk, and for the insightful questions from the audience.

Happy coding.

A technique for building high-performance databases with EF Core

Last Updated: January 1, 2020 | Created: December 22, 2019

As the writer of the book “Entity Framework Core in Action” I get asked to build, or fix, applications using Entity Framework Core (EF Core) to be “fast”. Typically, “fast” means the database queries (reads) should return quickly, which in turn improves the scalability of the database.

Over the years I have worked on a lot of databases, and I have found several ways to improve database accesses. In this article I describe a new technique I found that uses an event-driven technique to update cached values in the actual SQL database. For me this approach is robust, fairly easy to add to an existing database, and can make reads quite fast. You might want to bookmark this article in case your boss comes up late in a project and says “the application isn’t fast enough!” – this might be just the approach you need.

The other articles in this series are:

TL;DR; – summary

  • This article describes a way to improve the performance of a database query when using EF Core. For the example application the performance gain is significant.
  • The technique adds new cache values to the existing database and updates them via an event-driven approach provided by my EfCore.GenericEventRunner library.
  • It is one of the simplest ways to add/update cache values, especially if you want to apply it to an existing SQL database.
  • The article is very long because I describe both why this approach works and how to implement it.
  • There is an example application in my GitHub repo, EfCoreSqlAndCosmos, that you can run yourself.

Setting the scene – picking the best way to performance tune

There are some things you can do to improve a database query using EF Core – mainly it’s about writing LINQ queries that translate into efficient SQL database queries. But with big or complex databases this can only take you so far. At that point you need to think about altering the database to make some parts of the query easier to do.

The well-known way to speed things up is to use some form of cache, i.e. a part of the query that takes a long time is pre-computed and stored, so that the query can use the cached value instead. In my example I’m going to pre-compute the average review votes for a book (think Amazon’s star rating) – see this section for the performance improvements this provides. But the cached value(s) could be anything – for one of my clients it was pre-calculating the total pricing of a large and complex project.

Using cached values can make significant improvements to performance (see the example later), but there are some big downsides. Caching data is notoriously difficult to get right. The typical problem is that the cache doesn’t get updated when the original data changes, which means the cache returns old (“stale”) data when it shouldn’t. See this recent tweet from Nick Craver, who works at Stack Overflow – his comment is “Only cache if find you need to”, and he goes on to say “It absolutely has downsides in memory, confusion, complexity, races, etc. It’s not free.”

Therefore, I am always looking for caching designs that are simple and robust, i.e. where you can be sure that the cache gets updated when the data changes. One approach I really like is a two-database CQRS design, which is good, but not simple (I have already written about this: see this article on the performance gain, and another article on a new design using Cosmos DB). I needed a simpler solution that could be added to a client’s existing database, which is where this new event-driven approach comes from. The rest of the article covers adding cached values to an example application and the performance gains that gave me.

Example implementation – the book app

The example application I am using is my “book app”, which is a super-simple book selling application. I used this in my “Entity Framework Core in Action” book, so I have lots of performance data for this app. In this application I cache two sets of data:

  1. A string holding the list of author’s names, e.g. “Erich Gamma, John Vlissides, Richard Helm, Ralph Johnson” (those names are from the famous “Design Patterns: Elements of Reusable Object-Oriented Software” book). This speeds up the display as it saves looking for each author and getting their name.
  2. The Reviews information, which is the number of reviews and the average star rating across those reviews. This speeds up both the sorting/filtering of books by average star rating and the display of each book, because the count and average are pre-calculated rather than computed from all the Reviews on every query.

Here is a diagram showing you the display of one book, with the parts we are going to cache to improve the performance of the book app.

I found there are two ways you can update a calculated cache value. They are:

  1. Complete Revaluation: You can recompute the cached value from the database, e.g. ReviewsCount = Book.Reviews.Count();
  2. Delta Update: You can update the cached value by adding the change (delta) to the existing cached value, e.g. ReviewsCount = ReviewsCount + 1;

The Complete Revaluation approach is the most obvious and works for everything, but as you will see it has some issues when accessing the database. The Delta Update approach is quicker and good for mathematical data, but if you miss something in your implementation you can get the wrong answer.

NOTE: With the Delta Update approach I recommend building a service that recalculates the values using the Complete Revaluation approach and checks/updates any cached values that are wrong. You can run it when the system is lightly loaded to a) ensure the cache values are up to date and b) spot any problems in your Delta Update code.
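To see the difference in code, here is a small self-contained comparison of the two approaches applied to the review cache (plain C#, no EF Core; the star ratings are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var reviews = new List<int> { 5, 3 };          // existing star ratings
int cachedCount = reviews.Count;               // cached value: 2
double cachedAverage = reviews.Average();      // cached value: 4.0

// A new 4-star review arrives.
const int newStars = 4;
reviews.Add(newStars);

// 1. Complete Revaluation: recompute from all the underlying data.
int fullCount = reviews.Count;
double fullAverage = reviews.Average();

// 2. Delta Update: adjust the cached values using only the change.
double totalStars = Math.Round(cachedAverage * cachedCount) + newStars;
int deltaCount = cachedCount + 1;
double deltaAverage = totalStars / deltaCount;

Console.WriteLine($"{fullCount} {fullAverage} | {deltaCount} {deltaAverage}");
// 3 4 | 3 4
```

Both give the same answer here, but the Delta Update only needs the two cached values and the new review, not the whole Reviews collection.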

I will describe both of these approaches, starting with the list of authors’ names, which uses the Complete Revaluation approach.

Authors string: set by Complete Revaluation approach

I used the Complete Revaluation approach to create the comma-delimited string of authors’ names. Here is a diagram giving you an overview of how it works.

Creating an event when an individual Author’s Name changes is described in this section of my previous article, so I’m going to focus on the event handler.

NOTE: You should read the article “EfCore.GenericEventRunner: an event-driven library that works with EF Core” to understand my code as I am using the EfCore.GenericEventRunner library in my example code.

Here is the event handler code that recomputes the string of authors for each book. It’s a bit more complicated than you might think, because the changed name hasn’t yet been written to the database.

public class AuthorNameUpdatedHandler :
    IBeforeSaveEventHandler<AuthorNameUpdatedEvent>
{
    private readonly SqlEventsDbContext _context;

    public AuthorNameUpdatedHandler(SqlEventsDbContext context)
    {
        _context = context;
    }

    public IStatusGeneric Handle(EntityEvents callingEntity, 
        AuthorNameUpdatedEvent domainEvent)
    {
        foreach (var bookWithEvents in _context.BookAuthors
            .Where(x => x.AuthorId == domainEvent.ChangedAuthor.AuthorId)
            .Select(x => x.Book))
        {
            var allAuthorsInOrder = _context.Set<BookWithEvents>()
                .Where(x => x.BookId == bookWithEvents.BookId)
                .Select(x => x.AuthorsLink.OrderBy(y => y.Order).Select(y => y.Author).ToList())
                .Single();

            var newAuthorsOrdered = string.Join(", ", allAuthorsInOrder.Select(x =>
                x.AuthorId == domainEvent.ChangedAuthor.AuthorId
                    ? domainEvent.ChangedAuthor.Name 
                    : x.Name));

            bookWithEvents.AuthorsOrdered = newAuthorsOrdered;
        }

        return null;
    }
}

The lines to point out in the code are:

  • Lines 14 to 16: The author may have worked on multiple books, so we need to update each book’s AuthorsOrdered string. Note that the domainEvent contains an instance of the Author class whose Name has been changed.
  • Lines 18 to 21: Because the author’s new name that created this event isn’t in the database yet, we can’t just read the current Author’s Name from the database. I therefore read in all the Author classes, in the correct order, first…
  • Lines 23 to 26: I then go through them and, when it comes to the Author that has been changed, substitute the new Name string for the existing database Name string.

This last point shows that we need to be careful about accessing the database, because the events are run just before SaveChanges and therefore some data hasn’t been saved.
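The substitution step can be shown on its own. Here is a sketch (illustrative data, no EF Core) of building the new comma-delimited string, where the changed author’s not-yet-saved Name replaces the stale database value:

```csharp
using System;
using System.Linq;

// Authors as read from the database, in Order -- one still has the old name.
var allAuthorsInOrder = new[]
{
    (AuthorId: 1, Name: "Erich Gamma"),
    (AuthorId: 2, Name: "John Vlissides (old name)"),
};

// The event carries the author whose Name just changed (not yet saved).
var changedAuthor = (AuthorId: 2, Name: "John Vlissides");

var newAuthorsOrdered = string.Join(", ", allAuthorsInOrder.Select(x =>
    x.AuthorId == changedAuthor.AuthorId
        ? changedAuthor.Name   // substitute the not-yet-saved name
        : x.Name));

Console.WriteLine(newAuthorsOrdered);  // Erich Gamma, John Vlissides
```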

Review cached values: set by Delta Update approach

For the ReviewsCount and ReviewsAverageVotes I used the Delta Update approach, which works well with mathematical changes. Here is a diagram showing how the “add a new review” event works:

As you will see it is much quicker to calculate and doesn’t need to access the database, which also makes the code simpler. Here is the “add new review” event handler.

public class ReviewAddedHandler :
    IBeforeSaveEventHandler<BookReviewAddedEvent>
{
    public IStatusGeneric Handle(EntityEvents callingEntity, 
        BookReviewAddedEvent domainEvent)
    {
        var totalStars = Math.Round(
            domainEvent.Book.ReviewsAverageVotes * 
            domainEvent.Book.ReviewsCount)
            + domainEvent.NumStars;
        var numReviews = domainEvent.Book.ReviewsCount + 1;
        domainEvent.UpdateReviewCachedValues(numReviews, 
            totalStars / numReviews);

        return null;
    }
}

The lines to point out in the code are:

  • Lines 7 to 9: I get the total stars applied to this book by multiplying the average star rating by the number of reviews (simple maths). I then add the delta change (line 10), which is the star rating from the new Review.
  • Line 11: I add 1 to the ReviewsCount because this is an “Add new Review” event.
  • Lines 12 to 13: The Book class provides an Action that the event handler can call to set the ReviewsCount and ReviewsAverageVotes. This is a nice way to control what the event handler can do to the Book class.

Building code to check/update the cache values

As I said earlier, it is a good idea to back up the Delta Update approach with code that recalculates the cached values using the Complete Revaluation approach. If you are adding either approach to an existing database, you will need a tool to set up the cache values for the existing data anyway. And with a little bit more work you can use the same tool to catch any updates that are going wrong, with logging/feedback so that you can try to track down the software problem.

I always build a tool like this if I add cache values to a database. You can see my HardResetCacheService class that uses the Complete Revaluation approach to check and reset as necessary any cache values. It’s not super-fast, but you can run it when you know the system is normally lightly loaded, or manually if you think there is something wrong. Hopefully you won’t use it a lot, but if you do need it you will be very happy it’s there!

Making the cache design robust

There are two parts to making the design robust, i.e. ensuring the cache values are always correct. As I said at the start, making sure the cache values don’t return old (“stale”) or incorrect data is a big challenge. In this design there are two things to cover:

  1. Making sure a change in the underlying data is reflected into the cache values
  2. Handling multiple updates happening in parallel.

Underlying data change always updates the cache values

The event-driven system I am using ensures that a change in any data that affects a cache value is captured and sent to the appropriate event handler to update the cache property. Assuming you haven’t got a bug in your system, this will work.

The other part of the event-driven design is that the original data and the cache values are stored in the same transaction. This means if anything goes wrong then neither change is saved (see more on the design of the event-driven approach to see how this works).

Handling multiple updates happening in parallel

We now need to talk about multiple updates happening in parallel, which brings in all sorts of interesting things. Say two people added a new review to the same book at exactly the same time. If we don’t handle this correctly, the cache update from one of those reviews could be lost. This is known as a concurrency issue.

This is the part that took the most time to think through. I spent days thinking about all the different concurrency issues that could cause a problem, and then even more days coming up with the best way to handle them.

I considered doing the cache update inside a transaction, but the isolation level needed for totally accurate cache updating required ‘locking’ a lot of data. Even using direct SQL commands to calculate and update the cache isn’t safe (see this fascinating SO question/answer entitled “Is a single SQL Server statement atomic and consistent?”).

I found the best way to handle concurrency issues is to use EF Core’s concurrency tools to throw a DbUpdateConcurrencyException and then work out the correct cache value. This is probably the most complex part of the design, so I start with the try/catch of exceptions in my EfCore.GenericEventRunner library. Here is a diagram showing what happens if two reviews are added at the same time.

Now let’s look at the code I need to handle these types of concurrency issue.

Adding handling exceptions from SaveChanges/SaveChangesAsync

First I needed to add a way to capture exceptions when the library calls SaveChanges or SaveChangesAsync. I already have a pattern for doing this in my other libraries (EfCore.GenericServices and EfCore.GenericBizRunner). This allows you to add an exception handler to catch database exceptions.

Up until now this feature has been used for turning database errors into error messages that are a) user-friendly and b) don’t disclose anything about your system (see this section from my article “EF Core – validating data and catching SQL errors”). But now I needed a way to handle the DbUpdateConcurrencyException, where my code fixes the problem that caused the concurrency exception and then calls SaveChanges/SaveChangesAsync again.

To do that I have added the same exception handler capability into my EfCore.GenericEventRunner library, but enhanced it for handling concurrency issues. Previously it returned null (exception not handled, so rethrow) or a “bad status” (contains user-friendly error messages to show the user). I added a third option: return a “good status” (i.e. no errors), which means try the call to SaveChanges/SaveChangesAsync again.

This “good status” is what I use when I fix the problems with the cache values.  Here is the code in my EfCore.GenericEventRunner library that surrounds its calling of the base SaveChanges.

do
{
    try
    {
        status.SetResult(callBaseSaveChanges.Invoke());
        break; //This breaks out of the do/while
    }
    catch (Exception e)
    {
        var exceptionStatus = _config.SaveChangesExceptionHandler?
            .Invoke(e, context);
        if (exceptionStatus == null)
            throw; //no handler, or handler couldn’t handle this exception        

        status.CombineStatuses(exceptionStatus);
    }
} while (status.IsValid);

The lines to point out in the code are:

  • Line 1: The call to SaveChanges is in a do/while loop. This is needed because, if the SaveChangesExceptionHandler fixes a concurrency problem, it needs to call SaveChanges again to store the corrected data. And because another concurrency issue could happen on this second SaveChanges, it may call the SaveChangesExceptionHandler again.
  • Line 6: If the call to the base SaveChanges is successful, then it exits the do/while as all is OK.
  • Lines 12 to 13: This is case 1, no handler or handler can’t manage this exception, so the exception is rethrown.
  • Line 17: The while will loop back and call SaveChanges again. If there is an exception the process is run again.
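Stripped of EF Core and the status classes, the retry pattern reduces to the sketch below, where a simulated “save” fails once with a concurrency-style exception, the handler fixes the data and reports success, and the loop calls save again (the Save/Handle names are stand-ins, not the library’s API):

```csharp
using System;

int attempts = 0;
bool dataFixed = false;

// Stand-in for the base SaveChanges call: throws until the data is fixed.
int Save()
{
    attempts++;
    if (!dataFixed) throw new InvalidOperationException("simulated concurrency clash");
    return 1;  // rows written
}

// Stand-in for the registered exception handler: returns true ("good status")
// if it could fix the problem, false (equivalent to null) otherwise.
bool Handle(Exception e)
{
    if (e is not InvalidOperationException) return false;  // rethrow case
    dataFixed = true;  // "fix" the cache values
    return true;       // tell the loop to call Save again
}

int result;
while (true)
{
    try { result = Save(); break; }
    catch (Exception e)
    {
        if (!Handle(e)) throw;  // no handler, or handler couldn't fix it
    }
}
Console.WriteLine($"attempts={attempts} result={result}");  // attempts=2 result=1
```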

Using the exception handler to fix cache concurrency issues

Now that there is a way to capture an exception coming from SaveChanges/SaveChangesAsync in my EfCore.GenericEventRunner library, we can use it to capture concurrency issues around the cache values.

The first thing I need to do is tell EF Core to throw a DbUpdateConcurrencyException if it detects a concurrency issue (see previous diagram). To do this I marked the three properties with the ConcurrencyCheck attribute, as shown below.

public class BookWithEvents : EntityEvents
{
    //… other code left out
    [ConcurrencyCheck]
    public string AuthorsOrdered { get; set; }

    [ConcurrencyCheck]
    public int ReviewsCount { get; set; }

    [ConcurrencyCheck]
    public double ReviewsAverageVotes { get; set; }
    //... other code left out
}

Then I created a method called HandleCacheValuesConcurrency, which I registered with the GenericEventRunner on startup (see this documentation on how to do that, or the code in the example application). I’m not going to show my whole SaveChangesExceptionHandler method due to space (you can find it here), but I do want to show you the two parts that handle the fixing of the AuthorsOrdered string and the two Review cache values.

1. Complete Revaluation example: fixing an AuthorsOrdered concurrency issue

Here is the method I call from inside my HandleCacheValuesConcurrency method to handle any AuthorsOrdered concurrency issue. Its job is to work out whether there was a concurrency issue with the AuthorsOrdered string and, if there was, to recalculate it. Here is the code:

public void CheckFixAuthorOrdered(BookWithEvents bookThatCausedConcurrency, 
    BookWithEvents bookBeingWrittenOut)
{
    var previousAuthorsOrdered = (string)_entry.Property(
        nameof(BookWithEvents.AuthorsOrdered)).OriginalValue;

    if (previousAuthorsOrdered != bookThatCausedConcurrency.AuthorsOrdered)
    {
        var allAuthorsIdsInOrder = _context.Set<BookWithEvents>()
            .Where(x => x.BookId == bookBeingWrittenOut.BookId)
            .Select(x => x.AuthorsLink.OrderBy(y => y.Order)
            .Select(y => y.AuthorId)).ToList()
            .Single();

        var namesInOrder = allAuthorsIdsInOrder.Select(x => 
            _context.Find<AuthorWithEvents>(x).Name);
        var newAuthorsOrdered = namesInOrder.FormAuthorOrderedString();

        _entry.Property(nameof(BookWithEvents.AuthorsOrdered))
             .CurrentValue = newAuthorsOrdered;

        _entry.Property(nameof(BookWithEvents.AuthorsOrdered))
             .OriginalValue = bookThatCausedConcurrency.AuthorsOrdered;
    }
}

I’m not going to explain all the lines in that code (see the actual source code, which has comments), but I do want to point out that I get all the Authors’ Names using the EF Core Find command (see line 16, highlighted).

I use EF Core’s Find method because it works in a special way: Find a) first looks for the entity you are asking for in the tracked entities in the current DbContext instance, and if that fails to find anything, b) it looks in the database. I need this Find feature because I know at least one Author’s Name has been updated in this DbContext instance (which kicked off the update of the AuthorsOrdered string), but hasn’t yet been written to the database – that will only happen when the SaveChanges method is successful.

If you are using the Complete Revaluation approach then you also need to consider whether the database has everything you need – it most likely doesn’t, and you will need to look in the tracked entities in the current DbContext instance to find the data you need to fix the concurrency issue.
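That tracked-entities-first lookup order can be mimicked with two dictionaries. This is just an illustration of the order Find searches in, not EF Core’s implementation:

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-ins: entities this DbContext instance is tracking
// (including not-yet-saved changes) and rows already in the database.
var tracked = new Dictionary<int, string> { [1] = "New Name" };
var database = new Dictionary<int, string> { [1] = "Old Name", [2] = "Other Author" };

// Find-style lookup: tracked entities first, then fall back to the database.
string FindName(int id) =>
    tracked.TryGetValue(id, out var t) ? t
    : database.TryGetValue(id, out var d) ? d
    : throw new KeyNotFoundException($"No author with id {id}");

Console.WriteLine(FindName(1));  // New Name  (the pending change wins)
Console.WriteLine(FindName(2));  // Other Author
```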

2. Delta Update example: fixing the Review cache values

Here is the method I call from inside my SaveChangesExceptionHandler to handle any Review cache value concurrency issue, i.e. with the ReviewsCount and/or the ReviewsAverageVotes. Its job is to work out whether there was a concurrency issue with these two cache values and, if there was, to recalculate them. Here is the code:

public void CheckFixReviewCacheValues(
    BookWithEvents bookThatCausedConcurrency, 
    BookWithEvents bookBeingWrittenOut)
{
    var previousCount = (int)_entry.Property(nameof(BookWithEvents.ReviewsCount))
        .OriginalValue;
    var previousAverageVotes = (double)_entry.Property(nameof(BookWithEvents.ReviewsAverageVotes))
        .OriginalValue;

    if (previousCount != bookThatCausedConcurrency.ReviewsCount ||
        previousAverageVotes != bookThatCausedConcurrency.ReviewsAverageVotes)
    {
        var previousTotalStars = Math.Round(previousAverageVotes * previousCount);
        var countChange = bookBeingWrittenOut.ReviewsCount - previousCount;
        var starsChange = Math.Round(bookBeingWrittenOut.ReviewsAverageVotes *
             bookBeingWrittenOut.ReviewsCount) - previousTotalStars;

        var newCount = bookThatCausedConcurrency.ReviewsCount + countChange;
        var totalStars = Math.Round(bookThatCausedConcurrency.ReviewsAverageVotes *
             bookThatCausedConcurrency.ReviewsCount) + starsChange;

        _entry.Property(nameof(BookWithEvents.ReviewsCount))
             .CurrentValue = newCount;
        _entry.Property(nameof(BookWithEvents.ReviewsAverageVotes))
             .CurrentValue = totalStars / newCount;

        _entry.Property(nameof(BookWithEvents.ReviewsCount))
             .OriginalValue = bookThatCausedConcurrency.ReviewsCount;
        _entry.Property(nameof(BookWithEvents.ReviewsAverageVotes))
             .OriginalValue = bookThatCausedConcurrency.ReviewsAverageVotes;
    }
}

Like the last concurrency method, I’m not going to explain all the lines in that code (see the source code here, which has comments), but I will talk about the differences from the AuthorsOrdered example.

The code doesn’t need to access the database because it can derive the correct cache values from a) the book that caused the concurrency exception and b) the current book that was trying to update the database. From these two sources the method can a) extract the two updates and b) combine them into one, which is equivalent to what would have happened if the two updates hadn’t ‘collided’.

This follows the Delta Update approach, which allows it to fix the problem without needing to recalculate the two values by loading all the Reviews. To my mind this is quicker, which makes it less prone to hitting another concurrency issue while you are fixing these cache values.
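The merge arithmetic can be checked in isolation. In this sketch (plain C#, made-up numbers) a book starts with 2 reviews averaging 4 stars; user A adds a 5-star review and saves first, user B adds a 2-star review and hits the concurrency exception, and B’s handler combines the two updates:

```csharp
using System;

// Cache values when both users read the book: 2 reviews, average 4.0.
int previousCount = 2;
double previousAverageVotes = 4.0;

// User A added a 5-star review and saved successfully first.
int concurrentCount = 3;
double concurrentAverageVotes = (4.0 * 2 + 5) / 3;   // what is now in the DB

// User B added a 2-star review; its save threw a concurrency exception.
int writtenOutCount = 3;
double writtenOutAverageVotes = (4.0 * 2 + 2) / 3;

// Merge: extract B's delta relative to the original values, then apply it
// on top of what A actually wrote (same maths as CheckFixReviewCacheValues).
double previousTotalStars = Math.Round(previousAverageVotes * previousCount);
int countChange = writtenOutCount - previousCount;
double starsChange = Math.Round(writtenOutAverageVotes * writtenOutCount)
                     - previousTotalStars;

int newCount = concurrentCount + countChange;
double totalStars = Math.Round(concurrentAverageVotes * concurrentCount)
                    + starsChange;

Console.WriteLine($"{newCount} reviews, average {totalStars / newCount}");
// 4 reviews, average 3.75
```

The result, 4 reviews averaging 3.75 stars, is exactly what a Complete Revaluation over all four reviews (e.g. 5, 3, 5, 2 stars) would give.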

Weighing up the improvements against effort and added complexity

I always look at a new approach and measure its success based on the gains, in this case the performance improvement, against the effort needed to achieve that performance improvement. I also look at the complexity that this new approach adds to an application, as more complexity adds to the long-term support of the application.

Performance improvements

In terms of improving the performance this is a great result. One of the key queries that I expect users of my book app to use is sort and/or filter by votes. I run this with the first 100 books being returned. I measure the performance using the Chrome browser’s developer (F12) Network page in milliseconds running on my local PC, taking the average of about 10 consecutive accesses. For comparison the viewing of the Home page, which only has text and some Razor functions, takes about 8 ms.

The chart below shows “Sort by Votes” is about 15 times quicker and the “sort/filter” version is about 8 times faster. That is a very good result.

The other thing to note is the small improvement of test 1, sort by publication date. This is due to the cached AuthorsOrdered string which removes the many-to-many join of authors names for each book.

The author’s string was a big problem in EF Core 2, with a difference of about 3 to 1 (230 ms for EF Core 2 against 80 ms for the cached-value version). The gap is now small because EF Core 3 is quicker than earlier versions of EF Core, as it combines the main book query with the many-to-many join of the authors’ names. This suggests the AuthorsOrdered cached value maybe isn’t worth keeping – the extra complexity doesn’t give a good gain in performance.

Development effort

It certainly took me a lot of time, about 40 to 50 hours, to build this example application, but that includes the setup of the new approach and all its associated parts. There was also a lot of thinking/research time to find the best way through. Next time it would be quicker.

In actual fact, the first usage of this approach was for one of my clients, and I keep a detailed timesheet for all of my client work. That says it took 11 hours to add a Total Price cached value, using a Delta Update approach, to an existing database. I think (and I think the client does too) 11 hours is a good price for the feature/performance gain it provided.

Added Complexity

I built a cached values version in the chapter on performance tuning in my book “Entity Framework Core in Action” (see this article I wrote that summarises that chapter). But in that case, I added the setting of the cached values into the existing business logic which made things much more complicated. This new version is much less ‘intrusive’, i.e. the cache update code is separated from the existing business logic which makes it much easier to refactor the code.

With this event-driven approach you only have to add minimal code in your classes (i.e. call an AddEvent method whenever certain events happen). Then all the complex code to update the cached values is in specific event handlers and concurrency methods. This separation makes this approach much nicer than my original version because the cache code isn’t mixed in with all of your other business logic.

Conclusion

I am very pleased with this new event-driven approach to improving database queries performance. I have done a lot of work on database performance, both for my book and for multiple clients, and this approach is the easiest so far. This new approach is fairly straightforward to apply to an existing database, and it keeps the cache update code separate from the rest of the business logic.

It took me ages to research and write this article – maybe 20 hours on top of the 40/50 hours for writing the code, which is very long for me. But I learnt a lot while looking for the best way to handle simultaneous updates of the same cache values – things like SQL transaction isolation levels, whether a single SQL command is atomic (it isn’t) and what to do inside a concurrency exception in EF Core. I feel quite confident about using this approach in a client’s application; in fact I am already using it with my current client to good effect.

The approach I covered isn’t super-simple, but I hope I have described it well enough that you can understand it and use it yourself. Please do look at the example code, which I added to my open-source EfCoreSqlAndCosmos project, and at my open-source EfCore.GenericEventRunner, which is a key part of my design but can also be useful in other situations (see this article for more on that).

I gave you two ways to compute the cached values: Complete Revaluation and Delta Update. Which of these you use will depend on the type of data/query that produces the cached value. I quite like the Delta Update as it’s so fast (which means there is minimal performance loss on the write-side of your application), but some data just doesn’t fit that way of working, especially strings.

All the best with your developer journey.

EfCore.GenericEventRunner: an event-driven library that works with EF Core

Last Updated: January 3, 2020 | Created: December 1, 2019

In the first article I described an event-driven architecture that works with Entity Framework Core (EF Core). In this article I go into the details of how to use the EfCore.GenericEventRunner library that implements this event-driven design. This article covers the specific details of why and how to use this library.

NOTE: The EfCore.GenericEventRunner is an open-source (MIT licence) NuGet library designed to work with EF Core 3 and above. You can also find the code in this GitHub repo.

The articles in this series are:

TL;DR; – summary

  • This article describes how to use the EfCore.GenericEventRunner library, available on NuGet  and on GitHub.
  • EfCore.GenericEventRunner adds a specific event-driven system to EF Core. See this article for a description of this event-driven design.
  • I break up the description into five sections
    • Code to allow your EF Core classes to send events
    • How to build event handlers
    • How the events are run when SaveChanges/SaveChangesAsync is called.
    • How to register your event handlers and GenericEventRunner itself.
    • How to unit test an application which uses GenericEventRunner.

Overview of EfCore.GenericEventRunner library

I’m going to go through the four parts of the EfCore.GenericEventRunner library (plus something on unit testing) to demonstrate how to use this library. I start with a diagram which will give you an idea of how you might use GenericEventRunner. Then I will dive into the four parts.

NOTE: If you haven’t read the first article, then I recommend you read/skim it – it will make it easier to understand what I am trying to do here.

In the diagram the blue rectangles are classes mapped to the database, with the events shown in light color at the bottom. The orange rounded rectangle is an event handler.

Here are the four parts of the library, plus a section on unit testing:

  1. ForEntities: This has the code that allows a class to contain and create events.
  2. ForHandlers: This contains the interfaces for building handlers.
  3. ForDbContext: The DbContextWithEvents<T> which contains the overriding of the SaveChanges/ SaveChangesAsync.
  4. The code for registering your event handlers and GenericEventRunner’s EventsRunner.
  5. How to unit test an application which uses GenericEventRunner (and logging).

NOTE: The code in this article is taken from the code in the EfCore.GenericEventRunner repo used to test the library. I suggest you look at that code and the unit tests to see how it works.

1. ForEntities: code for your entity classes (see DataLayer in GenericEventRunner repo)

For this example, I am going to show you how I built the “1. Create new Order” (LHS of the last diagram). The purpose of this event is to ask the stock part to a) check there is enough stock to fulfil this order, and b) allocate some stock ready for this order.

The first thing I needed is an “allocate” event. An event is a class that inherits the IDomainEvent interface. Here is my “allocate” event.

public class AllocateProductEvent : IDomainEvent
{
    public AllocateProductEvent(string productName, int numOrdered)
    {
        ProductName = productName;
        NumOrdered = numOrdered;
    }

    public string ProductName { get; }
    public int NumOrdered { get; }
}

In this case this event is sent from a new order which hasn’t been saved to the database. Therefore, I have to send the ProductName (which in my system is a unique key) and the number ordered because it’s not (yet) in the main database. Even if the data is in the database, I recommend sending the data in the event, as a) it saves a database access and b) it reduces the likelihood of concurrency issues (I’ll talk more on concurrency issues in the next article).

The next thing is to add that event to the Order class. To be able to do that the Order class must inherit the abstract class EntityEvents, e.g.

public class Order : EntityEvents
{ 
       //… rest of class left out

The EntityEvents class provides an AddEvent method which allows you to add a new event to your entity. It also stores the events for the Event Runner to look at when SaveChanges is called. (Note: the events aren’t saved to the database – they only hang around as long as the class exists.)

Below is the Order constructor, with the focus on the AddEvent call that creates the AllocateProductEvent.

public Order(string userId, DateTime expectedDispatchDate,
    ICollection<BasketItemDto> orderLines)
{
    //… some other code removed

    TotalPriceNoTax = 0;
    foreach (var basketItem in orderLines)
    {
        TotalPriceNoTax += basketItem.ProductPrice * basketItem.NumOrdered;
        AddEvent(new AllocateProductEvent(
             basketItem.ProductName, basketItem.NumOrdered));
    }
}

If you don’t use DDD, then the typical way to create an event is to catch the setting of a property. Here is an example of doing that taken from the first article.

private string _county;
public string County
{
    get => _county;
    private set
    {
        if (value != _county)
            AddEvent(new LocationChangeEvent(value));
        _county = value;
    }
}

This works because the property County is changed into an EF Core backing field, and the name of the column in the table is unchanged. But because it’s now a backing field, EF Core 3 will (by default) read/write the field, not the property, which is good because otherwise loading the entity could trigger an event.

NOTE: EF Core 3 default action is to read/write the field, but before EF Core 3 the default was to set via the property, which would have generated an event.

Types of events

When it comes to adding an event there are two separate lists: one for BeforeSave events and one for AfterSave events. The names give you a clue to when the handler is run: the BeforeSave events run before SaveChanges is called, and AfterSave events are run after SaveChanges is called.

I cover the two types of events in the next section, but I can say that BeforeSave events are by far the most used type, so that is the default for the AddEvent method. If you want to send an event to be run after SaveChanges, then you need to add a second parameter with the type, e.g. AddEvent(…, EventToSend.AfterSave).
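As a sketch, the two forms of the AddEvent call look like this inside an entity class that inherits EntityEvents (the event names are taken from this article’s examples; the parameterless OrderReadyToDispatchEvent constructor is my assumption):

```csharp
// Default: a BeforeSave event - its handler runs before SaveChanges
AddEvent(new AllocateProductEvent(productName, numOrdered));

// AfterSave event - needs the second parameter; its handler runs
// after a successful SaveChanges
// (parameterless constructor assumed for illustration)
AddEvent(new OrderReadyToDispatchEvent(), EventToSend.AfterSave);
```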

2. ForHandlers: Building the event handlers

You need to create the event handlers to handle the events that the entity classes sent out. There are two types of event handler IBeforeSaveEventHandler<TEvent> and IAfterSaveEventHandler<TEvent>. Let me explain why I have the two types.

BeforeSave events and handlers

The BeforeSave events and handlers are all about the database. The idea is the BeforeSave handlers can change the entity classes in the database, and those changes are saved with the original data that your normal (non-event trigger) code set up. As I explained in the first article saving the original data and any data changed by the event together in one transaction is safe, as the data can’t get out of step.

Typically, a BeforeSave event will be triggered when something changes, or an event happens. The handler then either does some calculation, maybe accessing the database and returns a result to be saved in the calling entity and/or it might create, update or remove (delete) some other entity classes. The data changes applied by the normal code and the data changes applied by the event handler are saved together.

BeforeSave event handlers also have two extra features:

1. Firstly, they can return an (optional) IStatusGeneric status, which can send back errors. If it returns null or a status with no errors then the SaveChanges will be called.

Here is an example of a BeforeSave event handler which was called by the AllocateProductEvent you saw before. This checks that there is enough stock to accept this order. If it returns a status with any errors, then that stops SaveChanges/ SaveChangesAsync from being called.

public class AllocateProductHandler : IBeforeSaveEventHandler<AllocateProductEvent>
{
    private readonly ExampleDbContext _context;

    public AllocateProductHandler(ExampleDbContext context)
    {
        _context = context;
    }

    public IStatusGeneric Handle(EntityEvents callingEntity,
          AllocateProductEvent domainEvent)
    {
        var status = new StatusGenericHandler();
        var stock = _context.Find<ProductStock>(domainEvent.ProductName);
        //… test to check it was found OK removed 

        if (stock.NumInStock < domainEvent.NumOrdered)
            return status.AddError(
                $"I could not accept this order because there wasn't enough {domainEvent.ProductName} in stock.");

        stock.NumAllocated += domainEvent.NumOrdered;
        return status;
    }
}

The thing to highlight is the stock check: if there isn’t enough stock the handler adds an error to the status and returns it immediately, which stops SaveChanges from being called.

By default, the first BeforeSave event handler that returns an error stops the process immediately. If you want all the BeforeSave events to continue, say to get all the possible error messages, then you can set the StopOnFirstBeforeHandlerThatHasAnError property to false in the GenericEventRunnerConfig class provided at setup time (see section “4. ForSetup: Registering your event handlers”).

If the returned status has errors, then all the events are cleared and SaveChanges/Async isn’t called (see section “3. ForDbContext” for how these errors are returned to the application).

NOTE: Only a few of your BeforeSave handlers will need a status, so you can return null as a quick way to say there are no errors (or, more precisely, that the handler is not looking for errors). You can also return a status with no errors and set the status’s success Message string, which means that Message will be returned at the top level (assuming a later BeforeSave handler doesn’t overwrite it).

2. Secondly, BeforeSave handlers can raise more events, directly or indirectly. For instance, say an event handler changed a property that raised another event: we need to pick up that new event too. For that reason, the BeforeSave handler runner keeps looping around checking for new events until there are no more.

NOTE: There is a property in the GenericEventRunnerConfig class called MaxTimesToLookForBeforeEvents, used to stop circular events, e.g. an event calls something that raises the same event again, which would loop forever. If the BeforeSave handler runner loops around more than the MaxTimesToLookForBeforeEvents value (default 6) it throws an exception. See section “4. ForSetup” on how to change the GenericEventRunner’s configuration.

AfterSave events and handlers

AfterSave events are there to do things once the SaveChanges is successful and you know the data is OK. Typical uses are clearing a cache because certain data has changed, or maybe use SignalR to update a screen with the changed data. Unlike the BeforeSave events the events runner only looks once at all the events in the entity classes, so AfterSave events handlers can’t trigger new events.

Here is an example of an AfterSaveEventHandler that would send an internal message to the dispatch department once an Order is successfully placed in the database.

public class OrderReadyToDispatchAfterHandler : 
    IAfterSaveEventHandler<OrderReadyToDispatchEvent>
{
    public void Handle(EntityEvents callingEntity,
         OrderReadyToDispatchEvent domainEvent)
    {
        //Send message to dispatch that order has been checked and is ready to go
    }
}

AfterSave event handlers aren’t “safe” like the BeforeSave events, in that if they fail the database update is already done and can’t be undone. Therefore, you want to make sure your AfterSave event handlers aren’t going to cause exceptions. They also shouldn’t update the database (that’s the job of the BeforeSave event handlers).

AfterSave event handlers also don’t return any status, so you can’t know if they worked or not (see one way around this in section “4. ForSetup” on how to check an AfterSave event handler ran).

3. ForDbContext: Overriding of EF Core’s base SaveChanges/SaveChangesAsync

To make this all work GenericEventRunner needs to override the base SaveChanges/SaveChangesAsync methods. The GenericEventRunner library provides a class called DbContextWithEvents<T>, which contains overrides for SaveChanges/SaveChangesAsync and two extra versions, SaveChangesWithStatus/SaveChangesWithStatusAsync, that return a status. Here is the ExampleDbContext that I use for unit testing GenericEventRunner.

public class ExampleDbContext
    : DbContextWithEvents<ExampleDbContext>
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<LineItem> LineItems { get; set; }
    public DbSet<ProductStock> ProductStocks { get; set; }
    public DbSet<TaxRate> TaxRates { get; set; }

    public ExampleDbContext(DbContextOptions<ExampleDbContext> options, 
        IEventsRunner eventRunner = null)
        : base(options, eventRunner)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<ProductStock>().HasKey(x => x.ProductName);
    }
}

The inheritance is the only change in your DbContext. Instead of inheriting DbContext, you inherit GenericEventRunner’s DbContextWithEvents<T>, where T is your DbContext class. This overrides SaveChanges/SaveChangesAsync and adds some other methods and the IStatusGeneric<int> StatusFromLastSaveChanges property.

If you are already overriding SaveChanges, you can still layer the DbContextWithEvents<T> class on top of your SaveChanges method, which GenericEventRunner will override and call at the appropriate time. If you want to customise your DbContext then the methods used in the DbContextWithEvents<T> class are public, so you can use them directly. This allows you to reconfigure the GenericEventRunner SaveChanges/SaveChangesAsync to suit your system.

What happens if BeforeSave event handler send back an error?

As I said earlier, if the BeforeSave event handlers return an error then SaveChanges/Async is not called, but you most likely want to get the error messages, which are designed to be shown to the user. I expect most developers to call SaveChanges/Async, so GenericEventRunner throws a GenericEventRunnerStatusException if the combined statuses of all the BeforeSave handlers have any errors. You can then get the errors in two ways:

  • The Message property of the GenericEventRunnerStatusException contains a string starting with an overall message and then each error (separated by the Environment.NewLine characters). This returns just the error text, not the full ValidationResult.
  • For a more detailed error response you can access the IStatusGeneric<int> StatusFromLastSaveChanges property in the DbContext. This provides you with access to the Errors list, where each error has an ErrorResult of type ValidationResult, where you can specify the exact property that caused a problem.

NOTE: The IStatusGeneric<int> StatusFromLastSaveChanges property will be null if SaveChanges hasn’t yet been called.

The alternative is to call the SaveChangesWithStatus/SaveChangesWithStatusAsync methods directly. That way you can get the status directly without having to use a try/catch. This makes getting the status easier, but if you have a lot of existing code that already calls SaveChanges/SaveChangesAsync then it’s most likely best to stay with SaveChanges/Async and capture the exception where you need to.
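Here is a sketch of the two approaches (the HasErrors property name comes from my StatusGeneric interface; treat the details as illustrative):

```csharp
// Option 1: call SaveChanges and catch the exception
try
{
    context.SaveChanges();
}
catch (GenericEventRunnerStatusException e)
{
    // e.Message has the combined error text; for the full detail
    // use the status held in the DbContext
    var errors = context.StatusFromLastSaveChanges.Errors;
    //... show the errors to the user
}

// Option 2: call the status-returning version - no try/catch needed
var status = context.SaveChangesWithStatus();
if (status.HasErrors)
{
    //... show status.Errors to the user
}
```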

What is the state of the current DbContext when there are exceptions?

We need to consider what state the DbContext is in when there are exceptions. Here is the list:

  • Exceptions before SaveChanges is called (other than GenericEventRunnerStatusException): In this state there may be un-saved changes in the DbContext, and any events are still there. Therefore, you need to be very careful if you want to call SaveChanges again (Note: this isn’t much different from what happens if you don’t have events – you don’t really know what state the DbContext is in after an exception and you should not try to call SaveChanges).
  • Exceptions during SaveChanges, e.g. DbUpdateConcurrencyException. If you get an exception during SaveChanges itself then it’s something about the database. The DbContext will have all the data ready to retry the SaveChanges, if you can “fix” the problem. If you call SaveChanges again (after fixing it) and it succeeds then all the BeforeEvents have been cleared (because they have already been applied to the DbContext), but any AfterSave events are still there and will run.
  • Exceptions after SaveChanges was called. The database is up to date. If an AfterSave event handler throws an exception then other AfterSave event handlers may be lost. As I said AfterSave event handlers are not robust.

4. ForSetup: Registering your event handlers

The final stage is to register all your event handlers, and the EventsRunner from the GenericEventRunner library. This is done using the extension method called RegisterGenericEventRunner. There are two signatures for this method. Both need an array of assemblies that it needs to scan to find your BeforeSave/AfterSave event handlers, but one takes a first parameter of type IGenericEventRunnerConfig by which you can change the GenericEventRunner default configuration. Here is an example in ASP.NET Core without a config.

public void ConfigureServices(IServiceCollection services)
{
    //… other service registrations left out
       services.RegisterGenericEventRunner(
                Assembly.GetAssembly(typeof(OneOfMyEventHandlers)));
}

NOTES:

  • You can provide multiple assemblies to scan.
  • If you don’t provide any assemblies it will scan the calling assembly.
  • If its scan doesn’t find any AfterSave event handlers then it sets the NotUsingAfterSaveHandlers config property to true (which saves time in the SaveChanges/SaveChangesAsync).

NOTE: If you send an event that hasn’t got a registered handler then you will get a GenericEventRunnerException at run time.

There are two ways to configure GenericEventRunner and the event handlers at startup.

  1. You can provide a GenericEventRunnerConfig class as the first parameter to RegisterGenericEventRunner. With this you can change the default setting of various parts of GenericEventRunner (see the config class for what features it controls).
  2. There is an EventHandlerConfig attribute which you can add to an event handler class. From this you can set the lifetime of the handler. The default is transient.

NOTE: The ability to change the lifetime of an event handler is there in case you need to communicate with an event handler in some way, e.g. to check that an AfterSave event handler has run properly. In this case you could set the event handler’s lifetime to “Scoped” and use DI to inject the same handler into your code. (This is advanced stuff! – be careful).
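Putting those two configuration options together, a sketch might look like this (the exact spelling of the attribute’s lifetime property is my assumption – check the library source for the precise form):

```csharp
// 1. Change the default config via the first parameter
var config = new GenericEventRunnerConfig
{
    // e.g. keep running all BeforeSave handlers to collect every error
    StopOnFirstBeforeHandlerThatHasAnError = false
};
services.RegisterGenericEventRunner(config,
    Assembly.GetAssembly(typeof(OneOfMyEventHandlers)));

// 2. Change an event handler's lifetime via the attribute
// (property name assumed for illustration)
[EventHandlerConfig(HandlerLifetime = ServiceLifetime.Scoped)]
public class OrderReadyToDispatchAfterHandler
    : IAfterSaveEventHandler<OrderReadyToDispatchEvent>
{
    public void Handle(EntityEvents callingEntity,
        OrderReadyToDispatchEvent domainEvent)
    { /* ... */ }
}
```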

5. Unit Testing applications which use GenericEventRunner

I recommend unit testing your events system, because if you haven’t provided an event handler you will get a runtime exception. Setting up the system to test events is a little complex because GenericEventRunner uses dependency injection (DI). I have therefore built some code you might find useful in unit tests.

There is a class called SetupToTestEvents in the GenericEventRunner’s Test assembly that contains an extension method called CreateDbWithDiForHandlers. This registers your event handlers and returns an instance of your DbContext, with the required EventsRunner, to use in your unit tests. Here is an example of how you would use it in a unit test.

[Fact]
public void TestOrderCreatedHandler()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<ExampleDbContext>();
    using (var context = options.CreateDbWithDiForHandlers
        <ExampleDbContext, OrderCreatedHandler>())
    {
        context.Database.EnsureCreated();
        context.SeedWithTestData();

        var itemDto = new BasketItemDto
        {
            ProductName = context.ProductStocks.OrderBy(x => x.NumInStock).First().ProductName,
            NumOrdered = 2,
            ProductPrice = 123
        };

        //ATTEMPT
        var order = new Order("test", DateTime.Now, new List<BasketItemDto> { itemDto });
        context.Add(order);
        context.SaveChanges();

        //VERIFY
        order.TotalPriceNoTax.ShouldEqual(2 * 123);
        order.TaxRatePercent.ShouldEqual(4);
        order.GrandTotalPrice.ShouldEqual(order.TotalPriceNoTax * (1 + order.TaxRatePercent / 100));
        context.ProductStocks.OrderBy(x => x.NumInStock).First().NumAllocated.ShouldEqual(2);
    }
} 

The things to highlight are:

  • The SqliteInMemory.CreateOptions line creates your database options. In this case I am using a method in my EfCore.TestSupport library to create an in-memory Sqlite database, but it could be any type of database.
  • The CreateDbWithDiForHandlers extension method needs two generic types:
    • TContext: This is your DbContext class.
    • THandler: This should be one of your event handlers. It is used to find the assembly that your event handlers are in, so that GenericEventRunner can scan it and register them in DI. (It also registers any event handlers in the executing assembly – that allows you to add extra handlers for unit testing.)

The CreateDbWithDiForHandlers extension method has some useful optional parameters – have a look at the code to see what they provide.

NOTE: I didn’t include the SetupToTestEvents class in the EfCore.GenericEventRunner library because it uses code from my EfCore.TestSupport library. You will need to copy it by hand from the GitHub repo into your unit test assembly.

Logging

The GenericEventRunner Event Runner logs each event handler before it is run. The log message starts with a prefix:

  • First letter: ‘A’ for AfterSave event handlers and ‘B’ for BeforeSave event handlers
  • Then a number: this shows which loop it was run in, e.g. 1, 2, 3 etc. (remember, BeforeSave handlers can create new events, which need another loop around to find them). This is generally useful to see which events fire when.

Here is an example from one of my GenericEventRunner unit tests. Notice that the last log message starts with “B2”, which means it must have been triggered by a change caused by one of the event handlers that ran in the first (i.e. “B1”) event loop.

"B1: About to run a BeforeSave event handler …OrderCreatedHandler."
"B1: About to run a BeforeSave event handler …AllocateProductHandler."
"B2: About to run a BeforeSave event handler …TaxRateChangedHandler."

Also, the unit test CreateDbWithDiForHandlers method allows you to capture logs, which can be useful in testing that events handlers run at the correct time.

Conclusions

Well done for getting here! It’s a long article, but I hope it told you “why” as well as “how” to use the EfCore.GenericEventRunner library. If you are thinking of using this library I recommend inspecting/cloning the EfCore.GenericEventRunner GitHub repo and looking at the examples and the unit tests to see how it works.

While this library is new, I have been working on a similar system in my client’s application for some time. That means the features and approach of this library have been proven in the real world. In fact, the AfterSave events were added to help deal with some issues that cropped up in the client’s original implementation.

The third article in this series is “A technique for building high-performance databases with EF Core”, which uses this event library to improve the performance of my “book app”. I built a similar version in chapter 13 of my book “Entity Framework Core in Action”, but in my opinion the new version that uses events is much better (you can see the original article from the book called “Entity Framework Core performance tuning – a worked example”).

Happy coding.

A robust event-driven architecture for using with Entity Framework Core

Last Updated: December 22, 2019 | Created: November 29, 2019

The term “event-driven architecture” covers a wide range of distributed systems like MicroServices, Serverless etc. And this chapter from the book Software Architecture Patterns says event-driven architectures have many useful features, but they are hard to build and test. One reason for that is that designing a system that is robust, i.e. works in all situations, requires significant thought and code.

I came across a simple, but useful application of events to trigger code that accesses the database via EF Core. While this is a very small subset of the whole event-driven space I think it is very useful, especially as it is inherently robust. In fact, I like the approach so much I have built a library called EfCore.GenericEventRunner so I can easily add this feature to any projects I work on, and that’s described in the next article.

The articles in this series are:

TL;DR; – summary

  • This article is about a specific event-driven design that works with EF Core. It is limited in what it does (mainly database updates), but it does that in a robust way.
  • The advantages for this design are:
    • The design is robust by design. It saves the original data that caused the events with the data updated by the event handler together in one transaction.
  • The disadvantages for this design are:
    • More difficult to follow the application flow
    • Adds (some) complexity to your application.
  • The rest of the article gives examples of how the event-driven design works with EF Core.

Setting the Scene – why is this EF Core event system useful?

While working for a client I came across an event-driven design that had initially come from a Jimmy Bogard article, but the client had extended it. There were some issues which I needed to help them fix, but I could see how useful it was. Let me explain what I liked (and disliked) about this approach.

First benefit: Separation of concerns.

My client’s company provides services across the USA, and where they deliver their service affects the tax they need to charge. So, when you set, or change, the location you have to recalculate the tax for this job, or any job linked to that location. But for me, changing the location’s address is a very different thing from recalculating the job’s tax.

From a design point of view, changing the location, which can be done by a simple update to the State and County properties, doesn’t naturally fit with recalculating the tax on an invoice. And the same location might be used in multiple invoices, which turns the simple setting of State and County into a much more complex piece of business logic.

Their solution is to kick off a “location State/County” event any time a location’s State/County properties change, or a “job location change” event if a different location is selected for a job. It is then down to the event handlers for these two events to recalculate the tax for that job, or jobs. See the two different use cases below, with the events in red, the event handlers in orange, and classes mapped to the database in blue.

So, in effect we have separated the “set/change location” logic from the “calculation of the tax” logic. That might seem small, but to me with my design hat on that is very good “separation of concerns”. And with my developer hat on it makes the setting of the location really simple, leaving the complex tax calculation to be run separately.

In fact, the client has loads of these linked business rules which benefit from using events like this.

Second benefit: It’s robust by design

It is really easy to design applications when everything is normal and there are no competing updates. But designing systems that can handle error situations, concurrency issues, and database connection faults is way harder. For that reason, I’m always looking for designs that handle these errors by design, and this event approach does that for everything but some concurrency issues.

As you will see later in this article, the data that triggered an event (in the last example the location) and any data that changed (in the last example the TaxRate) are saved together in one transaction by a single call to EF Core’s SaveChanges method. That is important because either all the data is saved, or no data is saved. That means the original data and the data from the events will never get out of step.

And if you do have an exception on SaveChanges, such as a DbUpdateConcurrencyException, all the data is now in the DbContext and the events have been cleared. This means you can “fix” the problem and safely call SaveChanges again and it will save the original data and event-generated data to the database, with no extra event calls.
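That “fix and retry” flow can be sketched like this. The concurrency “fix” shown here, taking the database values as the new original values so your changes win, is just one possible resolution policy; the important point is that the second SaveChanges call triggers no extra events.

```csharp
// A sketch of retrying SaveChanges after a concurrency conflict.
// The events have already run, so their results are tracked in the
// DbContext and a second SaveChanges call won't re-run any handlers.
try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    foreach (var entry in ex.Entries)
    {
        // "Fix" the conflict - here we refresh the original values from
        // the database so our changes win (client-wins vs database-wins
        // is an application-level decision)
        entry.OriginalValues.SetValues(entry.GetDatabaseValues());
    }
    context.SaveChanges(); // safe to call again - no extra event calls
}
```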

Third benefit: This event-driven design fits with Domain-Driven Design

Domain-Driven Design (DDD) is, to quote the dddcommunity.org site, “an approach to developing software for complex needs by deeply connecting the implementation to an evolving model of the core business concepts”. I for one try to use a DDD approach in anything I do.

Some DDD users advocate that the code inside the entity class should not know anything about the database (I’m not in that camp, but I respect their views). That means you need to do all the database work before you go to the entity class. But the event-driven design I am describing gives you another option – you can send an event to a handler that can access the database for the class.

Taking the example of the location affecting the tax, using an event-driven approach allows the class to ask an external service to calculate the tax (which in this case needs to access the database). I think that keeps the separation of database from entity class while handling the business rule in an efficient and fast way.

Downsides of using the event-driven design.

It’s important to think about the downsides of this event-driven design as well, as no extra feature comes without a price.

First downside: using events can make it difficult to understand the code

The first problem is described by Martin Fowler in an excellent article called What do you mean by “Event-Driven”?. He says “The problem is that it can be hard to see such a flow as it’s not explicit in any program text.”

For instance, in the example above there were two types of events (“location State/County” and “job location change”), but what do those events call? You can’t use the VS/VSCode “Go to Definition” (F12) feature to go to the handler code because it’s hidden behind layers of interfaces and DI. That can make things hard to follow.

My advice is, if you have business code where all the business rules sensibly belong together, then write one set of code and don’t use events. Only use events where it makes sense, like decoupling the setting of the location from the recalculation of the tax rate. I also suggest the names of the event and the handler start with the same prefix, e.g. LocationChangeEvent and LocationChangeHandler respectively. That makes it easier to work out what code is called.

Second downside: makes your application more complicated

Adding event handling to an application isn’t simple, and it changes your DbContext, especially around SaveChanges/SaveChangesAsync. Complexity is bad, as it makes the application harder to understand and test. You have to weigh up the usefulness of the event-driven design against the extra complexity it adds to your application.

NOTE: In the next article I describe my EfCore.GenericEventRunner library, which provides you with a pre-built event-driven system. You can read that article and see if you think it is useful.

Implementing this in EF Core

I have spent a lot of time on the pros and cons of the approach so now we look at how it works. I start with a diagram which shows the three stages of the event handling.

This example gives a good idea of what is possible and the next three sections show the code you need at each stage.  

Stage 1 – adding event triggers to your entity classes

An event is triggered in an entity class that has been read in and is tracked, i.e. it wasn’t loaded with a query that has .AsNoTracking in it. This is because the event runner only looks for events in tracked entities.

You can send an event from anywhere, but the typical approach is to trigger an event when something changes. One way is to catch the setting of a property by using a backing field and testing if something changes in the property setter. Here is an example.

private string _county;
public string County
{
    get => _county;
    private set
    {
        if (value != _county)
            AddEvent(new LocationChangeEvent(value));
        _county = value;
    }
}

The things to note are:

  • Line 1: I’m using a private field so that I can add my own code in the property setter. Converting a normal property to this form is handled by EF Core via a backing field and the name of the column in the table is unchanged. NOTE: In EF Core 3 and above when EF Core loads data it puts it in the private field, not via the setter – that’s good otherwise the load could cause an event (before EF Core 3 the default was to set via the property, which would have generated an event).
  • Lines 7 & 8: This is the code that triggers an event if the County value has changed.
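If it helps to see the plumbing behind AddEvent, here is a minimal, self-contained sketch. IDomainEvent, EntityEvents, and LocationChangeEvent here are simplified stand-ins for the library’s real types, not the actual EfCore.GenericEventRunner code.

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IDomainEvent { }

public abstract class EntityEvents
{
    private readonly List<IDomainEvent> _events = new List<IDomainEvent>();

    // Called from inside the entity class, e.g. in a property setter
    protected void AddEvent(IDomainEvent domainEvent) => _events.Add(domainEvent);

    // Returns the pending events and wipes them, so no event can run twice
    public ICollection<IDomainEvent> GetEventsThenClear()
    {
        var result = _events.ToList();
        _events.Clear();
        return result;
    }
}

public class LocationChangeEvent : IDomainEvent
{
    public LocationChangeEvent(string newCounty) => NewCounty = newCounty;
    public string NewCounty { get; }
}
```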

If you are using a Domain-Driven Design (DDD) then you can put the AddEvent call in your DDD method or constructors. Here is an example from the example code in the EfCore.GenericEventRunner code.

public Order(string userId, DateTime expectedDispatchDate, ICollection<BasketItemDto> orderLines)
{
    UserId = userId;
    DispatchDate = expectedDispatchDate;
    AddEvent(new OrderCreatedEvent(expectedDispatchDate, SetTaxRatePercent));

    var lineNum = 1;
    _LineItems = new HashSet<LineItem>(orderLines
        .Select(x => new LineItem(lineNum++, x.ProductName, x.ProductPrice, x.NumOrdered)));

    TotalPriceNoTax = 0;
    foreach (var basketItem in orderLines)
    {
        TotalPriceNoTax += basketItem.ProductPrice * basketItem.NumOrdered;
        AddEvent(new AllocateProductEvent(basketItem.ProductName, basketItem.NumOrdered));
    }
}
private void SetTaxRatePercent(decimal newValue)
{
    TaxRatePercent = newValue;
}

The things to note are:

  • Line 5: The event added here is given a method called SetTaxRatePercent (see lines 18 to 21) which allows the event handler to set the TaxRatePercent property, which has a private setter. I do this because I am using a DDD design where all the properties are read-only, but I hand the event handler, via the event, a method to set that property.
  • Line 15: I want to allocate each item of stock for this order, and to do that I must send over the information in the event. That’s because the Order isn’t in the database yet, so the event handler can’t read the database to get it.

NOTE: If you trigger an event in a constructor, make sure it’s not the constructor that EF Core will use when loading the data – check the EF Core documentation on how this works.

Stage 2 – Before SaveChanges

The EfCore.GenericEventRunner library overrides the base SaveChanges/SaveChangesAsync and has an event runner that finds all the events before SaveChanges/SaveChangesAsync is called. It does this by looking for all the tracked entities (i.e. any classes loaded, Added, Attached, etc.) that have inherited EfCore.GenericEventRunner’s EntityEvents class. This class contains methods to get the events and then wipe them (I do that to ensure an event isn’t run twice).
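To give you a feel for the shape of that, here is a much-simplified outline of what an overridden SaveChanges might do. It assumes an EntityEvents base class with a GetEventsThenClear method; the names are illustrative rather than the library’s exact API, and the real event runner also handles statuses, repeated event generation, and the BeforeSave/AfterSave split.

```csharp
public override int SaveChanges()
{
    // find every tracked entity that inherits the EntityEvents base class
    var entitiesWithEvents = ChangeTracker.Entries()
        .Select(e => e.Entity)
        .OfType<EntityEvents>()
        .ToList();

    foreach (var entity in entitiesWithEvents)
    {
        // take the pending events and wipe them so they can't run twice
        foreach (var domainEvent in entity.GetEventsThenClear())
        {
            // find the IBeforeSaveEventHandler<TEvent> for this event's
            // type via DI and run it (not shown) - handlers may add more
            // events, which the real runner loops round to pick up
        }
    }
    return base.SaveChanges();
}
```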

NOTE: To make it simpler to understand I talked about “events”, but in fact there are two types of events in EfCore.GenericEventRunner: the BeforeSave and the AfterSave events, which run before or after the call to SaveChanges/ SaveChangesAsync respectively. I will explain why I added the AfterSave events in the next article.

(Before) Handlers have to implement the following interface, where the T part is the type of event the handler can process.

public interface IBeforeSaveEventHandler<in T> where T : IDomainEvent
{
    IStatusGeneric Handle(EntityEvents callingEntity, T domainEvent);
}

Here is an example handler for working out the Tax value

public class OrderCreatedHandler : IBeforeSaveEventHandler<OrderCreatedEvent>
{
    private readonly TaxRateLookup _rateFinder;

    public OrderCreatedHandler(ExampleDbContext context)
    {
        _rateFinder = new TaxRateLookup(context);
    }

    public IStatusGeneric Handle(EntityEvents callingEntity, 
        OrderCreatedEvent domainEvent)
    {
        var tax = _rateFinder.GetTaxRateInEffect(domainEvent.ExpectedDispatchDate);
        domainEvent.SetTaxRatePercent(tax);

        return null;
    } 
}

The EfCore.GenericEventRunner library has an extension method called RegisterGenericEventRunner which scans the assemblies you provide to find all the handlers that have the IBeforeSaveEventHandler (and IAfterSaveEventHandler) interfaces. You should put this in your start-up code where the other dependency injection (DI) items are registered.

In the overridden SaveChanges/ SaveChangesAsync methods an event runner looks for event handlers in the DI services that match the full handler + event type. It then runs each event handler with the event data.

NOTE: I am not covering the inner workings of the event handler here as I want to give you a good overview of the approach.  Suffice to say there is a lot going on in the event handler.

Stage 3 – Run SaveChanges

The final stage is saving the data to the database. It’s simple to do because EF Core does all the complex stuff. SaveChanges inspects all the tracked entities and works out what State each entity is in: Added, Modified, Deleted, or Unchanged. It then builds the database commands to update the database.
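If you want to see the State that EF Core has worked out, you can inspect the tracked entities via the ChangeTracker. This little debugging snippet is my own addition, not part of the event system:

```csharp
// List every tracked entity and the State EF Core has assigned to it,
// just before SaveChanges builds its database commands
foreach (var entry in context.ChangeTracker.Entries())
{
    Console.WriteLine($"{entry.Entity.GetType().Name}: {entry.State}");
}
```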

As I said earlier, the important thing is that the original data and the new data added by the event handlers are saved together in one transaction. That means you can be sure all the data was written out or, if there was a problem, that nothing was written at all.

Conclusions

I have described an event-driven design which is very limited in its scope: it focuses on updating database data via EF Core. This approach isn’t a “silver bullet” that does everything, but I think it is a valuable tool in building applications. I expect to still be using my normal business rule approach (see this article on how I build my business logic), but this event-driven design now allows me to access external services (i.e. event handlers) while inside the entity class, which is something I have wanted to be able to do for a while.

I spent some time describing the design and its benefits because it wasn’t obvious to me how useful this event-driven design was until I saw the client’s system. Also, I felt it was best to describe how it works before describing the EfCore.GenericEventRunner library, which I do in the next article.

I want to recognise Jimmy Bogard for his original article on an event-driven approach that my client used. I find Jimmy’s articles really good as he, like me, writes from his work with clients. I also want to thank my client for exposing me to this approach in a real-world system.

NOTE: My client is aware that I am building the EfCore.GenericEventRunner library, which is a complete rewrite done in my own time. This library also solves one outstanding problem in their implementation, so they benefit too.

Happy coding.

An in-depth study of Cosmos DB and the EF Core 3.0 database provider

Last Updated: January 17, 2020 | Created: November 4, 2019

This article looks at the capabilities of Cosmos DB database when used via the new EF Core 3.0 database provider. It looks at various differences between Cosmos DB, which is a NoSQL database, and a SQL database. It also looks at the limitations when using the EF Core 3.0 Cosmos DB provider.

I have been waiting for this release for a while because I think both SQL and NoSQL database have a role in building high performance applications. I have a CQRS approach which uses both types of databases and I wanted to update it (see section “Building something more complex with Cosmos DB” for more info). But when I started using Cosmos DB and the new EF Core database provider, I found things were more different than I expected!

This article forms part of the series I am writing to update my book, “Entity Framework Core in Action”, with the changes in EF Core 3. The list of articles in this series is:

NOTE: If you’re not sure of the differences between SQL (relational) and NoSQL databases, I recommend googling “sql vs nosql difference” and reading some of the articles. I found this article gave a good, high-level list of differences at the start, but the rest of it was a bit out of date.

TL;DR – summary

A quick introduction to using EF Core’s Cosmos DB provider

I thought I would start with a simple example that demonstrates how to use the Cosmos DB database provider in EF Core 3. In this example I have a Book (my favourite example) with some reviews. Here are the two classes and the DbContext.

public class CosmosBook
{
    public int CosmosBookId { get; set; }
    public string Title { get; set; }
    public double Price { get; set; }
    public DateTime PublishedDate { get; set; }

    //----------------------------------
    //relationships 

    public ICollection<CosmosReview> Reviews { get; set; }
}
[Owned]
public class CosmosReview
{
    public string VoterName { get; set; }
    public int NumStars { get; set; }
    public string Comment { get; set; }
}
public class CosmosDbContext : DbContext
{
    public DbSet<CosmosBook> Books { get; set; }

    public CosmosDbContext(DbContextOptions<CosmosDbContext> options)
        : base(options) { }
}

If you have used SQL databases with EF Core, then the code above should be very familiar, as it’s pretty much the same as for a SQL database. The unit test below is also similar to a SQL unit test, but I point out a few things at the end that are different for Cosmos DB.

[Fact]
public async Task TestAddCosmosBookWithReviewsOk()
{
    //SETUP
    var options = this.GetCosmosDbToEmulatorOptions<CosmosDbContext>();
    using var context = new CosmosDbContext(options);
    await context.Database.EnsureDeletedAsync();
    await context.Database.EnsureCreatedAsync();

    //ATTEMPT
    var cBook = new CosmosBook
    {
        CosmosBookId = 1,      //NOTE: You have to provide a key value!
        Title = "Book Title",
        PublishedDate = new DateTime(2019, 1,1),
        Reviews = new List<CosmosReview>
        {
            new CosmosReview{Comment = "Great!", NumStars = 5, VoterName = "Reviewer1"},
            new CosmosReview{Comment = "Hated it", NumStars = 1, VoterName = "Reviewer2"}
        }
    };
    context.Add(cBook);
    await context.SaveChangesAsync();

    //VERIFY
    (await context.Books.FindAsync(1)).Reviews.Count.ShouldEqual(2);
}

Here are some comments on certain lines that don’t follow what I would do with a SQL database.

  • Line 5: The GetCosmosDbToEmulatorOptions method comes from my EfCore.TestSupport library and sets up the Cosmos DB connection to the Azure Cosmos DB Emulator – see the “Unit Testing Cosmos DB” section at the end of this article.
  • Lines 7 and 8: This creates an empty database so that my new entry doesn’t clash with any existing data.
  • Line 13: This is the first big thing with Cosmos DB – it won’t create unique primary keys. You have to provide a unique key, so most people use GUIDs.
  • Lines 23 and 26: Notice I use async versions of the EF Core commands. There is a warning to only use async methods in the EF Core Cosmos DB documentation.

And finally, here is what is placed in the Cosmos DB database:

{
    "CosmosBookId": 1,
    "Discriminator": "CosmosBook",
    "Price": 0,
    "PublishedDate": "2019-01-01T00:00:00",
    "Title": "Book Title",
    "id": "CosmosBook|1",
    "Reviews": [
        {
            "Comment": "Great!",
            "Discriminator": "CosmosReview",
            "NumStars": 5,
            "VoterName": "Reviewer1"
        },
        {
            "Comment": "Hated it",
            "Discriminator": "CosmosReview",
            "NumStars": 1,
            "VoterName": "Reviewer2"
        }
    ],
    "_rid": "+idXAO3Zmd0BAAAAAAAAAA==",
    "_self": "dbs/+idXAA==/colls/+idXAO3Zmd0=/docs/+idXAO3Zmd0BAAAAAAAAAA==/",
    "_etag": "\"00000000-0000-0000-8fc9-8720a1f101d5\"",
    "_attachments": "attachments/",
    "_ts": 1572512379
}

Notice that there are quite a few extra lines of data that EF Core/Cosmos DB adds to make this all work. Have a look at this link for information on what these extra properties do.

Building something more complex with Cosmos DB

There are plenty of simple examples of using the EF Core Cosmos DB database provider, but in my experience it’s not until you try to build a real application that you hit the problems. In my book, “Entity Framework Core in Action”, I built a CQRS architecture database in chapter 14 using both a SQL and a NoSQL database, and as Cosmos DB wasn’t available I used another NoSQL database, RavenDB. I did this to get better read performance on my example book sales web site.

I wanted to redo this two-database CQRS architecture using EF Core’s Cosmos DB provider. I had a go with a pre-release Cosmos DB provider in EF Core 2.2, but the EF Core 2.2 Cosmos DB provider had (big) limitations. Once EF Core 3 came out I started rebuilding the two-database CQRS architecture with its stable release of the Cosmos DB provider.

NOTE: The new version is in the master branch of the EfCoreSqlAndCosmos repo, with the older version in the branch called NetCore2_2Version.

This example application, called EfCoreSqlAndCosmos, is a site selling books, with various sorting, filtering, and paging features. I have designed the application so that I can compare the performance of a SQL Server database against a Cosmos DB database, both being queried via EF Core 3. Here is a picture of the application to give you an idea of what it looks like – you swap between the SQL Books page (shown) and the NoSQL Books page accessed by the “Books (NoSQL)” menu item.

NOTE: This code is open source and is available on GitHub. You can clone the repo and run it locally. It uses localdb for the SQL Server database and Azure Cosmos DB Emulator for the local Cosmos DB database.

What I found when I rebuilt my EfCoreSqlAndCosmos application

It turns out both Cosmos DB and the EF Core Cosmos DB provider have some changes/limitations compared with what I am used to with a SQL (relational) database. Some of the changes are because the Cosmos DB database has a different focus – it’s great at scalability and performance but poor when it comes to complex queries (that’s true of most NoSQL databases too). But also, the EF Core 3.0 Cosmos DB provider has quite a few limitations that impacted what I could do.

This article has therefore morphed into a look at what you can, and can’t, do with the EF Core 3 Cosmos DB provider, using my book app as an example. What I’m going to do is walk you through the obstacles I encountered while building my book app, and explain what the problem was and how I got around it.

Issue #1: No unique key generation or constraints in Cosmos DB

UPDATE EF CORE 3.1

In EF Core 3.1 it now throws a CosmosException with an error message saying “Resource with specified id or name already exists.”

In the initial example I showed you that Cosmos DB doesn’t create primary keys in the way that SQL databases can. That means you are likely to use something like a GUID, or a GUID string, as the key when you add to a Cosmos DB.
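For instance, a class designed for Cosmos DB might set its own GUID string key on creation. This is a hypothetical class to show the pattern, not one from the example app:

```csharp
public class CosmosOrder
{
    // Cosmos DB won't generate a unique key for you, so set one yourself;
    // a GUID string is effectively guaranteed not to clash
    public string CosmosOrderId { get; set; } = Guid.NewGuid().ToString();

    public string CustomerName { get; set; }
}
```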

You also don’t get the sort of checking of primary/foreign keys that you have in SQL databases. But you do get an error if a key is already in use. My TestAddCosmosBookAddSameKey unit test shows that if you Add (i.e. try to create) a new entry with the same key as an existing entry in the database, then you get a CosmosException.

I quite like SQL constraints because they ensure that my database keeps its referential integrity, which means that when a change is applied to a SQL database all the primary keys and foreign keys are checked against the database constraints. But Cosmos DB, like most NoSQL databases, has no constraints like a SQL database. This is another feature that NoSQL databases drop to make the database simpler and therefore faster.

Issue #2: Counting number of books in Cosmos DB is SLOW!

In my original application I had a pretty standard paging system, with selection of page number and page size. This relies on being able to count how many books there are in the filtered list. But with the changeover to Cosmos DB this didn’t work very well at all.

First, the EF Core 3.0 Cosmos DB provider does not support aggregate operators, e.g. Average, Sum, Min, Max, Any, All, and, importantly for me, Count. There is a way to implement Count, but it’s really slow. I thought this was a real limitation, until I realised something about Cosmos DB – you really don’t want to count entries if you can help it – let me explain.

In a SQL database counting things is very fast (100,000 books takes about 20 ms on my PC). But in Cosmos DB counting means accessing the data, and you are a) charged for all the data you access and b) have to pay for the level of throughput you want to provision. Both of these are measured in RU/second, with the starting point being 400 RU/second for a Cosmos DB database on Azure.

The Azure Cosmos DB Emulator tells me that to count 100,000 of my book class using the best Count approach (i.e. Cosmos DB SQL: SELECT VALUE Count(c) FROM c) takes 200 ms. (not rate limited) and uses 1,608 RU/second – that is a costly query to provision for!

NOTE: The current EF Core solution, noSqlDbContext.Books.Select(_ => 1).AsEnumerable().Count() is worse. It takes 1 second to read 100,000 books.

So, the learning here is that some things in Cosmos DB are faster than SQL, and some things are slower, and more costly, than SQL. And you have to handle that.

This made me change the NoSQL side of the EfCoreSqlAndCosmos application to not use normal paging but use a Prev/Next page approach (see picture).
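One common way to build a Prev/Next system without a Count is to ask for one row more than the page size; getting that extra row back tells you a next page exists. This is a general sketch of the technique, not the exact code from the EfCoreSqlAndCosmos repo:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Paging
{
    // pageNum is zero-based; "Prev" is available whenever pageNum > 0
    public static (List<T> page, bool hasNext) GetPage<T>(
        IQueryable<T> orderedQuery, int pageNum, int pageSize)
    {
        var rows = orderedQuery
            .Skip(pageNum * pageSize)
            .Take(pageSize + 1)  // one extra row, only to test for a next page
            .ToList();

        return (rows.Take(pageSize).ToList(), rows.Count > pageSize);
    }
}
```

Because it never counts the whole collection, the cost stays proportional to one page of data rather than the entire database.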

You might notice that Amazon uses a limited next/previous and says things like “1-16 of over 50,000 results for…” rather than count the exact number of entries.

Issue #3: Complex queries can be a problem in Cosmos DB

In my “filter by year” option I need to find all the years where books were published, so that the user can pick the year they are looking for from a dropdown. When I ran the code taken from my SQL example I got an exception with a link to this EF Core issue, which says that Cosmos DB has some limitations on queries.

See the two code snippets below from my filter dropdown code to see the difference between the Cosmos DB (left) and SQL Server (right) code. Notice that the Cosmos DB query needed the second part of the query to be done in the application.
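As an illustration of the pattern (not the exact repo code, and assuming the YearPublished property described in Issue #5), the Cosmos DB version has to switch to client-side evaluation partway through the query:

```csharp
// Cosmos DB runs the simple Select; everything after AsEnumerable()
// (Distinct, OrderByDescending) runs in the application's memory
var years = context.Books
    .Select(x => x.YearPublished)
    .AsEnumerable()
    .Distinct()
    .OrderByDescending(year => year)
    .ToList();
```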

The learning from this example is that Cosmos DB doesn’t support complex queries. In fact, the general rule for NoSQL databases is that they don’t have the range of query capabilities that a SQL database has.

Issue #4: You can’t sort on a mixture of nullable and non-nullable entries

My SQL query to show books contains a LINQ command to work out the average review stars for a book. SQL databases return null if there aren’t any reviews for a book (see the note at the end of this section for the technicalities of why that is).

So, when I built my Cosmos DB version, I naturally added a nullable double (double?) to match. But when I tried to order by the reviews it threw an exception because I had a mix of nulls and numbers. Now, I don’t know if this is a bug in EF Core (I have added an issue to the EF Core code) or an overall limitation.

UPDATE: It turns out it’s a bug in Cosmos DB which hasn’t been fixed yet. I suspect the fix won’t get into EF Core 3.1, but hopefully it will be in EF Core 5.

The solution for my application was easy – I just set a null value to 0, as my ReviewsCount property tells me if there are any reviews. But be aware of this limitation if you’re using nulls, especially strings. PS. Saving/returning nulls works fine, and Where clauses work too – it’s just OrderBy that has a problem.
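In class terms the workaround looks something like this (a sketch, assuming the ReviewsCount property mentioned above):

```csharp
public class CosmosBook
{
    // ...other properties left out...

    // Non-nullable: 0 when there are no reviews, so an OrderBy on this
    // property never sees a mix of nulls and numbers
    public double ReviewsAverageVotes { get; set; }

    // Lets callers tell "no reviews" apart from a genuine average of zero
    public int ReviewsCount { get; set; }
}
```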

NOTE: In my SQL version I use the LINQ command … p.Reviews.Select(y => (double?)y.NumStars).Average(), which converts into the SQL command AVG(CAST([r1].[NumStars] AS float)). The (double?) cast is necessary because the SQL AVG command returns null if there are no rows to average over.

Issue #5. Missing database functions

In my SQL filter by year I have the following piece of code:

var filterYear = int.Parse(filterValue);      
return books.Where(x => x.PublishedOn.Year == filterYear 
     && x.PublishedOn <= DateTime.UtcNow);

That returns all the books in the given year, but excludes books not yet published. I ran into two issues with the Cosmos DB version.

Firstly, the PublishedOn.Year part of the query gets converted into DATEPART(year, [b].[PublishedOn]) in a SQL database, but Cosmos DB doesn’t have that capability. So, to make the filter-by-year feature work I had to add an extra property called “YearPublished” to hold the year.

The second issue is that DateTime.UtcNow gets converted to GETUTCDATE() in a SQL database. Now, Cosmos DB does have a GetCurrentDateTime() method, but at the moment EF Core 3.0 does not support that, or many other Cosmos database functions (subscribe to this EF Core issue to track progress on this).

All is not lost. By adding a new int property called “YearPublished” to my CosmosBook and getting the UtcNow value on the client, I got the query to work – see below:

var now = DateTime.UtcNow;
var filterYear = int.Parse(filterValue);      
return books.Where(x => x.YearPublished == filterYear 
     && x.PublishedOn <= now);

This is another example of the different query features between SQL databases (which are very similar because of the definition of the SQL language) and the range of different NoSQL databases, which don’t have any standards to follow. On the plus side, Cosmos DB automatically indexes every property by default, which helps my application.

Issue #6. No relationships (yet)

We now have a stable, usable Cosmos DB database provider in EF Core 3.0, but it’s certainly not finished. The EF Core Azure Cosmos DB Provider Limitations page in the EF Core docs says (last updated 09/12/2019):

Temporary limitations

  • Even if there is only one entity type without inheritance mapped to a container it still has a discriminator property.
  • Entity types with partition keys don’t work correctly in some scenarios
  • Include calls are not supported
  • Join calls are not supported

Looking at the last two items, that means, for now, you will have to put all the data into the one class using [Owned] types (see the example I did right at the beginning). That’s OK for my example application, but it does cut out a number of options for now. It will be interesting to see what changes are made to the Cosmos DB provider in EF Core 5 (due out in November 2020).

Issue #7. Permanent limitation due to Cosmos DB design

The Cosmos DB database has certain limitations compared with what you are used to with a SQL database. The two big ones are:

1. No migrations – can cause problems!

Cosmos DB, like many NoSQL databases, saves a JSON string, which Cosmos DB calls a “Document” – I showed you that in the first example. Cosmos DB doesn’t control the content of that Document, it just reads or writes it, so there’s no ‘migration’ feature that you can use to change all the Documents in a database. And this can cause problems you need to be aware of!

For instance, say I updated my CosmosBook class to add a new property of type double called “Discount”. If you then tried to read the ‘old’ data, i.e. data that was written to the database before you added the “Discount” property, you would get an exception. That’s because EF Core expects a property called “Discount” to fill in the non-nullable property in your class.

As you can see, this means you need to think about migrations yourself: either run special code to add a default value for the new, non-nullable property, or make your new properties nullable.
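The nullable-property option might look like this. Discount is the hypothetical property from the example above, and DiscountOrZero is my own addition showing one way to give callers a default:

```csharp
public class CosmosBook
{
    // ...existing properties...

    // Nullable, so documents written before Discount existed still load
    public double? Discount { get; set; }

    // Treat a missing discount as zero
    public double DiscountOrZero => Discount ?? 0;
}
```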

NOTE: Taking a property out of a class is fine – the data is in the document, but it just isn’t read. When you update a document, it replaces all the old data with the new data.

2. No client-initiated transactions

Cosmos DB only provides Document-level transactions. That means it updates the whole document, or doesn’t change anything in the document. That way you can be sure the document is correct. But the clever transaction features that SQL databases provide are not in Cosmos DB (or most, but not all, NoSQL databases).

This makes some things that you can do in SQL quite hard, but if you really need transactions then you either need a NoSQL database that has that feature (a few do) or you should go back to a SQL database.

However, there are ways to implement a substitute for a transaction, but it’s quite complex. See the series called Life Beyond Distributed Transactions: An Apostate’s Implementation, by Jimmy Bogard, for a way to work around this.

UPDATE: Cosmos DB is continually changing, and in the November 2019 “what’s new” article they added some form of transactional feature to Cosmos DB (and support for GroupBy). So my comments about no transactions might change in the future.

Unit testing Cosmos DB code

I know this article is long, but I just wanted to let you know about the very useful Azure Cosmos DB Emulator for testing and developing your Cosmos DB applications. The Emulator runs locally and stores your data locally. It also comes with a nice interface to see/edit the data.

NOTE: You could use a real Cosmos DB on Azure, but they aren’t cheap and you will need lots of databases if you want to run your unit tests in parallel.

I have added a couple of extensions to my EfCore.TestSupport library to help you set up the options for Cosmos DB accessing the Azure Cosmos DB Emulator. Here is a very simple example of a unit test using my GetCosmosDbToEmulatorOptions<T> method, with comments at the end

[Fact]
public async Task ExampleUsingCosmosDbEmulator()
{
    //SETUP
    var options = this.GetCosmosDbToEmulatorOptions<CosmosDbContext>();
    using var context = new CosmosDbContext(options);
    await context.Database.EnsureDeletedAsync();
    await context.Database.EnsureCreatedAsync();

    //ATTEMPT
    var cBook = new CosmosBook {CosmosBookId = 1, Title = "Book Title"};
    context.Add(cBook);
    await context.SaveChangesAsync();

    //VERIFY
    (await context.Books.FindAsync(1)).ShouldNotBeNull();
}

Comments on certain lines.

  • Line 5: This method in my library links to your local Azure Cosmos DB Emulator and creates a database using the name of the test class. Having a name based on the test class means only the tests in this class use that database. That’s important because xUnit can run test classes in parallel and you don’t want tests in other classes writing all over the database you are using for your test.
  • Lines 7 & 8: I found deleting and then recreating a Cosmos DB database is a quick (< 2 secs on my PC) way to get an empty database. That’s much better than a SQL Server database, which can take around 8 to 10 seconds on my PC.

NOTE: The GetCosmosDbToEmulatorOptions method has a few other options/calls. You can find the full documentation about this here. Also, the EfCoreSqlAndCosmos repo has lots of unit tests you can look at to get the idea – see the Test project.

Conclusion

Wow, this article is very long and took me way longer than I expected. I really hope you find this useful. I certainly learnt a lot about what Cosmos DB can do and the state of the EF Core 3 Cosmos DB database provider.

This is the first non-preview version of the EF Core Cosmos DB database provider and many features aren’t available, but it had enough for me to use it successfully. It’s going to be very interesting to see where both the EF Core Cosmos provider and Cosmos DB will be when NET Core 5 comes out in November 2020.

As I said at the beginning of the article, I think both SQL and NoSQL databases have a role in building modern applications. Each approach has different strengths and weaknesses, and this article is about seeing what they are. But it’s not until I write the next article comparing the performance of various approaches that some of the strengths of Cosmos DB will come through.

In the meantime, I hope this article and its companion EfCoreSqlAndCosmos repo helps you in understanding and using Cosmos DB.

Happy coding!

If you have an ASP.NET Core or Entity Framework Core problem that you want help on then I am available as a freelance contractor. Please send me a contact request via my Contact page and we can talk some more on Skype.

NET Core 3 update to “Entity Framework Core in Action” book

Last Updated: January 17, 2020 | Created: October 19, 2019

With the release of EF Core 3 I wanted to provide updates to my book, “Entity Framework Core in Action”. This article covers the whole of the book and provides the updated information. But I also plan to write two more articles that go with the big, under the hood, changes in EF Core 3, they are: a) query overhaul and b) the addition of the NoSQL database, Cosmos DB, as a database provider. The list of articles in this series is:

NOTE: This article is written to supplement/update the book “Entity Framework Core in Action”. It is not a general introduction to EF Core 3 – please look at the Microsoft documentation for the changes in EF Core 3.

TL;DR; – summary

  • The release of EF Core 3 made small changes to the code you write, but there are big changes inside, i.e. query overhaul and support for NoSQL databases (Cosmos DB).
  • For the most part the “Entity Framework Core in Action” book (which covered up to EF Core 2.1) doesn’t change a lot.
  • There are 19 small changes in EF Core 3 that affect the book. In my opinion only two of them are really important.
  • The big changes in EF Core 3 which warrant their own articles are:
    • Query overhaul, where the way that LINQ is converted to database commands has been revamped. I mention this in this article but go into more detail in another article.
    • Support for NoSQL databases (Cosmos DB) which I will cover in separate articles.

List of changes by chapter

I’m now going to go through each chapter in my book and tell you what has changed. But first here is a summary:

  • Chapter 1 – 3 changes (medium, setup)
  • Chapter 2 – 1 change (important)
  • Chapter 3 – no changes
  • Chapter 4 – no changes
  • Chapter 5 – 2 changes (small, code)
  • Chapter 6 – 1 change (small, config)
  • Chapter 7 – 3 changes (small, config)
  • Chapter 8 – 1 change (optional, config)
  • Chapter 9 – 2 changes (small, rename)
  • Chapter 10 – no changes
  • Chapter 11 – 1 change (optional)
  • Chapter 12 – 2 changes (medium)
  • Chapter 13 – see a new article looking at the performance changes.
  • Chapter 14 – 1 change (medium)
  • Chapter 15 – 2 changes (small)

NOTE: There are other changes in EF Core 3.0 that I don’t list because the topic was not covered in my book. Typically these other changes are about features that are too deep to cover in a 500-page book. For a full list of all the breaking changes in EF Core 3 see this link to the Microsoft documentation.

Chapter 1 – Introduction to Entity Framework Core

This chapter introduced EF Core, described what it does, and gave a simple application that accesses a database to start. Much of this stays the same, but here are the changes:

Pages 9 to 11, Installing, creating application, and downloading example

The description uses VS2017 but if you want to use EF Core 3 you should use VS2019 instead (note: installing VS2019 should also install NET Core 3 SDK). The steps are the same but VS2019 is a bit easier.

NOTE: If you want to download an EF Core 3 version for the companion repo EfCoreInAction then I have created new branch called Chapter01-NetCore3 that you can download.

Pages 17 to 19, Reading data from the database

In listing 1.2 I introduce the EF Core method called AsNoTracking. This tells EF Core that you don’t want to update this data, which reduces the memory it takes and makes it slightly faster.

EF Core 3 adds another performance improvement by returning individual instances of each entity class, rather than returning a single instance for duplicate entities, i.e. classes of the same type and primary key (if that doesn’t make sense then read the EF Core docs on this).

Now that sounds OK, but it can produce some very subtle changes to your application, mainly in your business logic. Taking the Book/Author example in the book, if I read in two books with the same author using AsNoTracking I would get different results in EF Core 2 from EF Core 3. Here is the code and what is returned:

var entities = context.Books
    .Where(x => x.AuthorsLink.Author.Name == "Author of two books")
    .Include(x => x.AuthorsLink)
         .ThenInclude(x => x.Author)
    .ToList();
  • EF Core 2: I get two instances of the Book class and ONE instance of the Author class (because both books have the same author)
  • EF Core 3: I get two instances of the Book class and TWO instances of the Author class, one for each book.

This shouldn’t be a problem if you are reading data to show to the user, but it may be a problem in your business logic if you were relying on the instances being unique. I found it in my Seed from Production tests when I was saving production data to json.

Pages 22 to 26 – Should you use EF Core in your next project?

Much of what I said there still applies, but EF Core is much more mature and used by millions in real products. EF Core’s stability and coverage are much higher now than when I started to write this book.

Chapter 2 – Querying the database

There is a very significant under-the-hood change to the client vs. server evaluation feature in EF Core 3. I described client vs. server evaluation (pages 43 to 45) and at the end I warned you that it could produce really inefficient database accesses, and by default EF Core wouldn’t alert you to this (you could turn on an event that would throw an exception, but by default this was off).

It turns out it is pretty easy to write a LINQ query that can’t be properly translated to a database access and is instead run in software, which can be very slow. The EF Core team decided to fix this problem as part of the query overhaul by restricting client vs. server evaluation to remove these badly performing queries. In EF Core 3 you now get an InvalidOperationException thrown if the query can’t be translated into a single, valid database access command.
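As an illustration (my own sketch, not taken from the book), a query that calls a local C# method which EF Core has no way to translate into SQL compiles fine but now fails at run time:

```csharp
// A local method that EF Core cannot translate into SQL
private static bool TitleLooksInteresting(string title)
    => title.Length > 10 && title.Contains("EF");

// EF Core 2.x: silently pulled ALL the Books into memory and filtered there.
// EF Core 3.x: throws InvalidOperationException, because the Where clause
// can't be translated into a single, valid database access command.
var books = context.Books
    .Where(b => TitleLooksInteresting(b.Title))
    .ToList();
```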

See the Conclusion section at the end of this article for some more information on how to handle such an exception.

NOTE: client vs. server evaluation still works at the top level (and is very useful) – see the EF Core restricted client evaluation docs to see what works.

Chapter 3 – Changing the database content

No changes.

Chapter 4 – Using EF Core in business logic

No changes.

Chapter 5 – Using EF Core in ASP.NET Core

Obviously, the big change is using ASP.NET Core 3 which changes the code you write in ASP.NET Core’s Program and Startup classes. But all of the concepts, and nearly all of the code, in this chapter stays the same. The only code changes are:

NOTE: In the companion repo EfCoreInAction I have created a branch called Chapter05-NetCore3. In this I have updated all the projects to NetStandard2.1/netcoreapp3.0. I also swapped out AutoFac for the NetCore.AutoRegisterDi library – see this article about why I did that.

Chapter 6 – Configuring nonrelational properties

There is a small change to backing fields and how data is loaded (pages 172 and 173). In EF Core 2 it defaulted to loading data via the property, if present. In EF Core 3 it defaults to loading data via the private field, which is a better default – see this link for more on this.
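Here is a minimal sketch (my example, not from the book) of a backing field where this default matters:

```csharp
public class Book
{
    private string _title;   // backing field

    public string Title
    {
        get => _title;
        // the setter adds validation that a database load should bypass
        set => _title = value ?? throw new ArgumentNullException(nameof(value));
    }
}
// EF Core 2: loaded the data via the Title property, so the setter ran.
// EF Core 3: writes straight to the _title field by default, bypassing the setter.
```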

Chapter 7 – Configuring relationships

Pages 195 to 199 – Owned types

Owned types in EF Core 3 have changed how you map an Owned type to another table (page 198). Now you have to provide the table name as a second parameter in the Fluent API’s OwnsOne method. See this link for more information.
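For example, one way to do this in EF Core 3 (my sketch, assuming a hypothetical OrderInfo class with an owned BillingAddress property) is to set the table name inside the second parameter of OwnsOne:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // EF Core 3: configure the owned type and the table it maps to
    // in one OwnsOne call, via the second (builder) parameter
    modelBuilder.Entity<OrderInfo>().OwnsOne(
        p => p.BillingAddress,
        a => a.ToTable("BillingAddresses"));
}
```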

NOTE: Appendix B, page 470, says that EF Core 2.1 provided the [Owned] attribute, which can be used instead of Fluent API configuration. I find this a simpler way to configure a class as an Owned type than using the Fluent API, but it can’t change the table mapping.

Pages 199 to 203 – Table per hierarchy section

In the table per hierarchy section I had to configure the discriminator’s AfterSaveBehavior to Save (see bold content in listing 7.22). In EF Core 3 you cannot do that anymore because the MetaData property is read-only, but it seems that the issue which required me to set the AfterSaveBehavior property has been fixed, so I don’t need that feature.

Pages 203 to 205 – Table splitting

By request, the way that table splitting works has changed. Now the dependent part (the BookDetail class in my example) is optional. This is a slightly complicated change so I refer you to the EF Core 3 breaking change section that covers this.

Chapter 8 – Configuring advanced features and handling concurrency conflicts

There is one small improvement around user defined functions (pages 209 to 213). In listing 8.4 I said you always needed to define the Schema name when configuring the user defined functions. In EF Core 3 you don’t need to set the Schema name if you are using the default schema name.

Chapter 9 – Going deeper into the DbContext

Pages 256 to 259 – Using raw SQL commands in EF Core

The EF Core SQL commands have been renamed to make the commands safer from SQL injection attacks. There are two versions of each method: one using C#’s string interpolation and one that uses {0} and parameters, e.g.

context.Books
    .FromSqlRaw("SELECT * FROM Books WHERE Price < {0}", priceLimit)
    .ToList();
context.Books
    .FromSqlInterpolated($"SELECT * FROM Books WHERE Price < {priceLimit}")
    .ToList();

My unit tests also show that you can’t add (and don’t need) the IgnoreQueryFilters() method before an EF Core SQL command (you did need to in EF Core 2). It seems that methods like FromSqlRaw don’t have the query filter added to the SQL. That’s a nice tidy-up, as it often made the SQL command invalid, but please remember to add the query filter to your SQL if you need it.

Page 261 – Accessing EF Core’s view of the database

In EF Core 3 access to the database schema information has changed. Instead of using the method Relational you can now directly access database information with methods such as GetTableName and GetColumnName. For an example of this have a look at the DatabaseMetadata extensions in the Chapter13-Part2-NetCore3 branch.
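As a short sketch of the new metadata API (my own example, using the Book class from the book), reading table and column names directly from the model looks something like this:

```csharp
// Look up the entity type in EF Core's model of the database
var entityType = context.Model.FindEntityType(typeof(Book));

// EF Core 3 exposes the schema information via extension methods
var tableName = entityType.GetTableName();
var keyColumn = entityType.FindProperty(nameof(Book.BookId))
                          .GetColumnName();
```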

Chapter 10 – Useful software patterns for EF Core applications

No changes.

Chapter 11 – Handling database migrations

Pages 303 to 306 – Stage 1: creating a migration

In figure 11.1 I showed you how to create an IDesignTimeDbContextFactory if you have a multi-project application. But if your EF Core 3 code is in an ASP.NET Core 3 application you don’t need to do this because EF Core 3 will call CreateHostBuilder in ASP.NET Core’s Program class to get an instance of your application DbContext. That’s a small, but nice improvement.
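For reference, the sort of boilerplate you no longer need in an ASP.NET Core 3 application looked something like this (class name and connection string are my own, illustrative, choices):

```csharp
// A design-time factory so the EF Core tools can create the DbContext.
// In EF Core 3 + ASP.NET Core 3 this isn't needed: the tools call
// CreateHostBuilder in the Program class instead.
public class MyDbContextFactory : IDesignTimeDbContextFactory<MyDbContext>
{
    public MyDbContext CreateDbContext(string[] args)
    {
        var optionsBuilder = new DbContextOptionsBuilder<MyDbContext>();
        optionsBuilder.UseSqlServer(
            "Server=(localdb)\\mssqllocaldb;Database=MyDb;Trusted_Connection=True");
        return new MyDbContext(optionsBuilder.Options);
    }
}
```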

Chapter 12 – EF Core performance tuning

All the suggestions for improving performance are just as useful in EF Core 3, but here are a few adjustments for this chapter.

Pages 338 to 339 – Accessing the logging information produced by EF Core

The code that showed how to set up logging for EF Core is superseded. The correct way to get logging information from EF Core is to use UseLoggerFactory when defining the options. Here is an example of how this could work (see the full code in my EfCore.TestSupport library).

var logs = new List<LogOutput>();
var options = new DbContextOptionsBuilder<BookContext>()
    .UseLoggerFactory(new LoggerFactory(new[]
    {
        new MyLoggerProviderActionOut(log => logs.Add(log))
    }))
    .UseSqlite(connection)
    .Options;
using (var context = new BookContext(options))
//… your code to test

In this section I also talked about the QueryClientEvaluationWarning, which is replaced in EF Core 3 with a hard exception if the query can’t be translated to a single, valid database access command.

NOTE: My EfCore.TestSupport library has methods to setup Sqlite in-memory and SQL Server database with logging – see this documentation page for examples of how to do this.

Pages 347 to 348 – Allowing too much of a data query to be moved into the software side

With EF Core 3’s much more stringent approach to query translation, any use of client vs. server evaluation will cause an exception. That means that “allowing too much of a data query to be moved into the software side” can’t happen, but I do suggest using unit tests to check your queries so that you don’t get exceptions in your live application.

Chapter 13 – A worked example of performance tuning

I plan to write a separate article to look at EF Core performance, but in summary EF Core 3 is better at handling relationships that are collections, e.g. one-to-many and many-to-many, because it loads the collections with the primary data. But there are still areas where you can improve EF Core’s performance.

NOTE: I wrote an article “Entity Framework Core performance tuning – a worked example” soon after I finished the book which summarises chapter 13. I plan to write a new article to look at the performance tuning that EF Core can offer.

Chapter 14 – Different database types of EF Core services

Pages 403 to 405 – Finding book view changes

In table 14.1 I show the BookId and ReviewId having temporary key values after the context.Add method has been executed. In EF Core 2 these temporary key values were put in the actual key properties, but in EF Core 3 the temporary values are stored in the entity’s tracking information, leaving the key properties unchanged. To find out why, see this explanation.
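A small sketch of the difference (my own example, using the Book class from the book):

```csharp
var book = new Book { Title = "New Book" };
context.Add(book);

// EF Core 2: book.BookId now holds a temporary (e.g. large negative) value.
// EF Core 3: book.BookId keeps its default value of 0...

// ...while the temporary value lives in the tracking information,
// which you can inspect via the PropertyEntry
var keyEntry = context.Entry(book).Property(nameof(Book.BookId));
var isTemp = keyEntry.IsTemporary;   // true until SaveChanges assigns a real key
```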

Pages 413 to 416 – Replacing an EF Core service with your own modified service

This section is out of date because the internals of EF Core 3 have changed a lot. The concept is still valid but the actual examples don’t work any more, as the names or signatures of the methods have changed.

NOTE: I have updated my EfSchemaCompare feature in my EfCore.TestSupport library. This calls part of the scaffolder feature in EF Core to read the model of the database we are checking. You can find the EF Core 2 and EF Core 3 code that creates the DatabaseModel service that reads the database’s Schema.

Chapter 15 – Unit testing EF Core applications

Simulating a database—using an in-memory database (pages 430 to 431)

In listing 15.7 I talk about setting the QueryClientEvaluationWarning to catch poorly performing queries. This isn’t needed any more as the new query translation always throws an exception if the query can’t be translated to a single, valid database access command.

Capturing EF Core logging information in unit testing (pages 443 to 446)

Just as I pointed out in chapter 12, the way to capture EF Core logging information has changed (see the Chapter 12 sub-section for the corrected code).

NOTE: I recommend using the EfCore.TestSupport NuGet package which was built as part of Chapter 15. I have updated this library to handle both EF Core >=2.1 and EF Core >=3.0. This library has various …CreateOptionsWithLogging methods that set up options with logging included – see the docs for these methods for an example of how this works.

Extra information

One of the big (biggest?) changes in EF Core 3 is the support of the NoSQL database, Cosmos DB. This is a big development and means EF Core can now support both SQL and NoSQL databases. I expect to see more NoSQL database providers in the future.

In Chapter 14 I built a CQRS architecture system using two databases: SQL Server for all writes and some reads and a NoSQL database (RavenDb) for the high-performance read side. I plan to write a new article doing the same thing, but using EF Core 3 and Cosmos DB. 

NOTE: I already wrote an article “Building a robust CQRS database with EF Core and Cosmos DB” when the first version of the Cosmos DB database provider was out in EF Core 2.2. It worked, but was very limited and slow. I plan to write a new version with EF Core 3 and look at the performance.

List of EF Core 3 updates to the EfCoreInAction repo

I have updated three branches of the companion EfCoreInAction repo. They are:

  • Chapter01-NetCore3: This is useful for people who want to start with a simple application. It now runs a netcoreapp3.0 console application with EF Core 3.0
  • Chapter05-NetCore3: This provides you with an ASP.NET Core 3.0/EF Core 3.0 application and the related unit tests.
  • Chapter13-Part3-NetCore3: I mainly did this to get all the unit tests from chapter 2 to 13 updated to EF Core 3. That way I could ensure I hadn’t missed any changes.

NOTE: If you have Visual Studio Code (VSCode) then there is a great extension called GitLens which allows you to compare GIT branches. You might like to use that to compare the Chapter13-Part3-NetCore21 and Chapter13-Part3-NetCore3 to see for yourself what has changed.

Conclusion

As you have seen, the EF Core 3 update hardly changes the way you write your code, but there are very big changes inside. Looking at the breaking changes I can see the reasons for all the changes, and quite a few of the changes were things I had bumped up against. Overall, I think updating an application from EF Core 2 to EF Core 3 shouldn’t be that difficult from the coding side.

The part that could catch you out is the query overhaul. This could cause some of your queries to fail with an exception. If you already checked for QueryClientEvaluationWarning in EF Core 2 you should be OK but if you didn’t then EF Core 3 will throw an exception on any queries that don’t translate properly to database commands.

You can get around the exception by breaking the query into two parts using AsEnumerable (see this example), and the exception message provides information on what part of the query is causing a problem. The upside of this change is you now know none of your queries are running parts of the query in your application instead of in the database.
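The two-part split can be sketched like this (my own example; ComplexFilterNotTranslatable stands in for whatever untranslatable logic you have):

```csharp
var books = context.Books
    .Where(b => b.Price < 50)    // translated to SQL, runs in the database
    .AsEnumerable()              // everything after this runs in software
    .Where(b => ComplexFilterNotTranslatable(b))
    .ToList();
```

The point of the explicit AsEnumerable is that the boundary between database work and in-memory work is now visible in your code, instead of EF Core silently deciding it for you.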

Opening up EF Core to NoSQL is potentially an even bigger change, as NoSQL databases are a very important part of the database scene. In Chapter 14 of my book I used a CQRS database architecture with a NoSQL database (RavenDb) for the read-side database, which significantly improved the performance of my application (see this article). Please do look out for the article “Building a robust CQRS database with EF Core and Cosmos DB – NET Core 3” when it comes out, which repeats this approach but uses Cosmos DB via EF Core 3.

Happy coding!

If you have an ASP.NET Core or Entity Framework Core problem that you want help on then I am available as a freelance contractor. Please send me a contact request via my Contact page and we can talk some more on Skype.

Part 7 – Adding the “better ASP.NET Core authorization” code into your app

Last Updated: September 29, 2019 | Created: September 28, 2019

I have written a series about a better way to handle authorization in ASP.NET Core which adds extra ASP.NET Core authorization features. Things like the ability to change role rules without having to edit and redeploy your web app, or the ability to impersonate a user without needing their password. This series has quickly become the top set of articles on my web site, with lots of comments and questions.

A lot of people like what I describe in these articles, but they have problems extracting the bits of code to implement the features this article describes. This article, with its improved PermissionAccessControl2 repo, is here to make it much easier to pick out the code you need and put it into your ASP.NET Core application. I then give you a step by step example of selecting a “better authorization” feature and putting it into an ASP.NET Core 3.0 app.

I hope this will help people who like the features I describe but found the code really hard to understand, and here is a list of all the articles in the series so that you can find information on each feature.

TL;DR; – summary

  • The “better authorisation” series provides a number of extra features to ASP.NET Core’s default Role-based authorisation system.
  • I have done a major refactor to the companion PermissionAccessControl2 repo on GitHub to make it easier for someone to copy the code they need for the features they want to put in their code.
  • I have also built separate authorization methods for the different combinations of my authorization features. This makes it easier for you to pick the feature that works for you.
  • I have also improved/simplified some of the code to make it easier to understand.
  • I then use a step by step description of what you need to do to add a “better authorization” feature/code into your ASP.NET Core application.
  • I have made a copy of the code I produced in these steps available via the PermissionsOnlyApp repo in GitHub.

Setting the scene – the new structure of the code

Over the six articles I added new features on top of the features I had already implemented. This meant the final version in the PermissionAccessControl2 repo was really complex, which made it hard work for people to pick out the simple version they needed. In this big refactor I have split the code into separate projects so that it’s easy to see what each part of the application does. The diagram below shows the updated application projects and how they link to each other.

This project may look complex, but many of these projects contain less than ten classes. These projects allow you to copy classes etc. from the projects that cover the features you want, and ignore projects that cover features you don’t want.

The other problem area was the database. Again, I wanted the classes relating to the added Authorization code kept separate from the classes I used to provide a simple, multi-tenant example. This led me to have a non-standard database that was hard to create/migrate. The good news is you only need to copy over parts of the ExtraAuthorizeDbContext DbContext into your own DbContext, as you will see in the next sections.

The Seven versions of the features

Before I get into the step by step guide, I wanted to list the seven different classes that provide different mixes of features, so that you can easily pick and choose.

In fact, the PermissionAccessControl2 application allows you to configure different parts of the application so that you can try different features without editing any code. The feature selection is done by the “DemoSetup” section in the appsettings.json file. The AddClaimsToCookies class reads the “AuthVersion” value on startup to select what code to use to set up the authorization. Here are the seven classes that can be used, with their “AuthVersion” value.

  1. Applied when you log in (via IUserClaimsPrincipalFactory)
    a. LoginPermissions: AddPermissionsToUserClaims class.
    b. LoginPermissionsDataKey: AddPermissionsDataKeyToUserClaims class.
  2. Applied via a Cookie event
    a. PermissionsOnly: AuthCookieValidatePermissionsOnly class
    b. PermissionsDataKey: AuthCookieValidatePermissionsDataKey class
    c. RefreshClaims: AuthCookieValidateRefreshClaims class
    d. Impersonation: AuthCookieValidateImpersonation class
    e. Everything: AuthCookieValidateEverything class

Having all these versions makes it easier for you to select the feature set that you need for your specific application.

Step-by-Step addition to your ASP.NET Core app

I am now going to take you through the steps of copying over the most asked-for feature in my “better authorization” series – the Roles-to-Permissions system, which I describe in the first article. This gives you two features that aren’t in ASP.NET Core’s Roles-based authorization, that is:

  • You can change the authorization rules in your application without needing to edit code and re-deploy your application.
  • You have simpler authorization code in your application (see section “Role authorization and its limitations” for more on this).
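To show what the “simpler authorization code” looks like in practice, here is the style of controller code the Roles-to-Permissions approach enables (the HasPermission attribute and the Permissions enum come from the PermissionAccessControl2 repo; the member names here are illustrative):

```csharp
// One attribute, one permission - no long lists of Role names
[HasPermission(Permissions.SalaryRead)]
public IActionResult ViewSalaries()
{
    // Only users whose current roles include the SalaryRead
    // permission get this far
    return View();
}
```

Because the check is against a Permission rather than a Role, you can change which Roles carry that Permission in the database, without touching or redeploying this code.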

I am also going to use the more efficient UserClaimsPrincipalFactory method (see this section of article 3 for more on this) of adding the Permissions to the user’s claims. This adds the correct Permissions to the user’s claims when they log in.

And because .NET Core 3 is now out I’m going to show you how to do this for an ASP.NET Core 3.0 MVC application.

NOTE: I have great respect for Microsoft’s documentation, which has become outstanding with the advent of NET Core. The amount of information and updates on NET Core was especially good – fantastic job everyone.  I also have come to love Microsoft’s step-by-step format which is why I have tried to do the same in this article. I don’t think I made it as good as Microsoft, but I hope it helps someone.

Step 1: Is your application ready to take the code?

Firstly, you must have some form of authorisation system, i.e. something that checks the user is allowed to log in. It can be any form of authorisation system (my original client used Auth0 with the OAuth2 setup in ASP.NET Core). The code in my example PermissionAccessControl2 repo has some approaches that work with most authorisation systems, but some of the more complex ones only work with systems that store claims in a cookie (although the approach could be altered to tokens etc.).

In my example I’m going to go with an ASP.NET Core 3.0 MVC app with Authentication set to “Individual User Accounts -> Store user accounts in-app”. This means a database is added to your application, and by default it uses cookies to hold the logged-in user’s Claims. Your system may be different, but with this refactor it’s easier for you to work out what code you need to copy over.

Step 2: What code will we need from the PermissionAccessControl2 repo?

You need to start by deciding what parts of the PermissionAccessControl2 projects you need. This will depend on what features you want. Because the projects have names that fit the features, it is easier to find things.

In my example I’m only going to use the core Roles-to-Permissions feature so I only need the code in the following projects:

  • AuthorizeSetup: just the AddPermissionsToUserClaims class
  • FeatureAuthorize: All the code.
  • DataLayer: A cut-down version of the ExtraAuthorizeDbContext DbContext (no DataKey, no IAuthChange) and some, but not all, of the classes in the ExtraAuthClasses folder.
  • PermissionParts: All the code.

That means I only need to look at about 50% of the projects in the PermissionAccessControl2 repo.

Step 3: Where should I put the PermissionAccessControl2 code in my app?

This is an architectural/style decision and there aren’t any firm rules here. I lean towards separating my apps into layers (see the diagram in this section of an article on business logic). There are lots of ways I could do it, but I went for a simple design as shown below.

Step 4: Moving the PermissionsParts into DataLayer

That was straightforward – no edits needed and no NuGet packages needed to be installed.

Step 5: Moving the ExtraAuthClasses into DataLayer

The ExtraAuthClasses contain code for all the features, which makes this the most complex step, as you need to remove the parts that aren’t used by the features you want. Here is a list of what I needed to do.

5.a Remove unwanted ExtraAuthClasses.

I start by removing any ExtraAuthClasses that I don’t need because I’m not using those features. In my example this means I delete the following classes:

  • TimeStore – only needed for “refresh claims” feature.
  • UserDataAccess, UserDataAccessBase and UserDataHierarchical – these are about the Data Authorize feature, which we don’t want.

5.b Fix errors

Because I didn’t copy over projects/code for features I don’t want, some things showed up as compile errors, and some just needed to be changed/deleted.

  • Make sure the Microsoft.EntityFrameworkCore NuGet package has been added to the DataLayer.
  • You will also need the Microsoft.EntityFrameworkCore.Relational NuGet package in DataLayer for some of the configuration.
  • Remove the IChangeEffectsUser and IAddRemoveEffectsUser interfaces from classes – they are for the “refresh claims” feature.
  • Remove any using statements that refer to left-out/moved projects.

NOTE: I used some classes/interfaces from my EfCore.GenericServices library to handle error handling in some of the methods in my ExtraAuthClasses. But I am building this app three days after the release of NET Core 3.0 and I haven’t (yet) updated GenericServices to NET Core 3.0 so I added a project called GenericServicesStandIn to host the Status classes.

5.c Adding the ExtraAuthClasses to your DbContext

I assume you will have a DbContext that you use to access the database. Your application DbContext might be quite complex, but in my example PermissionOnlyApp I started with a basic application DbContext, as shown below.

public class MyDbContext : DbContext
{
    //your DbSet<T> properties go here

    public MyDbContext(DbContextOptions<MyDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Your configuration code will go here
    }
}

Then I needed to add the DbSet<T> properties for the ExtraAuthClasses I need, plus some extra configuration too.

public class MyDbContext : DbContext
{
    //your DbSet<T> properties go here

    //Add the ExtraAuthClasses needed for the feature you have selected
    public DbSet<UserToRole> UserToRoles { get; set; }
    public DbSet<RoleToPermissions> RolesToPermissions { get; set; }
    public DbSet<ModulesForUser> ModulesForUsers { get; set; }

    public MyDbContext(DbContextOptions<MyDbContext> options)
        : base(options)
    { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Your configuration code will go here

        //ExtraAuthClasses extra config
        modelBuilder.Entity<UserToRole>().HasKey(x => new { x.UserId, x.RoleName });
        modelBuilder.Entity<RoleToPermissions>()
            .Property("_permissionsInRole")
            .HasColumnName("PermissionsInRole");
    }
}

Now you can replace the references to ExtraAuthorizeDbContext in some of the ExtraAuthClasses with references to your own DbContext.

Step 6: Move the code into the RolesToPermissions project

Now we need to move the last of the Roles-to-Permissions code into the RolesToPermissions project. Here are the steps I took.

6a. Move code into this project from the PermissionAccessControl2 version.

What you move depends on what features you want. If you have some of the complex features, like refreshing the user’s claims on auth changes or user impersonation, you might want to put each part in a separate folder. But in my example I only need the code from one project, plus one class from another. Here is what I did.

  • I moved all the code in the FeatureAuthorize project (including the code in the PolicyCode folder).
  • I then copied over the specific setup code I needed for the feature I wanted. In this case that was the AddPermissionsToUserClaims class.

At this point there will be lots of errors, but before we sort these out you have to select the specific setup code you need.

6b. Add project and NuGet packages needed by the code

This code accesses a number of support classes to work. The error messages on the “usings” in the code will show you what is missing. The most obvious is that this project needs a reference to the DataLayer. Then there are NuGet packages – I only needed three, but you might need more.
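If you prefer the dotnet CLI over Visual Studio’s UI, the project reference and package installs can be done like this (the project paths and package names below are assumptions – let your compile errors tell you the exact set you need):

```shell
# Add the DataLayer project reference (paths are assumptions)
dotnet add RolesToPermissions/RolesToPermissions.csproj reference DataLayer/DataLayer.csproj

# Add the NuGet packages the moved code needs - these three are examples
dotnet add RolesToPermissions/RolesToPermissions.csproj package Microsoft.AspNetCore.Identity.EntityFrameworkCore
dotnet add RolesToPermissions/RolesToPermissions.csproj package Microsoft.AspNetCore.Authorization
dotnet add RolesToPermissions/RolesToPermissions.csproj package Microsoft.EntityFrameworkCore
```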

6c. Fix references and “usings”

At this point I still had compile errors, partly because the application’s DbContext has a different name, and partly because of some old (bad) “usings”. The steps are:

  • The code refers to ExtraAuthorizeDbContext, but now it needs to reference your application DbContext. In my example that is called MyDbContext. You will also need to add a “using DataLayer” to reference that.
  • You will have a number of incorrect “using” statements at the top of various files – just delete them.

Step 7 – setting up the ASP.NET Core app

We have added all the Roles-to-Permission code we need, but the ASP.NET Core code doesn’t use it yet. In this section I add code to the ASP.NET Core Startup class to link the AddPermissionsToUserClaims class into the UserClaimsPrincipalFactory. I also assume you have a database you already use, to which I have added the ExtraAuthClasses. Here is the code that does all this.

public void ConfigureServices(IServiceCollection services)
{
    // … other standard service registration removed for clarity

    //This registers your database, which now includes the ExtraAuthClasses
    services.AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(
              Configuration.GetConnectionString("DemoDatabaseConnection")));

    //This registers the code to add the Permissions on login
    services.AddScoped<
         IUserClaimsPrincipalFactory<IdentityUser>, 
         AddPermissionsToUserClaims>();

    //Register the Permission policy handlers
    services.AddSingleton<IAuthorizationPolicyProvider,
         AuthorizationPolicyProvider>();
    services.AddSingleton<IAuthorizationHandler, PermissionHandler>();
}

NOTE: In my example app I also set the options.SignIn.RequireConfirmedAccount to false in the AddDefaultIdentity method that registers ASP.NET Core Identity. This allows me to log in via a user I add at startup (see next section). This demo user won’t have its email verified, so I needed to turn off that constraint. In a real system you might not need that.
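For reference, that option is set where ASP.NET Core Identity is registered in ConfigureServices. A minimal sketch, assuming the template’s default ApplicationDbContext for the Identity store:

```csharp
services.AddDefaultIdentity<IdentityUser>(options =>
        //Turn off email confirmation so the demo SuperAdmin user can log in
        options.SignIn.RequireConfirmedAccount = false)
    .AddEntityFrameworkStores<ApplicationDbContext>();
```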

At this point all the code is linked up, but we need an EF Core migration to add the ExtraAuthClasses to your application DbContext. Here is the command I ran in Visual Studio’s Package Manager Console – note that it has to have extra parameters because there are two DbContexts (the other one is the ASP.NET Identity ApplicationDbContext).

Add-Migration AddExtraAuthClasses -Context MyDbContext -Project DataLayer
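If you use the dotnet CLI instead of the Package Manager Console, the equivalent commands look like this (a sketch – depending on your solution layout you may also need a --startup-project option pointing at the ASP.NET Core project):

```shell
# create the migration, then apply it to the database
dotnet ef migrations add AddExtraAuthClasses --context MyDbContext --project DataLayer
dotnet ef database update --context MyDbContext --project DataLayer
```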

Step 8 – Add Permissions to your Controller actions/Razor Pages

All the code is in place so now you can use Permissions to protect your ASP.NET Core Controller actions or Razor Pages. I explain Roles-to-Permissions concept in detail in the first article, so I’m not going to cover that again. I’m just going to show you how a) to add a Permission to the Permissions Enum and then b) protect an ASP.NET Core Controller action with that Permission.

8a. Edit the Permissions.cs file

In the code you copied into the PermissionsParts folder in the DataLayer you will find a file called Permissions.cs, which defines the enum called “Permissions”. It has some example Permissions which you can remove, apart from the one named “AccessAll” (that is used by the SuperAdmin user). Then you can add the Permission names you want to use in your application. I typically give each Permission a number, but you don’t have to – the compiler will do that for you. Here is my updated Permissions enum.

public enum Permissions : short //I set this to short because the PermissionsPacker stores them as Unicode chars 
{
    NotSet = 0, //error condition

    [Display(GroupName = "Demo", Name = "Demo", Description = "Demo of using a Permission")]
    DemoPermission = 10,

    //This is a special Permission used by the SuperAdmin user. 
    //A user with this permission can access any other permission.
    [Display(GroupName = "SuperAdmin", Name = "AccessAll", Description = "This allows the user to access every feature")]
    AccessAll = Int16.MaxValue, 
}

8b. Add HasPermission Attribute to a Controller action

To protect an ASP.NET Core Controller action or Razor Page you add the “HasPermission” attribute to it. Here is an example from the PermissionOnlyApp.

public class DemoController : Controller
{
    [HasPermission(Permissions.DemoPermission)]
    public IActionResult Index()
    {
        return View();
    }
}

For this action to be executed the caller must a) be logged in and b) either have the Permission “DemoPermission” in their set of Permissions, or be the SuperAdmin user who has the special Permission called “AccessAll”.
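The same attribute works on a Razor Page, applied to the page’s PageModel class. A minimal sketch (the page name here is hypothetical):

```csharp
[HasPermission(Permissions.DemoPermission)]
public class SalesModel : PageModel
{
    public void OnGet()
    {
        //Only users with DemoPermission (or AccessAll) get here
    }
}
```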

At this point you are good to go apart from creating the databases.

Step 9 – extra steps to make it easier for people to try the application

I could stop there, but I like to make sure someone can easily run the application to try it out. I am therefore going to add:

  • Some code in Program to automatically migrate both databases on startup (NOTE: This is NOT the best way to do migrations as it fails in certain circumstances – see my articles on EF Core database migrations).
  • I’m going to add a SuperAdmin user to the system on startup if they aren’t there. That way you always have an admin user available to log in on startup – see this section from the part 3 article about the SuperAdmin and what that user can do.

I’m not going to detail this because you will have your own way of setting up your application, but it does mean you can just run this application from VS2019 and it should work OK (it does use the SQL localdb).
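As a rough sketch of the migrate-on-startup code (remember the NOTE above – this approach fails in certain circumstances; the host-building code here is the standard ASP.NET Core 3.x template):

```csharp
public static void Main(string[] args)
{
    var host = CreateHostBuilder(args).Build();

    //Migrate both databases on startup - NOT safe for all deployment scenarios
    using (var scope = host.Services.CreateScope())
    {
        var services = scope.ServiceProvider;
        services.GetRequiredService<ApplicationDbContext>().Database.Migrate();
        services.GetRequiredService<MyDbContext>().Database.Migrate();
        //...the SuperAdmin seeding code would also go here
    }

    host.Run();
}
```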

NOTE: The first time you start the application it will take a VERY long time to start up (> 25 seconds on my system). This is because it is applying migrations to two databases. The second time will be much faster.

Conclusion

Quite a few people have contacted me with questions about how they can add the features I described in this series into their own code. I’m sorry it was so complex, and I hope this new article makes that job easier. I also took the time to improve/simplify some of the code. Hopefully this will make it easier to understand and transfer the ideas and code that go with these articles.

All the best with your ASP.NET Core applications!