GenericServices: A library to provide CRUD front-end services from an EF Core database

Last Updated: April 4, 2019 | Created: April 13, 2018

This article is about a NuGet library designed to make building Create, Read, Update and Delete (CRUD) web pages quicker to write. GenericServices acts as an adapter and command pattern between EF Core and your web/mobile/desktop application. This article describes why this is useful and how it can save you development time. My examples use ASP.NET Core Razor Pages, but it can work with any type of front-end application.

Other related articles in this series:

TL;DR – summary

The GenericServices library bridges the gap between the database and the front-end user: the database is designed around the business domain (say, e-commerce), while the user wants pages that are appropriate to their needs (say, listing all the cookery books). The library implements a read/write Adapter pattern, and a Command pattern for writes to the database.

My experience with a previous library for EF6.x was that it could save 2 months on a 12-month project.

Each CRUD page takes a little bit of code, but applications have lots and lots of such pages. Thankfully these pages have a set of common patterns, and GenericServices is built to implement these patterns – you just need to provide a ViewModel/DTO (Data Transfer Object) which contains the properties for the CRUD action you want to accomplish. (I use DTO in the rest of this article)

GenericServices’ features are:

  • Easy to use: just mark your ViewModel/DTOs to tell GenericServices how to use them.
  • Reduces the amount of CRUD code you need to write – just a ViewModel/DTO and two calls.
  • Can work with standard EF Core entity classes and Domain-Driven Design (DDD) styled entity classes.
  • Contains good checking and security features, especially with DDD-styled entity classes.

NOTE The GenericServices library is open-source (MIT licence) on GitHub and is available on NuGet as EfCore.GenericServices. The code, including an example ASP.NET Core Razor Pages application, is available on GitHub here. There is extensive documentation in the project’s Wiki.

Who should read this article?

If you are building applications with many front-end CRUD displays, then the GenericServices library will help you develop these pages quickly. GenericServices turns each CRUD access into a common set of calls around your display-specific ViewModel/DTO (I will use DTO from now on for this class). That minimises the code you need to write, which saves you time.

You do need to be using EF Core for your database access, as GenericServices is designed around EF Core (the NuGet package is called EfCore.GenericServices). GenericServices is designed to work with EF Core entity classes (i.e. the classes mapped to the database) that use the standard style (properties are changed by setting the property) and Domain-Driven Design (DDD) styled entity classes, where data is updated via methods in the entity class.

To keep this article short, I assume you know C# and have some idea of what the ASP.NET Core framework does. The examples I use in this article use ASP.NET Core’s Razor Pages, but in the article “How to write good, testable ASP.NET Core Web API code quickly” I show how to use GenericServices with Web APIs.

The problem that GenericServices tackles

GenericServices covers CRUD displays, especially in web/mobile applications. I’m going to use an “update of an entity” example to describe the typical issues that come up. My example application is an e-commerce site selling technical books, and I implement a feature where an authorised user can add a sales promotion to a book by reducing its price. I will show you two versions: a hand-coded version and a version where I use GenericServices.

To set the scene, let’s look at what my “add a promotion” feature looks like on an ASP.NET Core web site. Here is the display, which the user has filled in (in red), with comments on the left about the Book properties and how they are involved in the update.

In a web/mobile application a feature like this consists of two stages:

  1. The display of the data to the user where they can put in their changes (see figure above)
  2. Once the user presses the button (“Add” in this case) the information is sent back and the database update is done.

In the “Add promotion” example the first stage needs to read five properties from the database to show the user. In the second stage, triggered by the user pressing the “Add” button, the primary key (BookId) is used to select the Book to be updated, and then the NewPrice and PromotionalText values are updated in the book.

NOTE: All my examples are synchronous, but GenericServices contains async versions of every command.

1. The hand-coded version

Let’s start by looking at a hand-coded implementation of a service in an ASP.NET Core application. This sets the standard in terms of lines of code, and performance (I compare the performance of the hand-coded version with the GenericServices version near the end of this article).

I start with the DTO, and then the code to read and update the promotion.

public class AddPromotionDto
{
    public int BookId { get; set; }

    public decimal OrgPrice { get; set; }

    public string Title { get; set; }

    public decimal ActualPrice { get; set; }

    public string PromotionalText { get; set; }
}

And now the code to read in the data and then update the Book entity with the new promotion.

public class AddPromotionService : 
    StatusGenericHandler, IAddPromotionService
{
    private readonly EfCoreContext _context;

    public AddPromotionService(EfCoreContext context)
    {
        _context = context;
    }

    public AddPromotionDto GetOriginal(int id)      
    {
        var dto = _context.Books
            .Select(p => new AddPromotionDto
            {
                BookId = p.BookId,
                Title = p.Title,
                OrgPrice = p.OrgPrice,
                ActualPrice = p.ActualPrice,
                PromotionalText = p.PromotionalText
            })
            .SingleOrDefault(k => k.BookId == id);
        if (dto == null)
            AddError("Sorry, I could not find the book you were looking for.");
        return dto;
    }

    public Book AddPromotion(AddPromotionDto dto)
    {
        var book = _context.Find<Book>(dto.BookId);
        if (book == null)
        {
            AddError("Sorry, I could not find the book you were looking for.");
            return null;
        }
        //This is a standard entity class, where you update by setting the properties
        book.ActualPrice = dto.ActualPrice;
        book.PromotionalText = dto.PromotionalText;
        _context.SaveChanges();
        return book;
    }
}
2. The GenericServices version

Now let’s look at how GenericServices would do this. Firstly, we change the DTO slightly, as shown below.

public class AddPromotionDto : ILinkToEntity<Book>
{
    [ReadOnly(true)]
    public int BookId { get; set; }

    [ReadOnly(true)]
    public decimal OrgPrice { get; set; }

    [ReadOnly(true)]
    public string Title { get; set; }

    public decimal ActualPrice { get; set; }

    public string PromotionalText { get; set; }
}

Two things have changed:

  • GenericServices needs to know what entity class your DTO links to, which you do by adding an ILinkToEntity<TEntity> interface. The interface is empty, i.e. you don’t have to add anything to your DTO – it’s just there for GenericServices to find the DTO and extract the entity class type from the interface.
  • You also need to tell GenericServices which properties are read-only, so it won’t copy them back to the database (that’s a security feature to stop malicious updates). You do this by adding [ReadOnly(true)] attributes to the DTO properties that are read-only.

NOTE: Technically you don’t need the [ReadOnly(true)] attributes if you have DDD-styled entity classes, as your entity class methods control access to the data inside the entity class (but I do add them, as it allows GenericServices to better match the correct create/update method). But you must add the [ReadOnly(true)] attributes if you are using standard entity classes, as they tell GenericServices which properties should be updated, and which should not.

Now the new code for reading and then adding the promotion using GenericServices’ CrudServices.

public class GenericAddPromotionService 
{
    private readonly ICrudServices _service;

    public IStatusGeneric Status => _service;

    public GenericAddPromotionService(ICrudServices service)
    {
        _service = service;
    }

    public AddPromotionDto GetOriginal(int id)
    {
        return _service.ReadSingle<AddPromotionDto>(id);
    }

    public void AddPromotion(AddPromotionDto dto)
    {
        _service.UpdateAndSave(dto);
    }
}
In fact, there is little gain in wrapping the GenericServices calls like that, and I tend to put the GenericServices code directly in the front-end. Here is an example of a Razor page using GenericServices directly.

public class AddPromotionModel : PageModel
{
    private readonly ICrudServices _service;

    public AddPromotionModel(ICrudServices service)
    {
        _service = service;
    }

    [BindProperty]
    public AddPromotionDto Data { get; set; }

    public void OnGet(int id)
    {
        Data = _service.ReadSingle<AddPromotionDto>(id);
        if (!_service.IsValid)
        {
            _service.CopyErrorsToModelState(ModelState, Data, nameof(Data));
        }
    }

    public IActionResult OnPost()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }
        _service.UpdateAndSave(Data);
        if (_service.IsValid)
            return RedirectToPage("BookUpdated", new { message = _service.Message });

        //Error state
        _service.CopyErrorsToModelState(ModelState, Data, nameof(Data));
        return Page();
    }
}

What is interesting is that the C# code for the hand-coded Razor Page is the same length as the GenericServices Razor Page. This means that, if I use GenericServices directly in the Razor Page, I have saved myself from writing 38 lines of code (the AddPromotionService is 41 lines, but I needed to add three [ReadOnly(true)] attributes to the DTO).

But more than that, the Create, Read, Update and Delete patterns are the same, apart from the DTO type. This means I could copy the AddPromotion PageModel code and paste it into a PageModel to, say, update the publication date of the book, and all I would need to change is the type of the DTO (obviously the razor .cshtml file needs to change – I can’t do that for you (yet)).

GenericServices saves you from having to write repetitive (and boring!) CRUD front-end code. So now you can concentrate on the more complex parts of the application.

For me even a small business application will have 50 or more CRUD views, which means GenericServices can save me from writing many hundreds of lines of CRUD code. I know this because I built a similar library called GenericServices for EF6.x back in 2014 and I have used it on many applications. It has saved me many months of my life writing CRUD code (and hopefully the same for the other people who downloaded it).

NOTE: For anyone who has used my original GenericServices library, the new EfCore.GenericServices provides the same features, but is built in a very different way. Partly that is because it is designed to handle DDD-styled entity classes, and partly from feedback about making things simpler. I hope you find this new library even more useful than the original GenericServices library.

How does GenericServices work inside?

GenericServices has commands that cover the Create, Read, Update and Delete database actions. And every command can work with either an entity class or a DTO with an ILinkToEntity<TEntity> interface (apart from Delete, which only works on an entity class). But my experience is that a large proportion of CRUD accesses use DTOs, either to select what is shown to the user or to limit what gets updated – in these situations GenericServices comes into its own.

How it reads data from the database

For reading, e.g. ReadSingle<AddPromotionDto>(id), GenericServices uses AutoMapper’s ProjectTo method if the class is a DTO, or a simple EF Core Find<TEntity>(id) if the class you provide is an entity class. AutoMapper is a great library with impressive performance (see later) and does a great job of building LINQ Select queries.
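To make the read side concrete, here is a hedged sketch of the two read calls (BookListDto is a hypothetical list DTO, not one from this article; _service is an injected ICrudServices):

```csharp
// Single row: built as a LINQ Select via AutoMapper's ProjectTo
var dto = _service.ReadSingle<AddPromotionDto>(bookId);

// Many rows: ReadManyNoTracked returns an IQueryable, so the filter
// below is composed into the SQL and runs in the database
var cheapBooks = _service.ReadManyNoTracked<BookListDto>()
    .Where(b => b.ActualPrice < 20)
    .ToList();
```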

NOTE: It’s beyond the scope of this article, but you can alter the AutoMapper read mapping that GenericServices uses by creating a PerDtoConfig<TDto,TEntity> class and overriding the AlterReadMapping property. This means you can change the default mapping on a per-DTO basis – see the Wiki documentation for more on this.

How it creates a new entity or updates an existing entity

First, let’s look at the create or update of an entity class, which is pretty simple as it’s already in the correct form.

  1. Create entity: It calls EF Core’s Add method and then SaveChanges
  2. Update entity: If the entity has an EF Core State of Detached then it will call Update, otherwise it assumes you have updated it and will call SaveChanges.
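As a hedged sketch of what those two cases look like when you call GenericServices with an entity class directly (the property and constructor usage here are illustrative, not from the article):

```csharp
// Create: GenericServices calls EF Core's Add and then SaveChanges for you
var book = new Book();            // illustrative – your entity may need constructor arguments
_service.CreateAndSave(book);

// Update: an entity read via the service is tracked,
// so after changing it a SaveChanges is all that is needed
var existing = _service.ReadSingle<Book>(bookId);
existing.PublishedOn = DateTime.Today;   // assumes a settable property
_service.UpdateAndSave(existing);
```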

When it comes to create and update via a DTO, the process used depends on whether the entity class is a standard-style entity or a DDD-styled entity. Personally, I really like DDD-styled entity classes, but I can see that some simple classes, like an entity holding an Address, are quicker to write as a standard-style entity.

1. Create or update a standard style entity via a DTO

In this case it uses AutoMapper’s Map method to copy the DTO properties to the entity it has loaded. The default mapping will not copy properties that have the [ReadOnly(true)] attribute on them. Also, it won’t update a property that has a private setter.

NOTE Just like in the read case you have access to the AutoMapper save mapping by creating a PerDtoConfig<TDto,TEntity> class and overriding the AlterSaveMapping property.

2. Create or update a DDD-styled entity via a DTO

GenericServices looks for a constructor/method whose parameters’ names and types match the non-read-only DTO properties. There are lots of features here, but let me give you an example from the AddPromotion case.

The Book entity class has a method called AddPromotion, which looks like this.

public IStatusGeneric AddPromotion(decimal actualPrice, string promotionalText)                  
{
    var status = new StatusGenericHandler();
    if (string.IsNullOrWhiteSpace(promotionalText))
    {
        status.AddError(
            "You must provide some text to go with the promotion.", 
            nameof(PromotionalText));
        return status;
    }

    ActualPrice = actualPrice;  
    PromotionalText = promotionalText;

    status.Message = $"The book's new price is ${actualPrice:F}.";

    return status; 
}

NOTE: In this case the update method returns a type of IStatusGeneric, which allows it to provide a status result (methods that don’t need a status can return void).

GenericServices’ default process is to find an update method whose parameters’ names/types match the names/types of the non-read-only DTO properties (the name match handles camel/pascal case matching). In this case GenericServices finds the method called AddPromotion and calls it.
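If the automatic match ever needs a nudge (for example, two candidate methods), my understanding is that you can name the method to call explicitly; a hedged sketch:

```csharp
// Explicitly tell GenericServices which update method to use
_service.UpdateAndSave(dto, nameof(Book.AddPromotion));
```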

NOTE: There are obviously some complicated situations here, like having two methods that match, or methods that don’t need a property. The documentation for create and update goes into all the matching processes and how to override them.

As you can see from the AddPromotion update method, there are several benefits over setting properties in a standard-styled entity. In this case I included some checks and returned a success message that could be shown to the user. There are many advantages to using DDD-styled entities with their create and update methods – I recommend you look at my article “Creating Domain-Driven Design entity classes with Entity Framework Core” for more on this.

What about the performance of GenericServices?

I am concerned about performance, and I checked it regularly during development. Here are some timings produced by the BenchmarkDotNet library (which I found excellent for performance testing). I used the fastest database I could find, which turned out to be a SQLite in-memory database, so that the database part was minimised. You can find the performance tests in the project called Benchmarking.

The first example is for the simplest possible update – in this case I update the publication date of a book. I opened up the Book’s PublishedOn property so I could update it via AutoMapper as well as via a DDD-styled update method.

Method                        Mean (us)  Error (us)  StdDev (us)
HandCoded, PropertyUpdate     530.9      4.003       3.343
GenericServices, AutoMapper   551.4      5.624       5.261
HandCoded, MethodCall         529.1      5.583       5.223
GenericServices, MethodCall   555.7      5.828       5.451


This is about the fastest database update I could find, and the difference between the hand-coded and GenericServices versions was only 20 to 25 us (1 us = 1/1,000,000 of a second), which equates to about 5% slower. Note also that calling a method is just as fast as setting the property (GenericServices builds and caches LINQ expressions to call the methods, which are very fast).

The second example is a more complex update, where it adds a new Review entity to the Book. This takes a little longer inside EF Core and the database provider.

Method                        Mean (us)  Error (us)  StdDev (us)
HandCoded, MethodCall         773.8      8.882       7.874
GenericServices, MethodCall   800.7      11.262      7.874


This shows that the difference is still about 25 us, but because the database accesses take longer it now only equates to a 3% difference.

Finally, I do a complex Projection of 100 Books to provide the display for the user. This is using AutoMapper via GenericServices (the hand-coded version is a LINQ Select method with hand-coded mappings).

Method           Mean (ms)  Error (ms)  StdDev (ms)
HandCoded        11.35      0.2433      0.4970
GenericServices  11.23      0.0838      0.0784


This is a bit misleading, as GenericServices isn’t really faster than the hand-coded version – sometimes it comes out faster and sometimes slower. Basically, the performance cost of GenericServices is smaller than the error margin, so you can’t really measure it.

How hard is it to set up GenericServices?

I have tried to make it simple to add GenericServices to an existing or new application. There is full documentation on installing GenericServices, but here is an overview, assuming an ASP.NET Core application.

  1. You install the NuGet library EfCore.GenericServices in your ASP.NET Core project. If you hold your ViewModels/DTOs in another project, then you need to install EfCore.GenericServices in that project too.
    1. You might want to include the companion NuGet library called GenericServices.AspNetCore, which includes a method to copy errors from GenericServices into ASP.NET’s ModelState.
  2. You mark all your DTOs with the ILinkToEntity<TEntity> interface so that GenericServices can find them and work out what entity class they are linked to.
  3. You add a call to GenericServicesSimpleSetup<TContext> in the ConfigureServices method in the Startup class of an ASP.NET Core application. This registers GenericServices with DI – here is a link to the code in my Startup class showing what I did.
  4. You use the ICrudServices interface to inject an instance of the CrudServices into your controller or razor PageModel.

NOTE: There are other versions for registering GenericServices: another simple one that takes a GenericServiceConfig, and a more complex version to handle the case where you use multiple DbContexts to cover your database. There is also a non-DI version for registering one or two DTOs, useful for unit testing or serverless applications (see this documentation).
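Putting steps 1 to 4 together, here is a hedged sketch of what the registration might look like in Startup.ConfigureServices (EfCoreContext and AddPromotionDto are from this article’s example; the database provider and connection string name are illustrative assumptions):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<EfCoreContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc();

    // Scans the given assembly for ILinkToEntity<T> DTOs
    // and registers ICrudServices with the DI container
    services.GenericServicesSimpleSetup<EfCoreContext>(
        Assembly.GetAssembly(typeof(AddPromotionDto)));
}
```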


I have described what the GenericServices library is designed to do – to save you time by building robust CRUD web pages. My experience with my previous library with the same name (but totally different design) for EF6.x is that it can really reduce the time it takes to build applications which have a lot of CRUD pages (see this example, which I built in 10 days using the old library).

The new EfCore.GenericServices library is designed to be easier to use and has several new features. The biggest new feature is its ability to work with standard entity classes, where you update via the properties, and DDD-styled entity classes, where you update via methods in the entity class. Personally, I have found DDD-styled entity classes are good for hiding some of the more complex rules for creating and updating entities and their aggregates (see my article that explains that).

Please do have a look at the library and see what you think. The GenericServices GitHub repo contains an example ASP.NET Core Razor Pages application – it uses an in-memory database so that it can run anywhere, and always starts at a known setting. I also have a similar example of using GenericServices with Web APIs in this GitHub repo, which also uses an in-memory database so that it’s easy to run.

Happy coding.

Six things I learnt about using ASP.NET Core’s Razor Pages

Last Updated: April 13, 2018 | Created: April 9, 2018

ASP.NET Core 2.0 introduced a new way to build a web site, called Razor Pages. I was interested in the new Razor Pages approach, as I hoped Razor Pages would allow me to code better by following the SOLID principles – and they do. But first I had to work out how to use Razor Pages. This article describes my journey in learning how to use Razor Pages.

TL;DR – summary

ASP.NET Core Razor Pages make building web applications simpler, but they work differently to the existing controller->actions->views approach. This article is about the things that have changed and the best way to use the new approach that Razor Pages provides.

My view is that Razor Pages are a very good improvement! They produce better structured code, with a pattern that follows the Single Responsibility Principle (SRP), and you get better Separation of Concerns (SoC).

The aims of this article

This article will be useful to people who want to use ASP.NET Core Razor Pages. It looks at areas that I found were different from the existing controller->actions->views approach that has been around for many years. The six areas I cover are:

  1. Replacing the action: How should I construct my Razor Page code part?
  2. Routing: Using folders and filenames, with a different handling to invalid URLs
  3. It’s getting messy – how do I segregate the “Home” razor views?
  4. Model Binding: What is the best way to get data in/out to a razor action method?
  5. Changed: The validation name in the View is a bit more complex
  6. How can I implement AJAX requests in razor pages?

This article assumes you know C# and have some idea of what the ASP.NET Core framework does. I do talk about the existing controller->actions->views approach as a contrast, so it’s useful (but not essential) if you know that. The code is focused on ASP.NET Core 2.0 or above, as that is the point where Razor Pages was introduced.

NOTE: I should have read the ASP.NET Core documentation page “Introduction to Razor Pages in ASP.NET Core” before I started. It has a great overview of what Razor Pages are and how to use them. Do have a look if you have never come across Razor Pages before.

Setting the scene – how razor pages changes the design of your web application

I have used ASP.NET Core and ASP.NET MVC for many years, using its controller->actions->views approach. I found that the controller class could get very messy inside, with a whole host of action methods jumbled together. I learnt to put the absolute minimum of code in each action method, both to keep the controller simple and to allow the code to be tested outside the controller. Even so my controllers contained a mash-up of different actions handling different URLs – definitely not a nice structure (in my opinion).

Razor Pages changes all this. The controller has gone, and the action methods move into their own, per-URL class that deals with all the HTTP requests (GET, PUT, POST etc.) to one specific URL. The code is associated with the View file of the same name: the code provides the dynamic data and the view generates the HTML to show to the user.

That means Razor Pages applications are much cleaner and simpler to work with. The only problem I found was working out how to do all the things I used to do with controller and actions, but now with Razor pages. That took a bit of head scratching and experimenting, which is why I wrote this article.

Replacing the action: How should I construct my Razor Page code part?

If you create a Razor Pages application in Visual Studio or via the command line (dotnet new razor) then you get an ASP.NET Core application with three views, Index.cshtml, About.cshtml and Contact.cshtml. Each view has a file with the same name and extension, but with .cs on the end, e.g. Index.cshtml.cs, which contains the code. If we look at the About.cshtml.cs file its content looks like this.

public class AboutModel : PageModel
{
    public string Message { get; set; }

    public void OnGet()
    {
        Message = "Your application description page.";
    }
}

This threw me to start with, as action methods inside a controller must return a result – the same About action in an ASP.NET Core controller application would return an IActionResult result. Let’s understand what is happening here, and then we can see how this is helpful.

What Razor Pages does is use the class in the …cshtml.cs file, known as the PageModel. This contains various public properties and methods. In the About page case it uses the AboutModel – this class inherits from the PageModel class, and adds one public property called Message and an OnGet method. When a HTTP GET is sent to the URL /About then:

  1. The OnGet method is called, which sets up the property Message.
  2. Then the associated About.cshtml razor view is executed to create the HTML. This has access to the public property Message in the AboutModel class via @Model.Message.

The About case is very simple, and many requests can be handled with no return from the method. But there are other options. The first is that you don’t have to have a PageModel file at all if the page contains static data, or is simple enough to handle using razor (enhanced) features. But I quite like the PageModel for the typical code you would have put in the controller’s action method.

The other case is where the OnGet (or other On… methods) needs to return a result, for instance if you want to redirect or return an HTTP response such as 404 NotFound. Here is an example of a redirect.

public IActionResult OnGet(int id)
{
    Data = _context.Find<Book>(id);
    if (Data == null)
    {
        return RedirectToPage("MyError", 
              new { message = "I could not find that book" });
    }
    return Page();
}

The other case is if you are responding to an AJAX request and need to return data. I cover this in my AJAX article – use this link to go directly to the code that returns a JsonResult.

Routing: Using folders and filenames, with a different handling to invalid URLs

The first thing to say is Razor Pages route HTTP requests in a different way to the controller->actions->views approach. It uses the folder that the razor page is in as the first part of the URL, and the name of the file for the second part. So, a razor page called “PlaceOrder” in the folder “Pages/Checkout” would give a URL of Checkout/PlaceOrder.

There are a couple of rules that are similar to the controller->actions approach.

  • Any razor page in the top level of the “Pages” folder is accessed with no URL prefix – a bit like the HomeController.
  • Any razor page called “Index” is accessed without needing its name in the URL, same as with the controller->actions->views approach.

NOTE: Microsoft’s full documentation on routing and URL construction can be found here.

Now we need to talk about the different HTTP requests, like GET, POST, etc, and Async handling of requests. These are all done by the name of the methods in the PageModel file: methods starting with “On” then have the HTTP request type added, e.g. OnGet, OnPost, OnPut etc. Then, if you want async you need to add “Async” to the end of the name, e.g. OnGetAsync, and make the method async. These methods are known as handler methods.

NOTE: There is a way to have named handler methods – see Microsoft’s documentation on this. I also cover this in my article on AJAX and Razor Pages.
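As a hedged sketch of a named handler: a button in the view with asp-page-handler="Delete" maps to an OnPostDelete (or OnPostDeleteAsync) method in the PageModel. The class and its logic here are illustrative, not from this article:

```csharp
public class OrdersModel : PageModel
{
    // Called for a POST whose handler is "Delete",
    // e.g. <button type="submit" asp-page-handler="Delete">Delete</button>
    public IActionResult OnPostDelete(int id)
    {
        // ... delete the order with the given id ...
        return RedirectToPage();
    }
}
```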

Warning: If you leave out the On… method, or misname it, then the HTTP request won’t fail, but will just show the same view again. I mistyped the OnPost method as OnPut (I’m dyslexic) and chased around for an hour before I spotted it.

It’s getting messy – how do I segregate the “Home” razor views?

As I built my application I found I had more and more razor pages at the top level of the “Pages” directory (the “Home” directory in the controller->actions->views approach). The problem was that my “Home” pages were mixed in with the “Shared” pages like _Layout.cshtml, _ViewStart.cshtml etc., which made finding things more complex.

Thanks to the article “Setting a Custom Default Page in ASP.NET Core Razor Pages” by Matthew Jones, I could move my razor pages into a subdirectory, which I called “Home”, and then add this code to the Startup.cs file in my ASP.NET Core application:

services.AddMvc().AddRazorPagesOptions(options =>
{
    options.Conventions.AddPageRoute("/Home/Index", "");
});

That means that on startup the top URL will go to my Home/Index page. Now my “Home” razor pages are easy to find in my application.

Model Binding: What is the best way to get data in/out to a razor action method?

Model binding is what happens when you receive a HTTP request with information in the url, header, form etc. Model binding is a big topic and I’m not going to cover it all, but with Razor Pages applications you get an interesting alternative to binding in the handler method parameters.

I’m going to show you the default binding approach to use with forms, which adds the [BindProperty] attribute to the properties in the PageModel. But first let me explain the model binding that we are all used to from the controller->actions->views approach. Here is a very simple example of method parameter binding, taking in a ViewModel which has the data from the form.

public MyFormViewModel Data { get; set; }

public IActionResult OnPost(MyFormViewModel formInput)
{
    if (!ModelState.IsValid)
    {
        return Page();
    }
    //… do something with the formInput
}

That would work with razor pages, but what happens if the ModelState isn’t valid? It redisplays the form, but the data provided by the user is lost! That is because the “Data” property is empty. So, the correct way to do it is like this:

[BindProperty]
public MyFormViewModel Data { get; set; }

public IActionResult OnPost()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }
    //… do something with the Data
}

The [BindProperty] attribute (line 1 in the code above) says that when a POST HTTP request comes in, it will try to bind the data to the property (or properties) that have the [BindProperty] attribute. So, if the form data doesn’t pass the validation, i.e. ModelState.IsValid is false, then you can return the page and the data will still be there. That is very nice!

For a more in-depth look at form input to razor pages I recommend this page on model binding on the excellent site.

Changed: The validation name in the View is a bit more complex

I haven’t shown you a .cshtml page yet, so let’s look at one with a form to update the title of a book.

@page
@using Home
@model Home.ChangeTitleModel
@{
    ViewData["Title"] = "ChangeTitle";
}

<h2>Change book title</h2>

<div class="form-horizontal">
    <form method="post">
        @Html.ValidationSummary(false, null, new { @class = "text-danger h4" })
        <input name="Data.BookId" type="hidden" value="@Model.Data?.BookId" />
        <div class="form-group">
            <div class="col-sm-2">
                <label class="control-label">Book Title</label>
            </div>
            <div class="col-sm-10">
                <input asp-for="Data.Title" class="form-control col-sm-10" />
                <span asp-validation-for="Data.Title" class="text-danger"></span>
            </div>
        </div>
        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <button type="submit">Update</button>
            </div>
        </div>
    </form>
</div>

Now, if you look at the tag that says asp-validation-for, you will see that it’s set to “Data.Title”. When using ViewModels (which I do quite often) the name has a prefix of whatever you call your property – in my case “Data”.

The problem comes if you do your own validation tests somewhere else in your application, as you need to add the name of your property as a prefix. I do validation in my business logic and I therefore must prefix the property name with “Data.” (or the name of your property) when transferring the error to the ASP.NET ModelState. If you don’t you get a “hidden error”, where the page is redisplayed, but no error message is shown. This is very annoying to users!

Here is a very simple (naïve) approach for copying ValidationResult errors to the ModelState and adding the prefix.

var prefix = "Data.";
foreach (var error in service.Errors)
{
    var properties = error.MemberNames.ToList();
    ModelState.AddModelError(properties.Any()
        ? prefix + properties.First()
        : "",
        error.ErrorMessage);
}

NOTE: This naïve approach works, but you need to be careful that the property names your business logic returns correspond to actual properties in the ViewModel you are using, otherwise your error will still not be seen. I have a more sophisticated version here that checks that the MemberNames match a property in my ViewModel; otherwise I provide no name, which means the error is shown in the ValidationSummary part.
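Here is a sketch of that more sophisticated idea – the typeof(MyFormViewModel) reflection check is my illustration of the approach, not the exact library code:

```csharp
var prefix = nameof(Data) + ".";
foreach (var error in service.Errors)
{
    var memberName = error.MemberNames.FirstOrDefault();
    //only add the prefix if the name really is a property on the ViewModel,
    //otherwise use "" so the error appears in the ValidationSummary instead
    var isViewModelProperty = memberName != null
        && typeof(MyFormViewModel).GetProperty(memberName) != null;
    ModelState.AddModelError(
        isViewModelProperty ? prefix + memberName : string.Empty,
        error.ErrorMessage);
}
```

This way an error with an unrecognised member name still reaches the user, rather than becoming a “hidden error”.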

How can I implement AJAX requests in razor pages?

My application uses AJAX to fill a user-selectable filter, and I wasn’t sure how to do that in a Razor Pages application. There were some suggestions out there, but none of them were as clean as I wanted.

I wrote a long article called “ASP.NET Core Razor Pages: How to implement AJAX requests” where I describe the whole process, but in summary I added another razor page where the handler method returns a JsonResult. It works quite well, and continues the use of a SOLID software approach. If you need AJAX then have a look at the article.

Further room for thought

In the controller->actions->views design I made the action methods in my controllers as simple as I possibly could, and moved all the real code into a project I call the “Service Layer” (see point 2 in my “Six ways to build better Entity Framework (Core and EF6) applications” article to see what I mean by the Service Layer).

With Razor Pages, which isolate the code of each view, I find myself wanting to put more code in the PageModel class. The only problem then is – can I unit test it? Unit testing of ASP.NET Core is supposed to be easier, but the current approach isn’t simple. However, Scott Hanselman has highlighted a new package coming with the ASP.NET Core 2.1 release, called Microsoft.AspNetCore.Mvc.Testing, which looks very promising.

If that turns out to make unit testing of razor pages easier, then I think I will migrate much of the code that lives in my service layer into the razor pages handler methods. That would mean I could put more complex code in my razor pages and still be able to unit test the code. I look forward to trying that out.

NOTE: I am also working on a library to make building ASP.NET Core pages that access the database quicker to implement. Follow me on twitter @thereformedprog, or follow this technical blog site to be informed when something is available.


Conclusion

I am delighted by the cleaner approach that Razor Pages provides over the controller->actions->views approach. For human-focused applications I think Razor Pages is a good solution, but ASP.NET Core with swagger is most likely a better solution for a full-on WebAPI application.

Sure, there is a bit of learning to do to find the right ways to use Razor Pages, but the effort is rewarded with a cleaner and more refactorable web/mobile application. I hope these points help you in understanding and adopting Razor Pages.

Happy coding.

ASP.NET Core Razor Pages: How to implement AJAX requests

Last Updated: December 27, 2018 | Created: April 2, 2018

ASP.NET Core 2.0 introduced a new way to build a web site, called Razor Pages. I really like Razor Pages, but I needed to work out how to do a few things. In this article I describe how to handle AJAX requests. It turns out there are two ways to implement the C# code that handles HTTP requests that return json or other data.

TL;DR – summary

ASP.NET Core’s Razor Pages normally deliver HTML pages, but there is still the need to deliver data for AJAX requests. This can be done in two ways: via what is known as a named page handler, or by using a normal razor page.

I describe two approaches, but I think the second is better. I also cover how to handle HTTP POST requests via AJAX, as there is a need to add the anti-forgery token to the request.

Who should read this

This article is focused on how to handle AJAX requests in Razor Pages, so if that is your thing then this is useful to you.

To keep this article short, I assume you know C# and have some idea of what the ASP.NET Core framework does. I do give a very brief introduction to Razor Pages, but there are much better overviews elsewhere.

The code in this article is focused on ASP.NET Core 2.0 or above, as that is the point where Razor Pages was introduced.

NOTE The ASP.NET Core documentation page “Introduction to Razor Pages in ASP.NET Core” is a great overview of what Razor Pages are. The site is also a good resource to look at.

A very brief introduction to Razor Pages

The classic way to deliver web pages in ASP.NET Core and ASP.NET MVC is via the controller->actions->views approach. The controllers contained many methods (known as actions) which dealt with specific parts of the URL. For instance, the URL /Orders/Index would run the method called Index in the controller called OrdersController.

Razor pages provides a different, simpler (better!) approach. The controller has gone, and the action methods move into their own, per-URL class in a directory. For instance, with Razor Pages, the URL /Orders/Index would look for a Razor Pages view named Index in the directory called Orders.

The Razor Pages view for the /Orders/Index URL consists of two files:

  1. Orders/Index.cshtml – the Razor Pages’ view that converts to HTML (must be there)
  2. Orders/Index.cshtml.cs – the Razor Pages’ PageModel, which contains the code (optional)

The PageModel file is optional, but if you have any amount of C# code then this file is the place to put it. The only reasons for not having a PageModel file are if your page is static (i.e. no dynamic data), or the code is so simple you can write it using razor syntax. (Razor syntax is quite powerful now, with functions etc., but I still think the PageModel file is the right place to put the code that was typically written in the controller’s action methods before Razor Pages came along.)

What I really like about Razor Pages is they follow the Single Responsibility Principle (SRP) in that the two files deal with a specific URL and nothing else – they are very focused and meet the S.O.L.I.D. principles  of software design.

Here are two example files that implement the About page in the default Razor Pages application. This shows how they focus on one thing – the About page.


@page
@model AboutModel
@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"]</h2>
<h3>@Model.Message</h3>

<p>Use this area to provide additional information.</p>


using Microsoft.AspNetCore.Mvc.RazorPages;

namespace RazorPageApp.Pages
{
    public class AboutModel : PageModel
    {
        public string Message { get; set; }

        public void OnGet()
        {
            Message = "Your application description page.";
        }
    }
}

The problem I am trying to solve – AJAX requests

In any web application you are bound to come across places where you want to fill in or change some data on the page without doing a whole refresh (see this intro to AJAX). AJAX really helps in making a fast, user-friendly web interface.

In my case I had an application with a filter where the user can choose how they wanted to filter a list of books (see the live example site). Once the user has selected the filter type I use an AJAX request to get the values to put in the “Filter By” dropdownlist (greyed out in the figure below because the user hasn’t yet selected the filter type).

I know how to do this in the classic controller->action->view approach (you add another action to your controller), but how do you do it with Razor Pages?

It turns out there are two ways, and I will describe both (hint: but I like the second way more!). Then I will look at dealing with AJAX requests that use a HTTP POST, as there are extra security issues to handle.

NOTE that my examples work for AJAX, but they would also work for implementing a WebAPI, say for an external application. But ASP.NET Core with swagger is most likely a better solution for full-on WebAPIs.

Solution 1: Using named page handlers

In a typical PageModel file you have named methods called OnGet, OnPut, OnGetAsync etc. These deal with the different types of HTTP request to the URL defined by the folder the Razor Page is in, and the name of the Razor Page. But there is a way to define a call to a method outside of this naming convention, called named page handlers. The example given by Microsoft is providing different options for a particular page, say two different ways to register a user. But you can use it for AJAX calls too.

The default format for handlers is to add “?handler=YourMethodName” to the URL. So my first solution was to alter my JQuery ajax call as shown below:

$.ajax({
   url: '/?handler=Filter',
   data: {
       FilterBy: filterByValue
   }
})
.done(function(result) {
   //… rest of code left out
});

Then I changed my Index.cshtml.cs file to look like this

public class IndexModel : PageModel
{
    private readonly IListBooksService _listService;
    private readonly IBookFilterDropdownService _filterService;

    public IndexModel(IListBooksService listService, 
        IBookFilterDropdownService filterService)
    {
        _listService = listService;
        _filterService = filterService;
    }

    public SortFilterPageOptions SortFilterPageData { get; private set; }
    public IEnumerable<BookListDto> BooksList { get; private set; }

    public void OnGet(SortFilterPageOptions options)
    {
        BooksList = _listService /*… sort/filter/page query left out */;

        SortFilterPageData = options;
    }

    public JsonResult OnGetFilter(BooksFilterBy filterBy)
    {
        return new JsonResult(_filterService.GetFilterDropDownValues(filterBy));
    }
}

The OnGetFilter method at the end is the one to focus on. This is the named page handler with the name Filter, and it picks up the HTTP GET request (I talk about the HTTP POST AJAX request later).

This works fine, but it has made my Index PageModel not follow the Single Responsibility Principle, in that it’s dealing with two types of request. Hence, I looked for a more SOLID design, which I cover next.

Solution 2: A dedicated Razor Page just for the AJAX request

While a razor page normally displays HTML, there is no reason why its handler methods can’t return json, or any other data type. I therefore made myself a dedicated Razor Page for my filter. Here is the code:


This is the razor page

@page
@model RazorPageApp.Pages.FilterModel
@{
    ViewData["Title"] = "Filter";
}

You must have a .cshtml file, as this sets up the routing. By trial and error, I found you must have the first two lines, but you don’t need the rest.


The code now just contains the filter code, which is much clearer.

public class FilterModel : PageModel
{
    private readonly IBookFilterDropdownService _filterService;

    public FilterModel(IBookFilterDropdownService filterService)
    {
        _filterService = filterService;
    }

    public JsonResult OnGet(BooksFilterBy filterBy)
    {
        return new JsonResult(_filterService.GetFilterDropDownValues(filterBy));
    }
}

And my ajax call now has a url of '/Filter', as seen below

$.ajax({
   url: '/Filter',
   data: {
       FilterBy: filterByValue
   }
})
.done(function(result) {
   //… rest of code left out
});

I think this is much more SOLID. The filter has its own PageModel which just deals with the filter, and my Index PageModel isn’t ‘polluted’ with the AJAX filter code. I do need a Filter.cshtml to define the routing, but it’s pretty simple (and the ASP.NET Core default .cshtml is fine). This is my chosen solution because it’s much simpler and easy to change without any side-effects.

But I’m not finished yet, because we need to handle POST requests with the XSRF/CSRF security feature.

How to handle POST requests with the XSRF/CSRF security feature

Razor Pages provide an anti-forgery token/verification to stop possible security issues in HTTP POST page requests, such as forms (hurrah! – safe by default). The anti-forgery token (you may know it as the [ValidateAntiForgeryToken] attribute) stops cross-site request forgery (known as XSRF or CSRF).

However, for any AJAX POST we must provide the anti-forgery token ourselves. There are two parts to this:

1. Make sure the anti-forgery token is in the page where your AJAX request comes from

Razor will create the right token, if you ask it to, but there is no default for AJAX. Therefore, you need to do something to get razor to add the anti-forgery token. The easiest way is to use the Html helper @Html.AntiForgeryToken(), which will add the token. But if you already have a form tag which contains method=”post” then razor will have already added the anti-forgery token.

In my case I had a form, but it used the default HTTP GET, so I had to add the @Html.AntiForgeryToken() code to my razor page.
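As a small sketch of what that looks like in the page (the GET form here is illustrative):

```html
<form method="get">
    <!-- filter/search inputs – a GET form does NOT get a token added for you -->
</form>

@Html.AntiForgeryToken()
<!-- renders a hidden input named __RequestVerificationToken -->
```

The helper just needs to appear somewhere in the rendered page so your JavaScript can find the hidden input.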

NOTE: Microsoft documentation on anti-forgery token in JavaScript shows another way, by adding a function, but I am used to getting the value using JQuery. Either works, but I show the JQuery way.

2. Add the anti-forgery token to your request data

You need to add the anti-forgery token to your JQuery AJAX request. The default name that ASP.NET Core is looking for is RequestVerificationToken in the request header, and the value should be the anti-forgery token. Here is my modified JavaScript to add this.

$.ajax({
   url: '/Filter',
   type: 'POST',
   data: {
       FilterBy: filterByValue
   },
   headers: {
       'RequestVerificationToken':
           $('input:hidden[name="__RequestVerificationToken"]').val()
   }
})
.done(function(result) {
   //… rest of code left out
});

My approach uses JQuery to find the hidden anti-token input tag and extract the value to send back to ASP.NET Core. Having done that your AJAX POST will work – if you got either of those wrong you will get a status 400 on your POST AJAX request.
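For completeness, here is a sketch of what the server side could look like. It reuses the FilterModel from solution 2, with the handler renamed from OnGet to OnPost (my assumption – the article’s own filter uses GET; Razor Pages validates the anti-forgery token before OnPost runs):

```csharp
public class FilterModel : PageModel
{
    private readonly IBookFilterDropdownService _filterService;

    public FilterModel(IBookFilterDropdownService filterService)
    {
        _filterService = filterService;
    }

    //a POST handler: the anti-forgery token has already been checked
    //by the time this method is called
    public JsonResult OnPost(BooksFilterBy filterBy)
    {
        return new JsonResult(_filterService.GetFilterDropDownValues(filterBy));
    }
}
```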

Handling json data in POST

One of my readers asked how to handle an AJAX POST that sends json data in the request body, so I have added this section.

NOTE: See Andrew Lock’s detailed article “Model binding JSON POSTs in ASP.NET Core“, which explains why this is needed, and gives some more details on how to use [FromBody]. It’s worth reading.

Typically json is used to send more complex objects. In this example I use a Person class, taken from Andrew Lock’s article. The following JavaScript code shows the ajax call that sends a json content type – note that the contentType is set to json and the data has to be JSON.stringify’d to work.

var person = {"FirstName":"Andrew","LastName":"Lock","Age":"31"};
$.ajax({
   url: '/People',
   type: 'POST',
   contentType: 'application/json; charset=utf-8',
   headers: {
       'RequestVerificationToken':
           $('input:hidden[name="__RequestVerificationToken"]').val()
   },
   data: JSON.stringify(person)
})
.done(function(result) {
   //… rest of code left out
});

The json data will be bound to a class, in this case a Person class, as shown below.

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

And here is a simple Razor PageModel that takes the json data and binds it to a new instance of that Person class.

public class PeopleModel : PageModel
{
    public void OnPost([FromBody]Person person)
    {
        //do something with the person class
    }
}

NOTE: The Microsoft Binding documentation points out that you can only have one property or parameter with the [FromBody] attribute, as the binding reads the input stream. This means you can’t have both a property and a method parameter with the [FromBody] – if you do the parameter will be null.
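As a sketch of that pitfall (the BadPeopleModel class is my hypothetical example, not from the original code):

```csharp
public class BadPeopleModel : PageModel
{
    [FromBody]
    public Person PersonData { get; set; } //this [FromBody] consumes the input stream...

    public void OnPost([FromBody]Person person) //...so this parameter will be null
    {
        //use only ONE [FromBody] target per request
    }
}
```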


Conclusion

It took me a little while to work out the best way to handle AJAX requests in Razor Pages – I even learned a few more things writing this article! But now I have an approach that works, and it follows the SOLID software principles.

I really like Razor Pages, mainly because they separate each page into its own, single-focused Razor Page (the structure of the classic controllers, with all actions mixed together, was messy and hard to work with). I may be writing more about Razor Pages in the future, so keep your eye out for more.

Happy coding.

PS. If you are using Razor pages you might be interested in two other articles.

Creating Domain-Driven Design entity classes with Entity Framework Core

Last Updated: December 8, 2018 | Created: March 6, 2018

This article is about how to apply a Domain-Driven Design (DDD) approach to the classes that the Entity Framework Core (EF Core) library maps to a database. It covers why DDD is useful with a database, and how you can implement a DDD approach to data persistence classes using EF Core.

This article fits into the other articles on building business logic with EF Core. Here are some links to the other articles on this topic.

TL;DR – summary

EF Core has a few new features that allow a DDD approach to building the classes that EF Core maps to a database (referred to as entity classes from now on).

There are a number of benefits to using DDD-styled entity classes, but the main one is that the DDD design moves the create/update code inside the entity class, which stops a developer from misinterpreting how to create or update that class.

The aims of this article

This article shows you how to build a DDD-styled entity class and then compares/contrasts the DDD version with the standard version. The parts are:

  • Setting the scene – what DDD says about object design and persistence
  • A look at what a DDD-styled entity class looks like
    • Comparing creating a new instance of a DDD-styled entity class
    • Compare altering the properties of a DDD-styled entity class
  • Summarising the DDD-style I have developed in this article
  • Looking at the pros and cons of DDD-styled EF Core entity classes
  • Other aspects of DDD not covered in this article
  • Conclusion

This article is aimed at developers who use Microsoft’s Entity Framework library, so I assume you are familiar with C# code and either Entity Framework 6 (EF6.x) or Entity Framework Core (EF Core) library. This article only applies to EF Core, as EF6.x doesn’t have all the features needed to “lock down” the entity class.

Setting the scene – what DDD says about object design and persistence

I read Eric Evans’s book Domain-Driven Design some years ago – it had a profound effect on me then, and it’s still a guiding light to many of my designs today. In this article I am going to focus on the part of DDD that affects the way you write and use the classes that EF Core maps to a database.

Eric Evans talks about persisting objects (classes in C#) to a database, and here are a few of the points he made about what his DDD approach should do:

  1. Present the client with a simple model for obtaining persistent objects (classes) and managing their life cycle.
  2. Your entity class design should communicate design decisions about object access.
  3. DDD has the concept of an aggregate, which is a group of entities that are linked to a root entity. DDD says the aggregates should only be updated via the root entity.

NOTE: Eric talks about a DDD repository – I don’t recommend a repository pattern over EF Core because EF Core itself already implements a repository/UnitOfWork pattern (see my article “Is the repository pattern useful with Entity Framework Core?” for a fuller explanation of this point).

I was already applying some of these rules in my EF6.x applications, but I couldn’t fully implement what Eric was talking about, especially around the aggregates. But now with EF Core I can, and after some experimenting I have developed a pattern for building entity classes. The rest of this article describes my solution to using a DDD approach to building entity classes, and why DDD is better than the standard approach.

The design of a DDD-style entity class

I’m going to start by showing you a DDD-styled entity class and then compare how it works against the standard way of building entity classes with EF Core. To help me I am going to use an example database I used with a web application that sells books (think super-simple Amazon). The figure below shows the database structure.

The top four tables are about a book, its authors and any reviews that book has, which is the part that this article looks at. I included the two tables at the bottom, as they are used in the business logic example, covered here.

NOTE: All the code in this article can be found in the GenericBizRunner GitHub repo. This repo contains the source of the GenericBizRunner NuGet library, plus an example ASP.NET Core application that uses the GenericBizRunner to run some business logic. You can find out more about the GenericBizRunner via the article “A library to run your business logic when using Entity Framework Core

I now create a Book entity class, which has relational links to the Review table and the Authors table, via the many-to-many BookAuthor linking table. The following code shows you the main structure of the DDD-styled Book entity class (you will see the actual code for the constructor and methods later in this article).

public class Book
{
    public const int PromotionalTextLength = 200;

    public int BookId { get; private set; }
    //… all other properties have a private set

    //backing fields for the two collections (covered in the backing fields section)
    private HashSet<Review> _reviews;
    private HashSet<BookAuthor> _authorsLink;

    //These are the DDD aggregate properties: Reviews and AuthorsLink
    public IEnumerable<Review> Reviews => _reviews?.ToList();
    public IEnumerable<BookAuthor> AuthorsLink => _authorsLink?.ToList();

    //private, parameterless constructor used by EF Core
    private Book() { }

    //public constructor available to developer to create a new book
    public Book(string title, string description, DateTime publishedOn,
        string publisher, decimal price, string imageUrl, ICollection<Author> authors)
    {
        //code left out
    }

    //now the methods to update the book’s value properties
    public void UpdatePublishedOn(DateTime newDate) …
    public IGenericErrorHandler AddPromotion(decimal newPrice, string promotionalText) …
    public void RemovePromotion() …

    //now the methods to update the book’s aggregates
    public void AddReview(int numStars, string comment, string voterName, DbContext context) …
    public void RemoveReview(Review review) …
}

Things to note from this code are:

  • BookId, and all the other properties, have private setters. That means the only way a developer can set the values is either via the public constructor, or via the methods shown later in the Book class.
  • Both of the one-to-many relationships, Reviews and AuthorsLink (these are the aggregates that DDD talks about), are available only as an IEnumerable<T> property. This means you cannot add/remove items from the collection – you have to use the access methods provided by the Book class.
  • EF Core needs a parameterless constructor, but it can have a private access modifier. That means no other code can create an instance via this parameterless constructor.
  • The only way you can create an instance of the Book class is via its public constructor. This requires you to provide all the information needed to create a valid book.
  • UpdatePublishedOn, AddPromotion and RemovePromotion are the methods that allow a developer to alter the existing book’s value properties.
  • AddReview and RemoveReview allow a developer to alter the existing book’s relational properties – known as aggregates in DDD.

The update methods just listed are referred to as access methods in this article. These access methods are the only way to change the properties and relationships inside the entity.

The net effect of all this is that the Book class is “locked down”, so the only way to create or alter it is via specific constructor(s) and appropriately named access methods. This contrasts with the standard way of creating/updating an entity class in EF Core, that is, create a default entity with its public parameterless constructor, and set values/relationships via the public setters on all the class’s properties. The next question is – why is this better?
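As a quick illustration of what “locked down” means in practice, here is a hypothetical snippet (existingAuthor is assumed to be an Author instance; the commented-out lines will not compile):

```csharp
var book = new Book("Domain-Driven Design", "A great book", new DateTime(2003, 8, 30),
    "Addison-Wesley", 40m, null, new List<Author> { existingAuthor });

book.UpdatePublishedOn(new DateTime(2003, 9, 30)); //OK – uses an access method

//book.PublishedOn = DateTime.Now;   //compile error – the setter is private
//book.Reviews.Add(new Review(...)); //compile error – IEnumerable<T> has no Add method
```

The compiler itself enforces the design decisions, rather than relying on developers remembering the rules.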

Comparing creating a new instance of a DDD-styled entity class

Let’s start by considering the code to create a new instance of the Book entity, first using the standard-style and then the DDD-styled class. The code takes some data input from a json file containing information on various books and builds a set of Book entities based on that data (this is my code to seed the database with some realistic data). I start with the original code that used the standard-styled entity class (i.e. not DDD-style).

a. Standard entity class with public constructor and properties

var price = (decimal) (bookInfoJson.saleInfoListPriceAmount ?? DefaultBookPrice);
var book = new Book
{
    Title = bookInfoJson.title,
    Description = bookInfoJson.description,
    PublishedOn = DecodePubishDate(bookInfoJson.publishedDate),
    Publisher = bookInfoJson.publisher,
    OrgPrice = price,
    ActualPrice = price,
    ImageUrl = bookInfoJson.imageLinksThumbnail
};

byte i = 0;
book.AuthorsLink = new List<BookAuthor>();
foreach (var author in bookInfoJson.authors)
{
    book.AuthorsLink.Add(new BookAuthor
    {
        Book = book, Author = authorDict[author], Order = i++
    });
}

b. DDD-styled entity class, with specific public constructor with parameters

Now, here is the same code, but using the DDD-styled entity class with its public constructor.

var authors = bookInfoJson.authors.Select(x => authorDict[x]).ToList();
var book = new Book(bookInfoJson.title,
    bookInfoJson.description,
    DecodePubishDate(bookInfoJson.publishedDate),
    bookInfoJson.publisher,
    ((decimal?)bookInfoJson.saleInfoListPriceAmount) ?? DefaultBookPrice,
    bookInfoJson.imageLinksThumbnail,
    authors);

The constructor inside the Book class is shown below.

public Book(string title, string description, DateTime publishedOn,
    string publisher, decimal price, string imageUrl,
    ICollection<Author> authors)
{
    if (string.IsNullOrWhiteSpace(title))
        throw new ArgumentNullException(nameof(title));

    Title = title;
    Description = description;
    PublishedOn = publishedOn;
    Publisher = publisher;
    ActualPrice = price;
    OrgPrice = price;
    ImageUrl = imageUrl;
    _reviews = new HashSet<Review>();

    if (authors == null || !authors.Any())
        throw new ArgumentException(
            "You must have at least one Author for a book", nameof(authors));
    byte order = 0;
    _authorsLink = new HashSet<BookAuthor>(
        authors.Select(a => new BookAuthor(this, a, order++)));
}

Things to note from the DDD-styles constructor code are:

  • Lines 1 to 2: The constructor requires you to provide all the data needed to create a property initialised instance of the Book class.
  • Lines 5,6 and 17 to 19: The code has some business rule checks built in. In this case they are considered coding errors, so they throw an exception. If they were user-fixable errors I might use a static factory that returns Status<T>, where the Status type contains a list of errors.
  • Lines 21 to 23: The creating of the BookAuthor linking table is done inside the constructor. The BookAuthor entity class can have a constructor with an internal access modifier, which stops the code outside the DataLayer from creating a BookAuthor instance.

c. Comparing the two ways of creating an instance of the Book entity class

As you have seen, the amount of code to create an instance is similar in length in both cases. So, why is the DDD-style better? Here are my comments:

  • The DDD-style controls access. There is no possibility of changing a property by accident. Each change is done by a named method or constructor with defined parameters – it is very obvious what you are doing.
  • The DDD-style is DRY (don’t repeat yourself). You might need to create a Book instance in a few places. By putting all the code in the Book’s constructor then you don’t have to repeat it in different places.
  • The DDD-style hides complex parts. The Book has two price properties, ActualPrice and OrgPrice, and both must be set to the same value when a book is created. In the standard-style code the developer is required to know this fact, while the writer of the DDD-style class knows how it should work and can put that code inside the constructor.
  • The DDD-style hides the setup of the aggregate, AuthorsLink. In the standard-style class the code had to create the BookAuthor entity class, including the ordering. With the DDD-style version that complication is hidden from the caller.
  • The DDD-style means the property setters can be private. One reason for using a DDD-style is to “lock down” the entity, i.e. you cannot alter any of the properties or relationships directly.

This last point moves us on to looking at the DDD-style for altering an entity’s properties.

Compare altering the properties of a DDD-styled entity class

One of the advantages Eric Evans cites for a DDD-style entity is “They communicate design decisions about object access”. I understand this to mean that the design of your entity class should a) make it obvious how to change data inside an entity, and b) make it obvious when you shouldn’t change specific data in an entity. So let’s compare two different updates that my business rules say are allowed – one is simple and the other is a bit more complicated.

1. Changing the publication date of a book

Say we want to preview a book that will be published later. You might have a draft publish date, but it’s likely to change. We therefore need a way to change the PublishedOn property in an instance of a book.

a. Standard entity class with public properties

The standard-style entity class works by setting a property via its public setter, as shown in the following code.

var book = context.Find<Book>(dto.BookId);
book.PublishedOn = dto.PublishedOn;        

b. The DDD-styled entity class with access methods

In the DDD-styled class the properties have private setters, so I have to set things via the access method.

var book = context.Find<Book>(dto.BookId);
book.UpdatePublishedOn(dto.PublishedOn);

There is nothing obviously different about these – in fact the DDD-style is a bit longer, as the UpdatePublishedOn method must be written (OK, it’s only two lines of code, but it’s more work). But there is a difference – in the DDD-style entity class you know you can change the publication date because there is a method with an obvious name, UpdatePublishedOn. You also know you’re not allowed to change the Publisher property, or any other property that doesn’t have an access method. That is helpful to any developer that needs to interact with an entity class.
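For reference, the UpdatePublishedOn access method can be as simple as this sketch (my reading of the “two lines of code” mentioned above):

```csharp
public void UpdatePublishedOn(DateTime newDate)
{
    PublishedOn = newDate;
}
```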

2. Adding/removing a promotion to a book

The other business requirement is that we want to be able to add a promotion to a book. A promotion consists of a new (lower) price, and an explanation of why there is a promotion, such as “50% off all this week!”. Now, the implementation of this feature is efficient, but not obvious. Here are the rules:

  • The OrgPrice property holds its normal price – when there is no promotion the book should sell for the OrgPrice.
  • The ActualPrice property holds the price that the book is currently selling for – if there is a promotion then the ActualPrice holds the new price. If there is no promotion it should be set to the OrgPrice.
  • The PromotionalText property must hold text to show the customer when there is a promotion, otherwise it should be null when there is no promotion.

The rules are fairly obvious to the person who implemented this business feature, but they may not be so obvious to another developer when they come to implement the front-end code to add a promotion. By creating the DDD-style AddPromotion and RemovePromotion methods in the Book entity class the implementer of the feature can hide the complexities of this implementation. The user of the Book entity has obviously named methods to use.

Let’s look at the AddPromotion and RemovePromotion access methods, as there is some further learning to take from these.

public IGenericErrorHandler AddPromotion(decimal newPrice, string promotionalText)
{
    var status = new GenericErrorHandler();
    if (string.IsNullOrWhiteSpace(promotionalText))
    {
        status.AddError(
            "You must provide some text to go with the promotion.",
            nameof(PromotionalText));
        return status;
    }

    ActualPrice = newPrice;
    PromotionalText = promotionalText;
    return status;
}

Things to note from this code are:

  • The error check: it's deemed that the PromotionalText property must always have something in it, so the method checks that. Because it's an error that the user can fix, it returns an error to show the user.
  • The property updates: these set up the properties in the way the initial developer decided to implement the feature. The caller of the AddPromotion method doesn't need to know how the feature is implemented.

To add a new promotion you would write:

var book = context.Find<Book>(dto.BookId);
var status = book.AddPromotion(newPrice, promotionText);
if (!status.HasErrors)
    context.SaveChanges();
return status;

The RemovePromotion is much simpler, and there is no possibility of a user-fixable error, so the method's return type is void:

public void RemovePromotion()
{
    ActualPrice = OrgPrice;
    PromotionalText = null;
}

These two examples are quite different. The first example, setting the PublishedOn property, was so simple that the standard-styled class was fine. But the second example contained implementation details on how a promotion is added/removed, which wouldn't be obvious to anyone who hadn't worked on the Book entity. In that case the DDD-style access method hides that implementation detail, which makes it easier for the developer.

Also, the second version shows that some value updates can have a piece of business logic in them. For small pieces of business logic like this we can still use an access method by having the method return a status result, which contains any errors.

3. Handling aggregates – the Reviews collection property

The DDD approach says that aggregate entities should only be changed via the root entity. In our example the Reviews collection navigational property is an aggregate, which gives us a problem – even if we give the property a private setter, a developer could still add to, remove from or clear a property of type ICollection<T>. This is where EF Core's backing fields feature comes in.

The backing field feature allows a developer to hide the actual collection in a private field and expose the collection as an IEnumerable<T>, which doesn't allow adding, removing or clearing of the collection. The code below shows the part of the Book entity class that defines the Reviews collection as a backing field.

public class Book
{
    private HashSet<Review> _reviews;
    public IEnumerable<Review> Reviews => _reviews?.ToList();
    //… rest of code not shown
}
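To see why exposing IEnumerable<T> blocks changes, here is a minimal, self-contained sketch. The Order class and its string "lines" are invented for illustration; the Book/Review code above follows the same shape. Because the property returns a fresh copy, even a caller that casts the result back to a List only mutates the copy:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    // Backing field: the one true collection, hidden from callers
    private readonly List<string> _lines = new List<string>();

    // Read-only view: returns a copy, so callers cannot change the real list
    public IEnumerable<string> Lines => _lines.ToList();

    // The only way to change the collection is via an access method
    public void AddLine(string line) => _lines.Add(line);
}
```

For example, `((List<string>)order.Lines).Add("hacked")` compiles and runs, but it only changes the throw-away copy – the order itself still holds its original lines.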

The other thing you need to do is tell EF Core that, when reading in the Reviews relationship, it should write to the backing field, not the property. The configuration code to do that is shown below.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Book>()
        .Metadata.FindNavigation(nameof(Book.Reviews))
        .SetPropertyAccessMode(PropertyAccessMode.Field);
    //… other non-review configurations left out
}

To access the Reviews collection I added two methods, AddReview and RemoveReview, to the Book class. The AddReview method is the most interesting one, and is shown below:

public void AddReview(int numStars, string comment, string voterName,
    DbContext context = null)
{
    if (_reviews != null)
    {
        //The Reviews collection is loaded, so add to it
        _reviews.Add(new Review(numStars, comment, voterName));
    }
    else if (context == null)
    {
        throw new ArgumentNullException(nameof(context),
            "You must provide a context if the Reviews collection isn't valid.");
    }
    else if (context.Entry(this).IsKeySet)
    {
        //The Book is in the database, so create the relationship via the foreign key
        context.Add(new Review(numStars, comment, voterName, BookId));
    }
    else
    {
        throw new InvalidOperationException("Could not add a new review.");
    }
}

Things to note from this code are:

  • The _reviews != null branch: I purposely don't initialise the _reviews field in the private parameterless constructor that EF Core uses when loading the entity. This allows my code to detect whether the _reviews collection was loaded, say via the .Include(p => p.Reviews) method. (For the create case I initialise the _reviews collection in the public constructor, so that Reviews can be added on creation.)
  • The context == null branch: if the Reviews collection has not been loaded then the code needs to use the DbContext, so this just checks it's set.
  • The IsKeySet branch: if the Book has already been written to the database, then it will have a valid key. In that case I use a quicker technique to add a Review, by setting the foreign key in the new Review instance and writing it out to the database (see section 3.4.5 in my book for this technique).
  • The final else: if none of these work then there is a code problem, so I throw an exception.

Note: I have designed all of my access methods to handle the case where only the root entity is loaded. It is up to the access method how it achieves the update of the aggregates – this could include loading additional relationships.

Summarising the DDD-style I have developed in this article

Let me summarise the design of the DDD-styled entity classes that works with EF Core.

  1. You provide public constructor(s) for developers to create a properly initialised instance of the class. If the code needed to create an instance includes possible user-fixable errors, then you will need a static factory method instead, which returns Status<T>, where T is the class it is creating.
  2. All properties should have a private setter, i.e. they are read-only from outside the class.
  3. For navigational properties that are collections you need to use backing fields and an IEnumerable<T> for the public property. That stops developers from changing the collection.
  4. You need to provide methods for all the different ways you want to change the data in the entity class, both property updates and aggregate updates. These access methods either return void if the process has no user-fixable errors in it, or a status value containing success or a list of errors if the code can detect user-fixable errors.
  5. The ‘scope’ of what the code in the entity class should take on is important. I have found limiting the methods to only changing the root entity and its aggregates is a good idea. I also limit my validation checks to making sure the entities I am updating are correctly formed, i.e. I don’t check outside business rules like stock availability etc. – that is the role of proper business logic code.
  6. The access methods must assume only the root entity has been loaded. If a method needs a relationship that isn't loaded, then it must load the extra relational data itself. This simplifies the calling pattern for all access methods.
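To illustrate item 1 in the list above, here is a minimal sketch of a static factory method that returns a status instead of throwing. The Status<T> class here is a simplified stand-in I invented for this example – the real library types will differ:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for a status type carrying a result plus any errors
public class Status<T>
{
    public T Result { get; private set; }
    public List<string> Errors { get; } = new List<string>();
    public bool HasErrors => Errors.Any();
    public void SetResult(T result) => Result = result;
}

public class Promotion
{
    public decimal NewPrice { get; private set; }
    public string Text { get; private set; }

    // Private constructor: the only way in is via the factory method
    private Promotion(decimal newPrice, string text)
    {
        NewPrice = newPrice;
        Text = text;
    }

    // Static factory method: a missing text is a user-fixable error,
    // so it is returned as a status rather than thrown as an exception
    public static Status<Promotion> Create(decimal newPrice, string text)
    {
        var status = new Status<Promotion>();
        if (string.IsNullOrWhiteSpace(text))
        {
            status.Errors.Add("You must provide some text to go with the promotion.");
            return status;
        }
        status.SetResult(new Promotion(newPrice, text));
        return status;
    }
}
```

The caller then checks HasErrors before using the Result, which gives the same check-then-save shape as the access-method calling pattern.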

This last item means there is a standard way of calling access methods – here are the two forms, starting with the access method in which no user-fixable errors could occur:

var book = context.Find<Book>(bookId);
book.UpdateSomething(someData);

If there could be an error then the code would be

var book = context.Find<Book>(bookId);
var status = book.UpdateThatCouldHaveErrors(someData);
if (!status.HasErrors)
    context.SaveChanges();
return status;

The pros and cons of DDD-styled EF Core entity classes

I always like to look critically at any design pattern or architecture, and here are my thoughts on using DDD-style entity classes with EF Core.

The good parts of using a DDD approach with EF Core

  • Using access methods to change value properties is clearer. The act of making all the properties have a private setter and then having to use meaningfully-named methods to change the properties is positive. The named access methods make it obvious what you can change, and going via a method means it can return an error status if it detects a problem.
  • Altering the aggregates via business-oriented methods works well too. Hiding the relationships, such as the Book’s one-to-many relationship to the Review class means that the details of how that update is done is kept inside the root entity class.
  • Using specific constructors to create entity instances ensures an entity is created properly. Moving the code for building a valid entity class into the class’s constructor reduces the likelihood of a developer incorrectly interpreting how a class should be initialized.

The bad parts of using a DDD approach with EF Core

  • My design includes some EF Core code inside the access methods, and this is considered an anti-pattern by some people. The problem is that your domain entities are now linked to the database access code, which in DDD terms isn’t a good thing. I found if I didn’t do this then I was relying on the caller to know what needed to be loaded, which breaks the separation of concerns rule.
  • The DDD CRUD code requires more code to be written. For simple updates, like the change to the publication date of a book, the DDD-styled access methods seem a little bit of an overkill. Is it worth it?

As you can tell, I do like the DDD-style, but it did take a while to get the format right. Now I have a pattern that I can apply to any applications I work on. I have tried this style of access methods on small applications and it works well, but I won’t fully know its pros/cons until I have used it in a sizeable project.

My decision to allow EF Core code inside the access methods was a tough call. I tried to keep EF Core commands out of the access methods, but this meant the caller of the method had to load the various navigational properties needed by the method, and if the caller didn't do that correctly the update could fail silently (especially on one-to-one relationships). That to me was not acceptable, so I allowed EF Core code inside some methods (but not constructors).

The other bad point was that DDD CRUD access methods require more code, and I'm still thinking about that one. Should I bite the bullet and make all updates go through a DDD access method, or is there a case for allowing some properties to have a public setter and allow direct change of the value? The purist in me would say always use access methods, but I know that there are a ton of boring updates to be done in a typical application, and it's easier to do them directly. Only a real project will tell me what works best.

Other aspects of DDD not covered in this article

This article is already very long so I’m going to finish here, but that means I will leave some big areas out. Some I have already written about and some I will be writing about in the future. Here are the things I have left out.

  • Business logic and DDD. I have used DDD concepts with business logic for some years, and with EF Core’s new features I expect to move some of the business logic into the DDD-styles entity classes. Read this article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”.
  • A DDD repository pattern. Eric Evans talks about building a repository pattern to hide your database access library. I have concluded that using a repository pattern with EF Core is a bad idea – you can read why in this article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”.
  • Multiple DbContexts/Bounded Contexts: I have been thinking a lot about splitting a database across multiple DbContexts, for instance having a BookContext that only knows about the Book entity and its aggregates, and a separate OrderContext that handles orders. I think the bounded context idea is very important, especially if you are building an application that needs to scale up as it grows. I haven't quite got my pattern right yet, but I expect to write an article about this in the future.


Conclusion

EF Core, with its backing field feature, allows the building of DDD-styled entity classes that are "locked down", i.e. the public constructor(s) define how to create an instance and properties can only be updated via access methods. The benefits are that it makes it more obvious how the entity class should be used, and that knowledge about the inner workings of the entity class is held inside the entity class itself.

It has taken me some time to get a DDD style that provides a consistent interface to the entity class access methods. Getting the access method pattern right is important because it makes the calls quicker and simpler to code. In any application that uses a database you will get a lot of CRUD (Create, Read, Update and Delete) database accesses, and you need to produce these quickly and easily – so a simple and consistent pattern will make you quicker at writing CRUD code.

NOTE: You can find the code in this article in GenericBizRunner GitHub repo. That repo also contains an example ASP.NET Core application where you can try changing a book using these access methods. Please do clone it and run it locally – it uses an in-memory Sqlite database so it should run anywhere.

Happy coding.

Is the repository pattern useful with Entity Framework Core?

Last Updated: January 22, 2019 | Created: February 20, 2018

I wrote my first article about the repository pattern in 2014, and it is still a popular post. This is an updated article that takes account of a) the release of Entity Framework Core (EF Core) and b) further investigations of different EF Core database access patterns.

  1. Original: Analysing whether Repository pattern useful with Entity Framework (May 2014).
  2. First solution: Four months on – my solution to replacing the Repository pattern (Sept 2014).
  3. THIS ARTICLE: Is the repository pattern useful with Entity Framework Core?
  4. Architecture of Business Layer working with Entity Framework (Core and v6).
  5. Creating Domain-Driven Design entity classes with Entity Framework Core.
  6. GenericServices: A library to provide CRUD front-end services from a EF Core database.
  7. Wrapping your business logic with anti-corruption layers – NET Core.

TL;DR – summary

No, the repository/unit-of-work pattern (shortened to Rep/UoW) isn’t useful with EF Core. EF Core already implements a Rep/UoW pattern, so layering another Rep/UoW pattern on top of EF Core isn’t helpful.

A better solution is to use EF Core directly, which allows you to use all of EF Core's features to produce high-performing database accesses.
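My rough reading of that claim (an illustrative mapping, not an official EF statement) is that the DbContext already gives you both halves of the pattern:

```
// Rep/UoW concept               EF Core equivalent
// ----------------              ------------------
// BookRepository.GetAll()   ->  context.Books             (a DbSet<T> is a per-entity repository)
// BookRepository.Add(book)  ->  context.Books.Add(book)
// UnitOfWork.Commit()       ->  context.SaveChanges()     (all tracked changes saved in one transaction)
```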

The aims of this article

This article looks at

  • What people are saying about the Rep/UoW pattern with EF.
  • The pro and cons of using a Rep/UoW pattern with EF.
  • Three ways to replace the Rep/UoW pattern with EF Core code.
  • How to make your EF Core database access code easy to find and refactor.
  • A discussion on unit testing EF Core code.

I'm going to assume you are familiar with C# code and either the Entity Framework 6 (EF6.x) or Entity Framework Core library. I do talk specifically about EF Core, but most of the article is also relevant to EF6.x.

Setting the scene

In 2013 I started work on a large web application specifically for healthcare modelling. I used ASP.NET MVC4 and EF 5, which had just come out and supported SQL Spatial types, which handle geographic data. At that time the prevalent database access pattern was a Rep/UoW pattern – see this article written by Microsoft in 2013 on database access using EF and the Rep/UoW pattern.

I built my application using Rep/UoW, but found it a real pain point during development. I was constantly having to ‘tweak’ the repository code to fix little problems, and each ‘tweak’ could break something else! It was this that made me research into how to better implement my database access code.

Coming more up to date, I was contracted by a start-up company at the end of 2017 to help with a performance issue with their EF6.x application. The main part of the performance issue turned out to be due to lazy loading, which was needed because the application used the Rep/UoW pattern.

It turns out that a programmer who helped start the project had used the Rep/UoW pattern. On talking to the founder of the company, who is very tech savvy, he said that he found the Rep/UoW part of the application quite opaque and hard to work with.

What people are saying against the repository pattern

In researching as part of my review of the current Spatial Modeller™ design I found some blog posts that make a compelling case for ditching the repository. The most cogent and well thought-out post of this kind is ‘Repositories On Top UnitOfWork Are Not a Good Idea’. Rob Conery’s main point is that the Rep/UoW just duplicates what Entity Framework (EF) DbContext gives you anyway, so why hide a perfectly good framework behind a façade that adds no value – what Rob calls ‘this over-abstraction silliness’.

Another blog is ‘Why Entity Framework renders the Repository pattern obsolete’. In this Isaac Abraham adds that repository doesn’t make testing any easier, which is one thing it was supposed to do. This is even truer with EF Core, as you will see later.

So, are they right?

My views on the pros and cons of repository/unit-of-work pattern

Let me try and review the pros/cons of the Rep/UoW pattern in as even-handed way as I can. Here are my views.

The good parts of the Rep/UoW pattern (best first)

  1. Isolate your database code: The big plus of a repository pattern is that you know where all your database access code is. Also, you normally split your repository into sections, like the Catalogue Repository, the Order Processing Repository, etc., which makes it easy to find the code for a specific query that has a bug or needs performance tuning. That is definitely a big plus.
  2. Aggregation: Domain Driven-Design (DDD) is a way to design systems, and it suggests that you have a root entity, with other associated entities grouped to it. The example I use in my book “Entity Framework Core in Action” is a Book entity with a collection of Review entities. The reviews only make sense when linked to a book, so DDD says you should only alter the Reviews via the Book entity. The Rep/UoW pattern does this by providing a method to add/remove reviews to the Book Repository.
  3. Hiding complex T-SQL commands: Sometimes you need to bypass EF Core’s cleverness and use T-SQL. This type of access should be hidden from higher layers, yet easy to find to help with maintenance/refactoring. I should point out that Rob Conery’s Command/Query Objects post can also handle this.
  4. Easy to mock/test: It is easy to mock an individual repository, which makes unit testing code that accesses the database easier. This was true some years ago, but nowadays there are other ways around this problem, which I will describe later.

You will note that I haven’t listed “replacement of EF Core with another database access library”. This is one of the ideas behind the Rep/UoW, but in my view it’s a misconception, because a) it’s very hard to replace a database access library, and b) are you really going to swap such a key library in your application? You wouldn’t put up a facade around ASP.NET or React.js, so why do that to your database access library?

The bad parts of the Rep/UoW pattern (worst first)

The first three items are all around performance. I’m not saying you can’t write an efficient Rep/UoW, but it’s hard work, and I see many implementations that have built-in performance issues (including Microsoft’s old Rep/UoW implementation). Here is my list of the bad issues I find with the Rep/UoW pattern:

    1. Performance – handling relationships: A repository normally returns an IEnumerable<T>/IQueryable<T> result of one type – for instance, in the Microsoft example, a Student entity class. Say you want to show information from a relationship that the Student has, such as their address. In that case the easiest way in a repository is to use lazy loading to read in the student’s address entity, and I see people doing this a lot. The problem is lazy loading causes a separate round-trip to the database for every relationship it loads, which is slower than combining all your database accesses into one round-trip. (The alternative is to have multiple query methods with different returns, but that makes your repository very large and cumbersome – see point 4.)
    2. Data not in the required format: Because the repository assembly is normally created near to the database assembly, the data returned might not be in the exact format the service or user needs. You might be able to adapt the repository output, but it’s a second stage you have to write. I think it is much better to form your query closer to the front-end and include any adaptation of the data you need (see more on this in the section “Service Layer” in one of my articles).
    3. Performance – update: Many Rep/UoW implementations try to hide EF Core, and in doing so don’t make use of all its features. For instance, a Rep/UoW would update an entity using EF Core’s Update method, which saves every property in the entity. Whereas, using EF Core’s built-in change tracking feature, it will only update the properties that have changed.
    4. Too generic: “The more reusable the code is, the less usable it is.” Neal Ford, from the book Building Evolutionary Architectures. The allure of the Rep/UoW comes from the view that you can write one generic repository and then use that to build all your sub-repositories, for instance Catalogue Repository, Order Processing Repository, etc. That should minimise the code you need to write, but my experience is that a generic repository works at the beginning, but as things get more complex you end up having to add more and more code to each individual repository.
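To make the first point concrete, here is a hedged sketch of the alternative to lazy loading: a projection query. The Student, Address and StudentListDto classes here are invented for illustration. Run against a real DbSet<Student>, the Select below is translated into a single SQL query with a join – one round-trip – whereas the lazy-loading version issues an extra query per student:

```csharp
using System;
using System.Linq;

public class Address { public string City { get; set; } }

public class Student
{
    public string Name { get; set; }
    public Address HomeAddress { get; set; }
}

public class StudentListDto
{
    public string Name { get; set; }
    public string City { get; set; }
}

public static class StudentQueries
{
    // Query object: projects straight into the DTO the page needs.
    // Against a DbSet<Student> this becomes one SELECT with a join;
    // lazy loading would instead trigger one address query per student.
    public static IQueryable<StudentListDto> MapToListDto(
        this IQueryable<Student> students)
        => students.Select(s => new StudentListDto
        {
            Name = s.Name,
            City = s.HomeAddress.City
        });
}
```

The same extension-method shape is what the query objects in the next section build on.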

To sum up the bad parts – a Rep/UoW hides EF Core, which means you can’t use EF Core’s features to produce simple, but efficient database access code.

How to use EF Core, but still benefit from the good parts of the Rep/UoW pattern

In the previous “good parts” section I listed isolation, aggregation, hiding, and unit testing, which a Rep/UoW did well. In this section I’m going to talk about a number different software patterns and practices which, when combined with a good architectural design, provides the same isolation, aggregation, etc. features when you are using EF Core directly.

I will explain each one and then pull them together in a layered software architecture.

1. Query objects: a way to isolate and hide database read code.

Database accesses can be broken down into four types: Create, Read, Update and Delete – known as CRUD. For me the read part, known as a query in EF Core, is often the hardest to build and performance tune. Many applications rely on good, fast queries such as a list of products to buy, a list of things to do, and so on. The answer that people have come up with is query objects.

I first came across them in 2013 in Rob Conery’s article (mentioned earlier), where he refers to Command/Query Objects. Also, Jimmy Bogard produced a post in 2012 called ‘Favor query objects over repositories’. Using .NET’s IQueryable type and extension methods, we can improve the query object pattern over Rob and Jimmy’s examples.

The listing below gives a simple example of a query object that can select the order in which a list of integers is sorted.

public static class MyLinqExtension
{
    public static IQueryable<int> MyOrder
        (this IQueryable<int> queryable, bool ascending)
    {
        return ascending
            ? queryable.OrderBy(num => num)
            : queryable.OrderByDescending(num => num);
    }
}

And here is an example of how the MyOrder query object is called.

var numsQ = new[] { 1, 5, 4, 2, 3 }.AsQueryable();

var result = numsQ
    .MyOrder(true)
    .Where(x => x > 3)
    .ToArray();

The MyOrder query object works because the IQueryable type holds a list of commands, which are executed when I apply the ToArray method. In my simple example I’m not using a database, but if we replaced the numsQ variable with a DbSet<T> property from the application’s DbContext, then the commands in the IQueryable<T> type would be converted to database commands.
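Because each step only adds to the stored commands, you can build a query in stages and it still executes as one query at the end. A small in-memory illustration of this deferred execution (the mechanics are the same against a database, where the final step becomes a single SQL query):

```csharp
using System;
using System.Linq;

public static class ComposeDemo
{
    public static int[] Run()
    {
        var numsQ = new[] { 1, 5, 4, 2, 3 }.AsQueryable();

        // Each step just adds commands to the IQueryable - no execution yet
        var filtered = numsQ.Where(x => x > 1);
        var sorted = filtered.OrderByDescending(x => x);

        // Only here, at ToArray, is the whole chain executed in one go
        return sorted.ToArray(); // [5, 4, 3, 2]
    }
}
```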

Because the IQueryable<T> type isn’t executed until the end, you can chain multiple query objects together. Let me give you a more complex example of a database query from my book “Entity Framework Core in Action”. The code below uses four query objects chained together to select, sort, filter and page the data on some books. You can see this in action on the live site.

public IQueryable<BookListDto> SortFilterPage
    (SortFilterPageOptions options)
{
    var booksQuery = _context.Books
        .AsNoTracking()
        .MapBookToDto()
        .OrderBooksBy(options.OrderByOptions)
        .FilterBooksBy(options.FilterBy, options.FilterValue);

    options.SetupRestOfDto(booksQuery);

    return booksQuery.Page(options.PageNum - 1, options.PageSize);
}

Query objects provide even better isolation than the Rep/UoW pattern because you can split up complex queries into a series of query objects that you can chain together. This makes them easier to write/understand, refactor and test. Also, if you have a query that needs raw SQL you can use EF Core’s FromSql method, which returns IQueryable<T> too.
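The Page query object used in the SortFilterPage method isn’t shown in this article, but a plausible generic implementation (my sketch, not necessarily the book’s exact code) looks like this:

```csharp
using System;
using System.Linq;

public static class PageExtension
{
    // Generic paging query object: works on any IQueryable<T>.
    // pageNumZeroStart is zero-based, matching the Page(options.PageNum - 1, ...) call.
    public static IQueryable<T> Page<T>(this IQueryable<T> query,
        int pageNumZeroStart, int pageSize)
    {
        if (pageSize <= 0)
            throw new ArgumentOutOfRangeException(nameof(pageSize),
                "pageSize must be greater than zero.");

        if (pageNumZeroStart != 0)
            query = query.Skip(pageNumZeroStart * pageSize);

        return query.Take(pageSize);
    }
}
```

Because it returns IQueryable<T>, the Skip/Take commands end up inside the same single database query as the rest of the chain.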

2. Approaches to handling Create, Update and Delete database accesses

The query objects handle the read part of the CRUD, but what about the Create, Update and Delete parts, where you write to the database? I’m going to show you two approaches to running a CUD action: direct use of EF Core commands, and using DDD methods in the entity class. Let’s look at a very simple example of an Update: adding a review to a book in my book app.

Note: If you want to try adding a review you can do that. There is a GitHub repo that goes with my book. To run the ASP.NET Core application: a) clone the repo, b) select branch Chapter05 (every chapter has a branch), and c) run the application locally. You will see an Admin button appear next to each book, with a few CUD commands.

Option 1 – direct use of EF Core commands

The most obvious approach is to use EF Core methods to do the update of the database. Here is a method that would add a new review to a book, with the review information provided by the user. Note: the ReviewDto is a class that holds the information returned by the user after they have filled in the review information.

public Book AddReviewToBook(ReviewDto dto)
{
    var book = _context.Books
        .Include(r => r.Reviews)
        .Single(k => k.BookId == dto.BookId);
    var newReview = new Review(dto.numStars, dto.comment, dto.voterName);
    book.Reviews.Add(newReview);
    _context.SaveChanges();
    return book;
}

The steps are:

  • Load the specific book, defined by the BookId in the review input, with its list of reviews.
  • Create a new review and add it to the book’s list of reviews.
  • Call SaveChanges, which updates the database.

NOTE: The AddReviewToBook method is in a class called AddReviewService, which lives in my ServiceLayer. This class is registered as a service and has a constructor that takes the application’s DbContext, which is injected by dependency injection (DI). The injected value is stored in the private field _context, which the AddReviewToBook method can use to access the database.

This will add the new review to the database. It works, but there is another way to build this using a more DDD approach.

Option 2 – DDD-styled entity classes

EF Core offers us a new place to add your update code to – inside the entity class. EF Core has a feature called backing fields that makes building DDD entities possible. Backing fields allow you to control access to any relationship. This wasn’t really possible in EF6.x.

DDD talks about aggregation (mentioned earlier), and that all aggregates should only be altered via a method in the root entity, which I refer to as access methods. In DDD terms the reviews are an aggregate of the book entity, so we should add a review via an access method called AddReview in the Book entity class. This changes the code above to:

public Book AddReviewToBook(ReviewDto dto)
{
    var book = _context.Find<Book>(dto.BookId);
    book.AddReview(dto.numStars, dto.comment,
        dto.voterName, _context);
    _context.SaveChanges();
    return book;
}

The AddReview access method in the Book entity class would look like this:

public class Book
{
    private HashSet<Review> _reviews;
    public IEnumerable<Review> Reviews => _reviews?.ToList();
    //...other properties left out

    //...constructors left out

    public void AddReview(int numStars, string comment,
        string voterName, DbContext context = null)
    {
        if (_reviews != null)
        {
            _reviews.Add(new Review(numStars, comment, voterName));
        }
        else if (context == null)
        {
            throw new ArgumentNullException(nameof(context),
                "You must provide a context if the Reviews collection isn't valid.");
        }
        else if (context.Entry(this).IsKeySet)
        {
            context.Add(new Review(numStars, comment, voterName, BookId));
        }
        else
        {
            throw new InvalidOperationException("Could not add a new review.");
        }
    }
    //... other access methods left out
}

This method is more sophisticated because it can handle two different cases: one where the Reviews collection has been loaded and one where it hasn’t. But it is faster than the original case, as it uses a “create relationship via foreign keys” approach if the Reviews are not already loaded.

Because the access method code is inside the entity class it can be more complex if need be, because it’s going to be the ONLY version of that code you need to write (DRY). In option 1 you could have the same code repeated in different places wherever you need to update the Book’s review collection.

NOTE: I have written an article called “Creating Domain-Driven Design entity classes with Entity Framework Core” all about DDD-styled entity classes. That has a much more detailed look at this topic. I have also updated my article on how to write business logic with EF Core to use the same DDD-styled entity classes.

Why doesn’t the method in the entity class call SaveChanges? In option 1 a single method contained all the parts: a) load entity, b) update entity, c) call SaveChanges to update the database. I could do that because I knew it was being called by a web action, and that was all I wanted to do.
With DDD entity methods you can’t call SaveChanges in the entity method because you can’t be sure the operation has finished. For instance, if you were loading a book from a backup you might want to create the book, add the authors, add any reviews, and then call SaveChanges so that everything is saved together.

Option 3: the GenericServices library

There is a third way. I noticed there was a standard pattern when using CRUD commands in the ASP.NET applications I was building, and back in 2014 I built a library called GenericServices, which works with EF6.x. In 2018 I built a more comprehensive version called EfCore.GenericServices for EF Core (see this article on EfCore.GenericServices).

These libraries don’t really implement a repository pattern, but act as an adapter pattern between the entity classes and the actual data that the front-end needs. I have used the original, EF6.x, GenericServices and it has saved me months of writing boring front-end code. The new EfCore.GenericServices is even better, as it can work with both standard styled entity classes and DDD-styled entity classes.

Which option is best?

Option 1 (direct EF Core code) has the least code to write, but there is a possibility of duplication, because different parts of the application may want to apply CUD commands to an entity. For instance, you might have an update via the ServiceLayer when the user changes things, but an external API might not go through the ServiceLayer, so you would have to repeat the CUD code.

Option 2 (DDD-styled entity classes) places the crucial update part inside the entity class, so the code is going to be available to anyone who can get an entity instance. In fact, because the DDD-styled entity class “locks down” access to properties and collections, everybody HAS to use the Book entity’s AddReview access method if they want to update the Reviews collection. For many reasons this is the approach I want to use in future applications (see my article for a discussion on the pros and cons). The (slight) downside is that it needs a separate load/save part, which means more code.

Option 3 (the EF6.x or EF Core GenericServices library) is my preferred approach, especially now I have built the EfCore.GenericServices version that handles DDD-styled entity classes. As you will see in the article about EfCore.GenericServices, this library drastically reduces the code you need to write in your web/mobile/desktop application. Of course, you still need to access the database in your business logic, but that is another story.

Organising your CRUD code

One good thing about the Rep/UoW pattern is it keeps all your data access code in one place. When swapping to using EF Core directly, then you could put your data access code anywhere, but that makes it hard for you or other team members to find it. Therefore, I recommend having a clear plan for where you put your code, and stick to it.

The following figure shows a Layered or Hexagonal architecture, with only three assemblies shown (I have left out the business logic, and in a hexagonal architecture you will have more assemblies). The three assemblies shown are:

  • ASP.NET Core: This is the presentation layer, either providing HTML pages and/or a web API. It has no database access code but relies on the various methods in the ServiceLayer and BusinessLayers.
  • ServiceLayer: This contains the database access code, both the query objects and the Create, Update and Delete methods. The service layer uses an adapter pattern and command pattern to link the data layer and the ASP.NET Core (presentation) layer. (see this section from one of my articles about the service layer).
  • DataLayer: This contains the application’s DbContext and the entity classes. The DDD-styled entity classes then contain access methods to allow the root entity and its aggregates to be changed.

NOTE: The library GenericServices (EF6.x) and EfCore.GenericServices (EF Core) mentioned earlier are, in effect, a library that provides ServiceLayer features, i.e. that act as an adapter pattern and command pattern between the DataLayer and your web/mobile/desktop application.

The point I want to make from this figure is that, by using different assemblies, a simple naming standard (see the word Book in bold in the figure) and folders, you can build an application where your database code is isolated and easy to find. As your application grows this can be critical.

Unit testing methods that use EF Core

The final part to look at is unit testing applications that use EF Core. One of the pluses of a repository pattern is that you can replace it with a mock when testing. Using EF Core directly removes that option (technically you could mock EF Core, but it’s very hard to do well).

Thankfully things have moved on with EF Core and you can simulate the database with an in-memory database. In-memory databases are quicker to create and have a known start point (i.e. empty), so it’s much easier to write tests against them. See my article, Using in-memory databases for unit testing EF Core applications, for a detailed look at how you can do that, plus a NuGet package called EfCore.TestSupport that provides methods to make EF Core unit tests quicker to write.


My last project that used the Rep/UoW pattern was back in 2013, and I have never used a Rep/UoW pattern again since then. I have tried a few approaches, a custom library called GenericServices with EF6.x, and now a more standard query object and DDD entity methods with EF Core. They are easy to write and normally perform well, but if they are slow it’s easy to find and performance tune individual database accesses.

In the book I wrote for Manning Publications I have a chapter where I performance tune an ASP.NET Core application that “sells” books. That process used query objects and DDD entity methods, and shows that this approach can produce great-performing database accesses (see my article Entity Framework Core performance tuning – a worked example for a summary).

My own work follows the query object pattern for reads, and DDD-styled entity classes with their access methods for CUD and business logic. I do need to use these in a proper application to really know if they work, but it’s promising. Watch this space for more on DDD-styled entity classes, architectures that benefit from them, and maybe a new library :).

Happy coding!

A library to run your business logic when using Entity Framework Core

Last Updated: January 21, 2019 | Created: January 19, 2018

This article describes a library to help write and run business logic in a .NET Core application where you are using Entity Framework Core (EF Core) library for database accesses. This follows on from my article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited”, where I describe my pattern for building business logic and showed something called a BizRunner. This article details a library called EfCore.GenericBizRunner that provides a much more comprehensive solution to the BizRunner described in that first article.

The EfCore.GenericBizRunner library is available as a NuGet package, and its source, with examples, is available on GitHub. The project is open-source (MIT licence).

Other articles about business logic and GenericBizRunner:

UPDATE: Version 3 of GenericBizRunner is out. Now works well with ASP.NET Core’s Web API.

The aims of this article

  • To tell you why the GenericBizRunner library/framework is useful.
  • To show you where you might use the GenericBizRunner library in your application.
  • To show you how to use the GenericBizRunner library via two examples in an ASP.NET Core application.
  • To provide you with links to the GitHub repo GenericBizRunner, which contains the code of the library, an example ASP.NET Core web application you can run locally to try the business logic code out, its README file, and the documentation in the project’s Wiki.

NOTE: I refer to the GenericBizRunner package as a “library”, because you can download it and install it. However, according to Neal Ford et al, technically the GenericBizRunner package is a framework because it calls your code, while a library is called by your code. I therefore use the term “framework” in contexts where that term is helpful.

I’m going to assume you are familiar with .NET, EF Core and ASP.NET Core, and this article is only worth reading if you are using the Entity Framework Core (EF Core) library for database accesses in your business logic.

NOTE: This is a long article covering all the library’s capabilities, so if you just want to see an example of what it can do then I suggest going straight to Example 1. Or you can look at the Quick Start guide in the GitHub Wiki.

Setting the scene

According to Eric Evans, of Domain-Driven Design fame, “The heart of software is its ability to solve domain (business)-related problems for its users”. He also goes on to say “When the domain (business problem) is complex, this is a difficult task, calling for the concentrated effort of talented and skilled people”. I totally agree, and I take business logic very seriously, which is why I have built a framework to help manage and run my business logic.

Over the years I have built some quite complex business logic and I have learnt that writing business logic is hard work. During those years I have progressively improved my business logic pattern to its current form, which I describe in the article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited” and in chapter 4 of my book “Entity Framework Core in Action” (use my discount code fccefcoresmith to get 37% off my book).

This article covers the next stage, where I have taken the common parts of my pattern and made them into a library. The diagram below describes a layered architecture, with the EfCore.GenericBizRunner framework acting as a  Command, Mediator and Adapter software pattern.

If you aren’t familiar with that sort of layered architecture I suggest you read the article “Architecture of Business Layer working with Entity Framework (Core and v6) – revisited” where I describe how and why I split up my code in this way.

What does the library/framework do?

As you can see by the diagram above, I try to isolate the business logic from the other layers. The EfCore.GenericBizRunner framework helps this by providing much of the code you need to isolate the business logic, yet easily use it in your presentation layer. The key elements of the framework are:

  • Better Dependency Injection: It allows the business logic to be used as a service that can be injected into a ASP.NET Core action using dependency injection (DI).
  • Templates: It provides templates for the business logic which include error handling and the definition of the particular in/out, sync/async, etc. options the business logic needs.
  • Anti-corruption layer: It provides isolation of the business logic from the presentation layer, via mapping DTOs (data transfer objects, also known as ViewModels in ASP.NET).
  • Controls database commits: It manages the call to EF Core’s SaveChanges method, with optional validation, so that the business logic doesn’t have to handle the database or any errors it produces.

The diagram below shows the GenericBizRunner framework being used to run the business logic for handling a customer’s e-commerce order, which I describe in Example 1. The GenericBizRunner framework acts as a ServiceLayer (see previous diagram) to isolate, adapt and control the business logic. (Click figure to see a bigger version).

I should also say that the EfCore.GenericBizRunner framework is a rewrite of a private library I built for EF6.x back in 2015. The EF6.x framework was private because it was quite complex and hard to explain, but I have used it in several projects. My experience from using the original framework has helped me to create a better GenericBizRunner library, and I have been able to take advantage of the improvements in ASP.NET Core and EF Core. The EfCore.GenericBizRunner framework is still complex inside, but it is easier to use than the original EF6.x framework.

There are many options in the library so I am going to explain what it does with two example pieces of business logic that are implemented in an ASP.NET Core application in the GitHub repo.

Example 1: Basic call of business logic

In my first example I want to show you all the steps in building your business logic and then calling it. To help me with this I am going to look at the business logic for placing an order in an e-commerce site that sells books. Order processing is typically quite complex, with requests to external systems. My example business logic isn’t that complicated, as I am only adding the order to the database, but it shows how the GenericBizRunner works. Here is a screenshot of the checkout page.

When the Purchase button is pressed, the ASP.NET Core action PlaceOrder is called. This action method obtains a BizRunner service instance, linked to my PlaceOrderAction business class, via the injection of the GenericBizRunner library’s IActionService interface. Let’s look at the ASP.NET Core action called PlaceOrder.

public IActionResult PlaceOrder(PlaceOrderInDto dto, 
    [FromServices]IActionService<IPlaceOrderAction> service)
{
    if (!ModelState.IsValid)
    {
        //model errors so return to checkout page, showing the basket
        return View("Index", FormCheckoutDtoFromCookie(HttpContext));
    }

    //This runs my business logic using the service injected into the Action's service parameter
    var order = service.RunBizAction<Order>(dto);

    if (!service.Status.HasErrors)
    {
        //If successful I need to clear the line items in the basket cookie
        //Then I show the order to confirm that it was placed successfully
        return RedirectToAction("ConfirmOrder", "Orders", 
            new { order.OrderId, Message = "Your order is confirmed" });
    }

    //Otherwise errors, so I need to redisplay the basket from the cookie
    var checkoutDto = FormCheckoutDtoFromCookie(HttpContext);      
    //This copies the errors to the ModelState
    service.Status.CopyErrorsToModelState(ModelState, checkoutDto);
    return View("Index", checkoutDto);
}
Some of the key lines are:

  • The [FromServices] attribute tells ASP.NET Core to use dependency injection to resolve the interface I have given. This creates what I call a BizRunner, with the business class instance also injected into that BizRunner via the IPlaceOrderAction generic interface parameter.
  • The service.RunBizAction<Order>(dto) call is where I use the BizRunner service to execute the business logic. The order processing business logic takes in a class called PlaceOrderInDto (defined in the BizLogic layer), and outputs an Order class, which is the database entity class, i.e. it holds the information written to the database.
  • The BizRunner feeds back any errors that the business logic raises, or any validation errors found when writing to the database. You can see me checking the service.Status.HasErrors property and taking the appropriate actions.
  • You may notice the extension method called CopyErrorsToModelState, which I wrote and which is part of the EfCore.GenericServices.AspNetCore library. This takes the errors returned by the business logic, which come as a collection of ValidationResults, and copies them into the ModelState so ASP.NET can display them back to the user, hopefully against the actual field that caused the problem. If you were using a Web API you would need a method to return these errors to the calling application.

What business logic templates does it have?

The GenericBizRunner library provides a template for your business logic. There are twelve combinations of business logic formats, each represented by an interface:

Interface                               | In  | Out | Async? | Write to Db
----------------------------------------|-----|-----|--------|------------
IGenericAction<TIn, TOut>               | Yes | Yes | No     | No
IGenericActionAsync<TIn, TOut>          | Yes | Yes | Yes    | No
IGenericActionWriteDb<TIn, TOut>        | Yes | Yes | No     | Yes
IGenericActionWriteDbAsync<TIn, TOut>   | Yes | Yes | Yes    | Yes
IGenericActionInOnly<TIn>               | Yes | No  | No     | No
IGenericActionInOnlyAsync<TIn>          | Yes | No  | Yes    | No
IGenericActionInOnlyWriteDb<TIn>        | Yes | No  | No     | Yes
IGenericActionInOnlyWriteDbAsync<TIn>   | Yes | No  | Yes    | Yes
IGenericActionOutOnly<TOut>             | No  | Yes | No     | No
IGenericActionOutOnlyAsync<TOut>        | No  | Yes | Yes    | No
IGenericActionOutOnlyWriteDb<TOut>      | No  | Yes | No     | Yes
IGenericActionOutOnlyWriteDbAsync<TOut> | No  | Yes | Yes    | Yes

As you can see, the GenericBizRunner library provides interfaces that handle all the different types of business logic you could write. The library decodes what the business logic needs and checks that the data/call you use matches its interface.

The business logic is expected to implement the GenericBizRunner’s IBizActionStatus interface, which the business logic can use to report errors and other status items to the BizRunner. The GenericBizRunner library provides an abstract class called BizActionStatus, which the business logic can inherit to provide that functionality. Here is an example of an In-and-Out, synchronous method that doesn’t write to the database, with all the key component parts pointed out (the business logic class shown is one I use in my unit testing, so the code inside the method is very simple).
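The example was shown as an annotated figure in the original; as a rough sketch (the class and DTO names BizActionInOut, BizDataIn and BizDataOut are my assumptions for illustration), such a business logic class looks like this:

```csharp
public class BizActionInOut :                 //sketch only - names are assumed
    BizActionStatus,                          //inherits BizActionStatus for error/status handling
    IGenericAction<BizDataIn, BizDataOut>     //implements one of the twelve interfaces
{
    //The interface defines a single BizAction method
    public BizDataOut BizAction(BizDataIn inputData)
    {
        if (inputData.Num < 0)
        {
            //errors are reported via the inherited BizActionStatus
            AddError("The number must not be negative.");
            return null;
        }
        Message = "All Ok";                   //optional success message
        return new BizDataOut { Output = $"Result = {inputData.Num}" };
    }
}
```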

The code for the order processing business logic is much more complicated; you can find it in the GitHub repo in PlaceOrderAction.cs. Also, if you clone the GitHub project and run the application locally, the ASP.NET Core web app allows you to try out the order example just described and the next example below.

Example 2: Using the anti-corruption layer

In my second example I want to show you how the anti-corruption layer part of the GenericBizRunner framework works. The term “anti-corruption layer” is a Domain-Driven Design concept, and normally relates to separating bounded contexts (see this article on the DDD use of the term). I am not using my anti-corruption layer between bounded contexts, but to stop Presentation Layer concepts/data from having any impact on my business logic.

NOTE: See the article Wrapping your business logic with anti-corruption layers – NET Core for a longer discussion of the anti-corruption concept.

To show the anti-corruption layer in action I have built a piece of business logic that allows the delivery date of an order to be changed. The business logic applies certain business rules to any change, and ultimately it would also need to access an external site, such as a courier service, to check if the delivery date is possible.

Before I describe the code, with its use of the anti-corruption layer feature, let me first show you a screenshot of the ChangeDelivery page that the user would see.

To make this work properly the Presentation layer needs to show a dropdown list, which isn’t something that the business logic should even be thinking about – it’s a Presentation layer problem. This is where the GenericBizRunner library provides a couple of classes that implement the anti-corruption layer feature.

These anti-corruption layer classes are abstract classes that you can inherit, which then allow you to deal with the presentation layer item (known as the presentation-focused DTO). The abstract class makes you define the business logic’s associated class, known as the business-focused DTO. You will see this in action later in this example, but first let’s look at the stages in the “Change Delivery” example.

This screenshot shows the date that an order is going to be delivered on being changed from 16/1/2018 (see second line) to 17/1/2018. The stages in this are:

  1. Show the ChangeDelivery input page – the ChangeDelivery HTTP GET request
  2. Get the user’s input and run the business logic – ChangeDelivery HTTP POST request
  3. Copy the presentation-side DTO to the business DTO and run business logic
  4. Show a confirmation page if successful, or errors if failed.

Let’s look at the code for each of these sections, and see where and how the anti-corruption feature works.

1. Showing the ChangeDelivery input page

To produce the page shown in the screenshot above I need to read the current order and then display a list of possible dates in a dropdown. I do this by adding some properties, for the dropdown list etc., to a DTO class called WebChangeDeliveryDto. But how do these properties get filled in? This is where the abstract class in the GenericBizRunner library called GenericActionToBizDto comes in.

The class GenericActionToBizDto has a number of features, but the one I want to start with is the SetupSecondaryData method. This is called by the BizRunner to set up the presentation properties. Here is the code from my WebChangeDeliveryDto class.

public class WebChangeDeliveryDto : 
    GenericActionToBizDto<BizChangeDeliverDto, WebChangeDeliveryDto>
{
    public int OrderId { get; set; }

    public string UserId { get; set; }

    public DateTime NewDeliveryDate { get; set; }

    //Presentation layer items

    // … various properties removed to focus on the SelectList

    public SelectList PossibleDeliveryDates { get; private set; }

    protected override void SetupSecondaryData(DbContext db, IBizActionStatus status)
    {
        var order = db.Set<Order>()
            .Include(x => x.LineItems).ThenInclude(x => x.ChosenBook)
            .SingleOrDefault(x => x.OrderId == OrderId);

        if (order == null)
        {
            status.AddError("Sorry, I could not find the order you asked for.");
            return;
        }

        // … other code removed to focus on creating the SelectList

        PossibleDeliveryDates = new SelectList(
            /* … the list of possible dates – code removed … */);
        var selected = PossibleDeliveryDates
            .FirstOrDefault(x => x.Text == NewDeliveryDate.ToString("d"));
        if (selected != null)
            selected.Selected = true;
    }
    //... other code left out
}

The part I want you to see is the SetupSecondaryData method, at the bottom of the class. This is executed by two BizRunner methods:

  • GetDto<T>: This creates the DTO class, optionally sets some properties (see the next code example below), and then runs the SetupSecondaryData method.
  • ResetDto: If you have errors and want to re-display the page, you need to run the SetupSecondaryData method again, which the BizRunner’s ResetDto method does.

The idea is that the SetupSecondaryData method sets up any properties in the DTO which are needed to build the page shown to the user.

The code below displays the ChangeDelivery page. You can see it using the BizRunner’s GetDto<T> method to set up the presentation data.

public IActionResult ChangeDelivery(int id, 
    [FromServices]IActionService<IChangeDeliverAction> service)
{
    var dto = service.GetDto<WebChangeDeliveryDto>(x =>
    {
        x.OrderId = id;
        x.UserId = GetUserId(HttpContext);
    });
    service.Status.CopyErrorsToModelState(ModelState, dto); 
    return View(dto);
}

The key part is the call to the service.GetDto<WebChangeDeliveryDto> method. This will:

  1. Create a new instance of the WebChangeDeliveryDto class.
  2. Execute the action I provided as a parameter, which sets properties in the new WebChangeDeliveryDto.
  3. Run the SetupSecondaryData method in the WebChangeDeliveryDto class, which I described earlier.

NOTE: The code I wrote in the SetupSecondaryData method reports an error if the order was missing. I did this to show you how something like that would work. I needed to call the CopyErrorsToModelState extension method to copy any errors found into ASP.NET Core’s ModelState.

2. Get the user’s input and run the business logic

Once the user has selected the new delivery date and pressed the Change button, ASP.NET Core’s POST Action method is called. This does certain checks, but the primary thing we are interested in here is the running of the ChangeDeliverAction business class, which you can see in the code below.

public IActionResult ChangeDelivery(WebChangeDeliveryDto dto,
    [FromServices]IActionService<IChangeDeliverAction> service)
{
    if (!ModelState.IsValid)
    {
        //I have to reset the DTO, which will call SetupSecondaryData
        //to set up the values needed for the display
        service.ResetDto(dto);
        //return to the same view to show the errors
        return View(dto);
    }

    //This runs my business logic using the service injected in the
    //Action's service parameter
    service.RunBizAction(dto);

    //… rest of action left out. I deal with that in section 3.
}

3. Copy the presentation-side DTO to the business DTO and run business logic

One of my rules is that the Business Logic data classes should be isolated from other layers in the application. This is a separation of concerns principle, and helps me to focus just on the business issue I am trying to solve. To help me with this the GenericBizRunner library provides the abstract GenericActionToBizDto class for input and the abstract GenericActionFromBizDto class for output. These provide the anti-corruption layer between the business logic code and the presentation code.

In this “Change Delivery” example I needed a dropdown list of potential delivery dates, which is a presentation-focused item, so I produced a presentation-focused input class called WebChangeDeliveryDto (which you have seen already when I was talking about the SetupSecondaryData method). This class inherits from the abstract GenericActionToBizDto class, and right at the top you need to define which business logic DTO this class is linked to, in this case the BizChangeDeliverDto, as you can see in the class definition below.

public class WebChangeDeliveryDto :
    GenericActionToBizDto<BizChangeDeliverDto, WebChangeDeliveryDto>
{
    //… other code left out
}

The abstract class uses this definition to set up a mapping, via the AutoMapper library, between a presentation-layer focused DTO and the business-focused DTO. So, in this “Change Delivery” example, when the BizRunner is asked to run some business logic it notices that the input isn’t the BizChangeDeliverDto class that the business logic needs, but is a class that inherits from GenericActionToBizDto class. It then uses the CopyToBizData method inside the GenericActionToBizDto class to copy only the properties that the business logic’s DTO needs from the input class.

The default mapping that AutoMapper uses is “copy similarly named properties”, which in this case copies only the first three properties to the business input class called BizChangeDeliverDto. This is shown in the figure below.
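As a sketch of what that copy-by-name mapping means, the business-focused DTO would contain just the matched properties (the exact class is in the GitHub repo; this version is my simplified assumption):

```csharp
public class BizChangeDeliverDto
{
    //These three names match properties in WebChangeDeliveryDto,
    //so AutoMapper's copy-by-name mapping copies them across
    public int OrderId { get; set; }
    public string UserId { get; set; }
    public DateTime NewDeliveryDate { get; set; }

    //Note: no SelectList or other presentation-only properties -
    //those stay in the presentation-focused DTO
}
```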

This simple copy-by-name works 80% of the time, but the abstract classes have plenty of options to allow you to tweak the copy – see the article Using AutoMapper: Creating Mappings.

4. Show a confirmation page if successful, or errors if failed

Finally, I want to cover what happens when the business logic returns. This depends on what happened in the business logic. Let me show you the second half of the ChangeDelivery POST action.

public IActionResult ChangeDelivery(WebChangeDeliveryDto dto,
    [FromServices]IActionService<IChangeDeliverAction> service)
{
    //… first part of the action removed for clarity    

    if (!service.Status.HasErrors)
    {
        //We copy the message from the business logic to show 
        return RedirectToAction("ConfirmOrder", "Orders", 
            new { dto.OrderId, message = service.Status.Message });
    }

    //Otherwise errors, so I need to redisplay the page to the user
    service.Status.CopyErrorsToModelState(ModelState, dto);
    //I reset the DTO, which will call SetupSecondaryData to set up the display props
    service.ResetDto(dto);
    return View(dto); //redisplay the page, with the errors
}

Success path

If the business logic, and the write to the database, were successful then the service.Status.HasErrors property is false. In that case I want to show a confirmation page. In this example I include the service.Status.Message property, which contains a string defined by my business logic – in this example it says “Your new delivery date is 17/01/18”.

The Message property is one way to return confirmation messages when the business logic finishes successfully. If there were no errors it carries a success message, otherwise it has a message saying there were n errors. Note: there are several default messages for success and failure cases – see this documentation on the service.Status property.

Failure path

If there are errors, you normally want to redisplay the GET page with a list of errors. You have seen the CopyErrorsToModelState extension method in the first example – it copies the errors into the ASP.NET ModelState so that they will be displayed when the view is shown.

This “Change Delivery” example also needs to re-run the SetupSecondaryData method, so that the properties used by the View are set up. I do that using the BizRunner’s ResetDto method (see the code above).

Other features not covered in this article

This article is already very long, so I have left out several parts. Here is a list of features, with links to the documentation in the project’s Wiki.


Well done if you got to the end, as it’s a big article! The GenericBizRunner has many options to make it versatile, but that does make the documentation quite long (this was one reason I didn’t make the original EF6.x GenericBizRunner library open-source – too much documentation needed!). Hopefully this article, and the runnable example application in the EfCore.GenericBizRunner GitHub repo, will make it easier for you to understand the GenericBizRunner framework.

The core concept is that writing business logic is difficult, so isolating the business logic code from other layers allows the developer to concentrate on the business/domain problem they are trying to solve (see chapter 4 of my book for more on this concept). The GenericBizRunner helps me in four main ways:

  • DI-friendly. It is designed to make accessing business logic very simple, via specific interfaces and dependency injection.
  • Templates: Its set of interfaces provides a common format for your business logic patterns, including help with error reporting and status features.
  • Controls database writes: It provides a harness which can run any business logic that implements one of the GenericBizRunner’s interfaces, with optional writing to an EF Core database.
  • Anti-corruption layer feature: It allows the business logic to be isolated from the presentation layer through a copy system that extracts the data the business logic needs from a presentation-focused class.

This new EF Core version of the GenericBizRunner framework has several improvements from using the original EF6.x version I built some years ago. The experience of using that first framework showed me that the GenericBizRunner library was useful, so I put the time into building the EF Core version, and finally biting the bullet on writing the documentation. You can find out more about the EF Core library via the EfCore.GenericBizRunner GitHub repo and its Wiki.

Other useful articles/books

Happy coding!

Using in-memory databases for unit testing EF Core applications

Last Updated: April 18, 2019 | Created: December 16, 2017

While writing the book Entity Framework Core in Action I wrote over 600 unit tests, which taught me a lot about unit testing EF Core applications. So, when I wrote the chapter on unit testing, which was the last in the book, I combined what I had learnt into a library called EfCore.TestSupport.

Note: EfCore.TestSupport is an open-source library (MIT licence) with a NuGet package available. Documentation is available in the project’s Wiki.

In this article I want to look at using an in-memory database to simulate the database when you are unit testing. I also look at how to properly test your ASP.NET Core database update code, which has to handle what is called the disconnected state (explained later), with an in-memory database.

Your options for in-memory databases

There are several ways to simulate the database when unit testing, but one of the simplest is to use an in-memory database. In-memory databases are easy to set up and much quicker to create than a real database, but they do have a few limitations, which I will describe at the end. First I want to show you how they work.

The most obvious in-memory database is EF Core’s InMemory database provider, which was written specifically for unit testing. But it turns out the better option is using the SQLite database provider, with an in-memory database. Why is that?

It’s because SQLite is a true relational database while EF Core’s InMemory database provider isn’t. This means that EF Core’s InMemory database won’t pick up on data that breaks referential integrity, such as foreign keys that don’t match the appropriate primary key. About the only good thing about EF Core’s InMemory provider is that it’s slightly faster on setup than SQLite (about 40%), but both are very fast anyway, so the extra checking makes SQLite my choice.

Note: My EfCore.TestSupport library supports both InMemory and SQLite in-memory databases, so you can choose. But in this article I am only going to describe SQLite in-memory.
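To make the referential integrity point concrete, here is a sketch of a test (entity and property names are assumed for illustration) that fails on SQLite in-memory but silently succeeds on EF Core’s InMemory provider:

```csharp
//Sketch only: Review.BookId is assumed to be a foreign key to a Book
using (var context = new MyContext(options))
{
    context.Database.EnsureCreated();

    //Add a Review whose BookId doesn't match any existing Book
    context.Add(new Review { BookId = 999, NumStars = 5 });

    //With the SQLite provider this throws a DbUpdateException
    //("FOREIGN KEY constraint failed"), while EF Core's InMemory
    //provider saves the bad data without complaint
    context.SaveChanges();
}
```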

How to configure an in-memory SQLite database?

My EfCore.TestSupport library provides a pre-configured static method that sets up the options for creating an in-memory SQLite database. The code below contains a simplified version of that method.

public static DbContextOptions<T> CreateOptions<T>()
    where T : DbContext
{
    //This creates the SQLite connection string to an in-memory database
    var connectionStringBuilder = new SqliteConnectionStringBuilder
            { DataSource = ":memory:" };
    var connectionString = connectionStringBuilder.ToString();

    //This creates a SqliteConnection with that string
    var connection = new SqliteConnection(connectionString);

    //The connection MUST be opened here
    connection.Open();

    //Now we use the EF Core commands to create the SQLite options
    var builder = new DbContextOptionsBuilder<T>();
    builder.UseSqlite(connection);

    return builder.Options;
}

Note: The EfCore.TestSupport version has a few extra checks and options, but I left them out to focus on the in-memory bit.

How to use an in-memory database in a unit test

Having shown you the CreateOptions method, I now want to show you a simple unit test that uses it to create an in-memory version of the database.

Note: I am using the xUnit unit test package with fluent asserts.

[Fact]
public void TestSQLiteOk()
{
    //SETUP
    //Here create the options using SQLite CreateOptions
    var options = SqliteInMemory
        .CreateOptions<MyContext>();
    //I create an instance of my DbContext
    using (var context = new MyContext(options))
    {
        //You have to create the database
        context.Database.EnsureCreated();
        context.SeedDatabaseFourBooks();

        //ATTEMPT
        context.Books.First().Title = "New Title";
        context.SaveChanges();

        //VERIFY
        context.Books.First().Title.ShouldEqual("New Title");
    }
}

Using the pattern shown in the code above you can test all manner of EF Core code. Let me point out some parts of the code:

  1. Lines 6-7: This is using my EfCore.TestSupport methods to set up the options to use an in-memory SQLite database.
  2. Line 9: This is where you create an instance of your application’s DbContext.
  3. Line 12: The context.Database.EnsureCreated(), is very important. This creates the in-memory database using the current entity classes and EF Core configuration of your application. In this case it will return an empty database.
  4. Line 13: You often need to fill the database (often called seeding the database) with appropriate data for a test. It turns out seeding the database can be quite complex in a big application. I tend to spend time creating helper methods, like the SeedDatabaseFourBooks method in this example, to set up specific database states.
  5. Lines 16-17: This is where you put the code you want to test. I showed a very simple example – yours is likely to call methods from your own code.
  6. Line 20: Here is the assert to check the method did what you expected it to do.
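The article doesn't show the SeedDatabaseFourBooks helper itself, but a seed extension method along these lines is what I mean (a hypothetical sketch; the entity property names are illustrative):

```csharp
using System.Collections.Generic;

public static class SeedExtensions
{
    //Fills an empty database with four books, the last one having a review
    public static void SeedDatabaseFourBooks(this MyContext context)
    {
        for (int i = 1; i <= 4; i++)
        {
            context.Books.Add(new Book
            {
                Title = $"Book{i}",
                Reviews = i == 4
                    ? new List<Review> { new Review { NumStars = 5 } }
                    : new List<Review>()
            });
        }
        context.SaveChanges();
    }
}
```

Putting seeding in an extension method like this means every test that needs that database state is one line long, and there is only one place to change when the entity classes evolve.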

Using SQLite in-memory database for testing disconnected state updates

If you are using EF Core in an application such as ASP.NET Core, then you will have to test update methods that have to work across what is called the disconnected state. The disconnected state happens when an operation, normally an update, is split into two separate parts.

The figure below shows an example of the disconnected state where a user updates the publication date of a book. The first stage on the left is where the data is presented to the user to change. The second stage happens when the user presses the UPDATE button, which submits a new HTTP POST request to the web application, which uses the information sent back to do the update.

The important point about the disconnected state is you have two, separate instances of the application’s DbContext. And if you don’t test things in the same way you might miss some errors.

Both the InMemory and SQLite in-memory databases can handle this problem. In SQLite's case the database is held in the connection, so, as long as you use the same options, you can create new instances of the DbContext that map to the same database.

Let me give you an example of a bug that would only be picked up by having two instances of the application's DbContext. In this case I want to add a new review to a book. A Book entity has zero-to-many Reviews, and a user can add a new review with a star rating – see figure below.

So, the standard way to add a new Review in a disconnected state is to load the Book entity with all its Reviews, add the review to the collection and then call SaveChanges.
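In code, that standard approach looks like this (a sketch using the Book/Review class names from the example):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

//Load the Book WITH its existing Reviews, then add the new one
using (var context = new MyContext(options))
{
    var book = context.Books
        .Include(b => b.Reviews)   //without this Include the bug described below appears
        .Single(b => b.BookId == bookId);
    book.Reviews.Add(new Review { NumStars = 5 });
    context.SaveChanges();
}
```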

Now, if I get that wrong and don't load the current Reviews, I can get a number of possible errors, but only if I replicate the disconnected state properly.

Let me show you the correct way to test this disconnected state, which shows the error.

[Fact]
public void CorrectTestOfDisconnectedState()
{
    //SETUP
    var options = SqliteInMemory
        .CreateOptions<MyContext>();

    //In my first DbContext I seed the database
    int bookId;
    using (var context = new MyContext(options))
    {
        context.Database.EnsureCreated();
        context.SeedDatabaseFourBooks();
        bookId = context.Books.Last().BookId;
    }

    //In the second DbContext I test my disconnected update code
    using (var context = new MyContext(options))
    {
        //ATTEMPT
        var book = context.Books
            .Single(x => x.BookId == bookId);
        book.Reviews.Add(new Review {NumStars = 5});

        //… rest of unit test left out, as it has errored
    }
}

Note: the correct code for the book load (lines 21-22) would be
        var book = context.Books.Include(x => x.Reviews)
            .Single(x => x.BookId == bookId);

The point of all that is: if I had used one DbContext, seeded the database, and then run the test of the Review update code, it would have passed. That is because EF Core would already have been tracking the Reviews collections from the SeedDatabaseFourBooks method, so the Reviews collection would already be present.

Note: In some of my unit tests I need to use three instances of the DbContext to correctly test something. One for seeding the database, one for running the test, and a final one to verify that the database has been written correctly.
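Here is a sketch of that three-instance pattern, reusing the MyContext and seed method names from earlier. The key point is that the final check uses a fresh DbContext, so tracked entities can't hide a failed write:

```csharp
using System.Linq;
using TestSupport.EfHelpers;
using Xunit;

public class ThreeContextTests
{
    [Fact]
    public void TestUpdateWithThreeContexts()
    {
        var options = SqliteInMemory.CreateOptions<MyContext>();

        //1. Seed the database
        using (var context = new MyContext(options))
        {
            context.Database.EnsureCreated();
            context.SeedDatabaseFourBooks();
        }
        //2. Run the code under test
        using (var context = new MyContext(options))
        {
            context.Books.First().Title = "New Title";
            context.SaveChanges();
        }
        //3. Verify via a separate instance, so nothing is cached
        using (var context = new MyContext(options))
        {
            //ShouldEqual is from the xUnit fluent asserts package mentioned earlier
            context.Books.First().Title.ShouldEqual("New Title");
        }
    }
}
```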

Limitations of using SQLite to unit test EF Core

It’s likely that the actual database you use isn’t going to be SQLite. In that case you need to understand how SQLite differs from your actual database, especially if you use a feature that SQLite doesn’t support. The table below gives you the major areas that SQLite doesn’t support, or its support is so different that it will cause a problem.

These are all fairly advanced features, so if you don't recognise them then you aren't using them, and it's fine to use SQLite for unit testing.

SQL feature and SQLite support:

  • Different schemas: NONE
  • SQL Sequences: NONE
  • SQL Computed columns: Different
  • SQL user defined functions (UDFs): Different
  • SQL fragment default value: Different

Note: There are other differences, such as SQLite not having the same types as other databases; for instance, it stores strings in a TEXT type, while SQL Server has 8-bit varchar(nnn) and 16-bit nvarchar(nnn) types. These don't normally cause a problem, but be aware of them in case some odd unit test result comes up.

So, if you use any of the features in the table then you can’t use SQLite in-memory database (nor InMemory – it has even more limitations than SQLite). If you can’t use SQLite you have to use the same database type that you are using for the application in your unit tests.

My EfCore.TestSupport library has help for other database types, with extra helpers for SQL Server databases, but that most likely needs a new article to cover it.


Conclusion

This article shows you how you can simulate your database quickly and easily by using an in-memory database. I have shown you that SQLite is the best choice for an in-memory database, because it is a true relational database and will catch errors such as incorrect primary or foreign keys.

I also showed you how you can produce different instances of the application’s DbContext that all access the same in-memory database. This is useful for checking code that is used in applications such as ASP.NET Core, where you have a disconnected state.

There are some limitations to SQLite’s implementation of the SQL language, which might rule it out in your application. But for any ‘standard’ EF Core application using SQLite in-memory databases should be fine.

I hope this article, and my EfCore.TestSupport library, which is available as a NuGet package with documentation of all its features in the project’s Wiki, will help you write unit tests quicker.

Note: Chapter 15 of my book Entity Framework Core in Action is on unit testing EF Core applications and goes into the whole topic in much more detail (it’s 32 pages long!). Manning Publications have offered a coupon to get 37% off the book – use fccefcoresmith (Christmas 2017 – watch out for 50% off offers too).

Happy coding.

EF Core: taking full control of the database schema

Last Updated: December 7, 2017 | Created: November 23, 2017

Entity Framework Core (EF Core) is the new(ish) version of Microsoft’s widely-used database access library. EF Core allows you to map .NET classes and properties to database tables and columns respectively, and makes writing database access code much simpler than other alternatives, like ADO.NET.

EF Core maps classes to the database by building a model of the database structure, based on the software classes and configuration you set. This means that when you change the software structure or the database structure, known as the schema, then something must happen to make them match. EF Core provides various commands to do this – you can either change the database schema based on the software structure, or change the software structure based on the database schema.

In this article, I describe an alternative way to handle a change to a database’s schema from EF Core. My approach provides a tool that will check that EF Core’s model of the database matches the actual database schema. This allows a developer to take full control of the software structure and the database structure with the confidence that the tool will catch any mistakes they make in matching the two. I will now:

  • List the two ways EF Core can update the database, and add my alternate method.
  • Describe how my alternate method works
  • Compare each approach and look at their advantages and disadvantages
  • Detail my experience of using my alternative approach in real projects

Who controls the database schema?

There are two standard ways in which the database schema can be changed, or in EF Core terms, ‘migrated’.

  1. Code-first: Here the classes that are mapped to the database (known as entity classes) and EF Core’s configuration commands define what the database schema looks like. You can then run a command, such as Add-Migration, that creates code that can update the database’s schema.
  2. Database-first: For the situation where you have an existing database EF Core has a command called Scaffold-DbContext that will read the schema of a given database and then produce the entity classes and the EF Core configuration to match the database.

I call my alternate approach SQL-first, for reasons that will become obvious.

  1. SQL-first: In this approach SQL change scripts control the database structure changes. I then use a tool I have developed to compare EF Core’s entity classes and configuration with the database, highlighting any places where they differ from the database (or the database differs from the code).

The use of SQL change scripts to change a database’s schema is well known (and liked) by many DevOps people. In fact, Microsoft recommend using SQL change scripts, created via EF Core’s migration feature, to update a production database.

The problem with writing your own SQL change scripts is making sure your changes to the database match what EF Core thinks the database looks like. This is where my new tool comes in, and in the next section I describe how I make the SQL-first approach work.

How does the SQL-first approach work?

In this part I am going to describe the steps in using the SQL-first approach. Let’s assume I want to make a change to my application to support a new feature. I work out how that affects the database and then make the following changes:

  • I change my EF Core entity classes and configuration to the form I want.
  • I create a new SQL change script that will update an existing database schema with the same change.

It doesn’t matter if I change the classes first, or the database first. What does matter is: do the database’s schema and EF Core’s internal model of the database schema match? That is where my EF Core tool, which I refer to as the EfSchemaCompare feature, comes into play.

The following figure shows the steps to check that a SQL change script matches EF Core’s view of the database. If you follow items 1 to 5 on the diagram it will step you through the process of ensuring the database and the software match.

As you can see, my EfSchemaCompare tool checks that my changes to the EF Core code and the update that my SQL script applied to the database match. If they don’t, then I get a list of error messages that point out what is wrong. I fix them, either in the database or my software, and run the process again.

I have not defined how you apply the SQL change scripts, but there are plenty of packages that can do that (and more). I personally have used the open-source library DbUp, which I found very useful. Whatever script application package you use needs to know which scripts have been applied, and only apply scripts that haven’t already been applied to the database.
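As an illustration, applying outstanding scripts with DbUp looks roughly like this (a sketch; the scripts are assumed to be embedded resources in the running assembly):

```csharp
using System.Reflection;
using DbUp;

public static class DatabaseMigrator
{
    public static bool ApplyOutstandingScripts(string connectionString)
    {
        //DbUp records applied scripts in a journal table in the database,
        //so only scripts that haven't run yet are executed
        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .WithTransaction()
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        return result.Successful;
    }
}
```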

How to run the EfSchemaCompare feature

I have added the EfSchemaCompare feature to my NuGet/GitHub project EfCore.TestSupport. This library contains a whole range of methods and classes to help you unit test an application that uses EF Core for database access. As I want to check that EF Core’s view of the database matches the test database in my unit tests, putting the feature inside the EfCore.TestSupport library makes sense to me.

The code below shows an xUnit unit test that checks a new database against EF Core’s view of the database. In this case there should be no errors, because the database was created using EF Core.

[Fact]
public void CompareViaContext()
{
    using (var context = new BookContext(_options))
    {
        //These two commands ensure I have a new database that matches EF Core
        context.Database.EnsureDeleted();
        context.Database.EnsureCreated();

        var comparer = new CompareEfSql();

        //This will compare EF Core's model of the database
        //with the database that the context's connection points to
        var hasErrors = comparer.CompareEfWithDb(context);

        //The CompareEfWithDb method returns true if there were errors.
        //The comparer.GetAllErrors property returns a string
        //where each error is on a separate line
        hasErrors.ShouldBeFalse(comparer.GetAllErrors);
    }
}

In this case the hasErrors variable will be false, because the database was created by EF Core and therefore matches the EF Core Model. But in cases where the test fails I have arranged, via the comparer.GetAllErrors property, that the xUnit runner will list all the errors in the unit test output window. (The ShouldBeFalse extension method is part of the xUnit Fluent asserts. It takes a second parameter that is a string, which is output if the test fails).

Here is an alternative method where I supply a connection string to a database that I want to check. The connection string could point to any database you like, such as your production database. That gives you early warning that this version of your application cannot be deployed without first running a migration on the database.

[Fact]
public void CompareViaConnection()
{
    using (var context = new BookContext(_options))
    {
        var comparer = new CompareEfSql();

        //The connection string points to a database that you want to check
        var hasErrors = comparer.CompareEfWithDb(connectionString, context);

        hasErrors.ShouldBeFalse(comparer.GetAllErrors);
    }
}


If this process finds any errors you get a series of error messages. Here are some examples:

  1. DIFFERENT: MyEntity->Property ‘MyString’, column type. Expected = varchar(max), found = nvarchar(max)
    This says it found a difference in the column type of the property MyString in my entity class, MyEntity. EF Core thinks the SQL type should be varchar(max), that is an 8-bit ASCII storage type, while the actual database has nvarchar(max), that is a 16-bit Unicode storage type.
  2. NOT IN DATABASE: Entity ‘LineItem’, table name. Expected = LineItems
    This says the table LineItems, which the entity class called LineItem maps to, wasn’t found in the database.
  3. EXTRA IN DATABASE: MyEntity->PrimaryKey ‘PK_MyEntites’, column name. Found = MyEntityId
    This tells you that a column called MyEntityId, which is (part of) a primary key was found in the database, but wasn’t in EF Core’s list of primary key properties in the entity class, MyEntity.

Version 1.1.2 of the EfSchemaCompare feature uses about 20 different types of tests, of which seven are applied to each column. This isn’t the place to describe how the EfSchemaCompare feature works (see here for a detailed description) but it is quite versatile, with lots of options. For instance, you can suppress any errors that you don’t care about – say you didn’t care about any EXTRA IN DATABASE errors, then you can add that type of error to the LogsToIgnore.

The Pros and Cons of the three approaches

I like to critique any approach I create, because there are always trade-offs when there are multiple ways to do something. Here is a summary of the advantages and disadvantages of each approach, with references in brackets, e.g. (see 1), to sections with more information.

1. Code-first
   Advantages:
     • Easy for a software developer
     • Can use all EF Core features
   Disadvantages:
     • EF Migrations can be difficult (see 1)
     • Can’t access all SQL features (see 2)

2. Database-first
   Advantages:
     • Easy for a software developer
   Disadvantages:
     • Creates “poor” entity classes (see 3)
     • Can’t use some EF Core features (see 4)

3. SQL-first
   Advantages:
     • Can use all SQL features
     • Can use all EF Core features
     • Easy to use in automated updates
   Disadvantages:
     • Developer needs to know SQL (see 5)

Note: Currently the database-first approach in EF Core 2.0.0 requires you to delete all the code that EF Core generated when you used the Scaffold-DbContext command. That makes importing updates to the database rather tedious and time consuming. But a command that will “tweak” the generated code is on the EF Core roadmap and should be out sometime in 2018.

1. EF Migrations can be difficult

EF migrations work well for a simple system, but they start to get complex when used on a live site. There are two issues:

  1. If you have changes to the database that will lose data or change data, then EF Core can’t handle that on its own. You will need to add extra code to the migration to deal with it.
  2. The recommendation of the EF Core team is that migrations be applied via scripts, or a command line tool, in production. This means you are into handling SQL scripts anyway.

2. Can’t access all SQL features

EF Core uses the main SQL commands, like ALTER, to change a database’s schema. EF Core is richer in what SQL parts it can handle than EF6.x, with commands to set column defaults or computed columns, but there are still things it won’t do, like placing a CONSTRAINT on a value column.
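For instance, EF Core’s fluent API can configure column defaults and computed columns, but a CHECK constraint on a value column has to go in your own SQL change script (the Order entity and its properties here are made up for illustration; later EF Core versions did add a HasCheckConstraint method, but not the version current when this was written):

```csharp
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int OrderId { get; set; }
    public System.DateTime CreatedUtc { get; set; }
    public decimal Price { get; set; }
    public int Quantity { get; set; }
    public decimal Total { get; set; }
}

public class OrderContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //EF Core can set up a SQL column default...
        modelBuilder.Entity<Order>()
            .Property(o => o.CreatedUtc)
            .HasDefaultValueSql("getutcdate()");

        //...and a computed column...
        modelBuilder.Entity<Order>()
            .Property(o => o.Total)
            .HasComputedColumnSql("[Price] * [Quantity]");

        //...but a constraint such as CHECK (Quantity > 0)
        //has to be added via your own SQL change script
    }
}
```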

In addition, if a database change requires data must be moved or reformatted, then EF Core can’t help – you have to write SQL code to do that.

3. Creates “poor” entity classes

EF Core’s Scaffold-DbContext command produces entity classes which have a navigational property for every link between tables. Personally, I only define a navigational link from one entity class to another if there is a good business/development reason for it to be there. Having every possible navigational link in an entity class obscures the correct, business-based connections that I should be using.

4. Can’t use some EF Core features

EF Core’s Scaffold-DbContext command produces entity classes that are generic in nature, with every column mapped to a property with public read/write access. This means you can’t use some of EF Core’s features that allow you to ‘hide’ data and/or relationships. These can be very useful in building robust systems by limiting how certain data is accessed.

In addition, EF Core provides a number of table mapping features, table-per-hierarchy, table splitting and owned types, which the scaffolder can’t take advantage of.

5. Developer needs to know SQL

Yes, the SQL-first approach does need you as developer to understand SQL. For some that is a down side – it’s another language to learn. For those who don’t want to do that the EF Core methods are best.

But if you do want to use SQL and don’t know it that well (I’m not an expert), then you can get some help from EF Core. EF Core has two commands meant for unit testing, Database.EnsureDeleted and Database.EnsureCreated. By running those two, one after the other, you can get EF Core to output the SQL to create all the tables, constraints, indexes etc. that are needed to produce a database that exactly matches what EF Core needs. You can capture this SQL via EF Core’s logging (my library has an extension method to help with that), which will give you a set of SQL commands to act as a template for your SQL change script.
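If you are on a recent EF Core version, one simple way to capture that SQL is the LogTo option (LogTo arrived in EF Core 5; the EF Core versions current when this article was written needed an ILoggerProvider wired up instead). A sketch, using the BookContext from earlier:

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public static class SqlCaptureHelper
{
    //Returns the SQL commands EF Core runs to build the database from scratch
    public static List<string> CaptureCreateSql(string connectionString)
    {
        var logs = new List<string>();
        var options = new DbContextOptionsBuilder<BookContext>()
            .UseSqlServer(connectionString)
            .LogTo(log => logs.Add(log),
                   new[] { RelationalEventId.CommandExecuted })
            .Options;

        using (var context = new BookContext(options))
        {
            context.Database.EnsureDeleted();
            context.Database.EnsureCreated();
        }
        return logs;   //contains the CREATE TABLE, index and constraint commands
    }
}
```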

For small changes, like adding a column to an existing table, you need to snip out the relevant part from the SQL that EF Core produced and re-craft it into a SQL ALTER TABLE command. But at least you have the correct names, types and constraints that EF Core expects to see.

My experience of using the SQL-first approach

This isn’t the first time I have developed an EfSchemaCompare feature – I produced such a tool for EF6.x back in early 2016, and I have used it in anger on two projects. It worked well, even on a multi-person project. The benefits were:

  • Complete control over the database schema
  • Complete control over the EF entity classes and configuration
  • Ability to add/alter data via SQL scripts
  • Can produce a new database much quicker than EF can (useful for testing)
  • SQL Scripts are great for production
  • I can use the EfSchemaCompare tool to check my production database matches before deploying new code
  • If done carefully, migrations can be run out-of-order – useful when different developers are working on different parts of the database.

The new EF Core version of the EfSchemaCompare feature is far superior to my EF6.x version because of EF Core itself. Firstly, EF6.x’s access to the database model is poor, while EF Core’s is brilliant – it allows access to every aspect of the database mapping. This means I can handle column SQL defaults, computed columns, and the table mapping features: table-per-hierarchy, table splitting and owned types. My EF6.x version couldn’t handle any of those.

Secondly, I could access EF Core’s scaffold service that is used by the Scaffold-DbContext command. This means my compare feature will work with any relational database that EF Core supports (my old EF6.x version only worked with SQL Server). That is a great feature to have.

Also, my second implementation is better because I learnt what I needed from using the EF6.x version. My EF Core version now provides more comprehensive coverage, optional suppression of specific errors, and handles multiple DbContexts mapped to the same database (known as bounded contexts). It has a small number of limitations – see here for the latest list.


Conclusion

In this article, I have described the alternative SQL-first approach to changing a database’s schema when working with EF Core as your database access library. The SQL-first approach has some pros and cons, but I like it because it allows me to use all the great features of EF Core, and exactly define my SQL database the way I want to.

My experience of using the SQL-first approach with EF6.x was very positive, which made it a priority to create an EF Core version, which I have now done. EF Core’s features, such as the Model property and access to EF Core’s internal services, made it easier to create an EfSchemaCompare tool, and that tool provides better checks than the EF6.x version did.

The EfSchemaCompare tool is available via the open-source (MIT licence) EfCore.TestSupport package on NuGet, and the code and documentation can be found on GitHub.

Happy coding.

EF Core – Combining SQL and NoSQL databases for better performance

Last Updated: December 24, 2019 | Created: October 24, 2017

This is the second article in my series on Entity Framework Core (EF Core) performance tuning. In this article I take that performance even further than part 1 did, by changing the applications to a two-database CQRS (Command Query Responsibility Segregation) architecture, where one database is SQL and the other is NoSQL (I explain what that architecture looks like later in the article).

  1. Entity Framework Core performance tuning – a worked example.
  2. EF Core – Combining SQL and NoSQL for better performance (this article).
  3. NEW IN 2019! Building a robust CQRS database with EF Core and Cosmos DB. Improved design and uses EF Core’s Cosmos DB database provider.

Note: This article is from chapter 14 (which will be available from the end of November 2017) of the book “Entity Framework Core in Action” that I am writing for Manning Publications. Chapter 14 contains more on the EF Core parts of the application mentioned in this article. The book is on early access, which means you can buy the book now and you get the chapters as I write them – plus the finished book when it is complete.

NOTE: Manning Publications have offered a coupon to get 37% off my book, “Entity Framework Core in Action” – use fccefcoresmith.

Executive summary for those in a hurry

I have an example ASP.NET Core application that uses EF Core to display a set of books – called my book selling site. In this article, I look at the performance of the book selling site after converting it to a CQRS (explained below) architecture. In this design I write data to a primary SQL Server database, but also build a read-side NoSQL database which the application queries use to show the list of books.

I run various queries to test the differences in performance, but the main one is sorting the books by their average review votes and then showing the top 100. I do this for three sizes of database:

  • 100,000 books, which has ½ million book reviews.
  • 250,000 books, which has 1.4 million book reviews.
  • 500,000 books, which has 2.7 million book reviews.

The “sort by average review votes” query is particularly difficult for a SQL-only database, as it must dynamically calculate the average votes value of all the book reviews in the database – and as you can see, my test data has a lot of reviews!

The figure below shows the difference in performance of the “best-SQL” (hand-tuned SQL run by Dapper) against what I call the SQL+NoSQL CQRS version of my application, for the three sizes of databases.

You can try the SQL+NoSQL CQRS version on the live site; it has 250,000 books in its database.

Note: The site isn’t running on some massively powerful cloud system. The NoSQL database is running courtesy of RavenHQ (thanks to them for doing that) on the lowest level of server, so you won’t get the same performance as shown in the chart above.

Setting the scene – my example application

Note: If you read the first article then you already know this – skip to the next section, called “Explaining the new CQRS architecture and the SQL+NoSQL databases”.

Right from the second chapter of my book I introduce an example book selling site, and as the book progresses I add more attributes to the application. Admittedly the site only has a few features, but I purposely included some sorting and filtering features that can be a challenge when the site has a lot of books to show. The diagram below shows you the features of the site.

What makes the book list a challenge?

I tried to create an example application which was at least credible, and included some complicated queries. The diagram below shows you the listing of a book, and some information on how easy or complex it is to get that data from the database.

I ran a series of tests checking sorting, filtering and paging, but with this design they all had the same performance (unlike in the previous article). But I did run tests with different numbers of books and related data in the database.

Note: I did all my testing on my development PC using the ASP.NET Core example book selling site running in debug mode. The timings are from Chrome, and are the total time it takes for the page to return (including downloading the content). For comparison, accessing the About page, which has no database accesses in it, typically takes 11 ms to load, so the database accesses are always going to be slower than that.

Explaining the new CQRS architecture and the SQL+NoSQL databases

This article is about me upgrading my example book selling site to what is known as a CQRS (Command Query Responsibility Segregation) architecture using a polyglot database structure. Those terms might not be that useful if you haven’t heard them before, so let me give you a diagram and then explain them in a bit more detail.

The CQRS architecture acknowledges that the read-side of an application is different from the write-side. Reads are often complicated, drawing in data from multiple places, while the write side is often much simpler. Separating the code for each part can help you focus on the specific features of each part.

A “polyglot database structure” is one which uses a combination of different storage types: for instance, relational databases, NoSQL databases, flat files, and so on. The idea is that each database type has its strengths and weaknesses, and by using two or more different database types you can obtain a better overall system.

I think a polyglot CQRS architecture, with SQL for the write-side database and NoSQL for the read-side database, makes a lot of sense for business applications where there are more reads than writes: for instance, e-commerce applications, job sites, calendar/todo applications, etc.

The SQL write-side database makes sense because business applications often need relational data. Think about a real book selling site: it would have a LOT of complex, linked data to handle all the parts of the system, such as suppliers, inventory, pricing, orders, payment, delivery, tracking, audits and so on. I think a relational (e.g. SQL) database, with its high level of data integrity for relational data, is the right choice for many business problems.

But those relationships and some aspects of a SQL database can make it slow at retrieving data. My “sort by average votes” query is particularly hard on a SQL database because of how many book reviews it needs to look at to sort by average votes. And I have seen equally complex queries in other systems I have built that don’t scale well either.

One answer is to create a read-side database that contains the exact data that the user wants to see, or sort/filter on. This could be another SQL database (I did that in Part3 of the previous article), but it makes a lot more sense for that database to be a NoSQL database. Why? Because:

  1. A NoSQL database is simpler to read, as each entry is normally self-contained, with no links outside it. This should make it quicker at reading than a SQL database.
  2. NoSQL databases are normally easier to scale out, that is, have multiple databases containing the “same” data. This is because a write to a NoSQL database is dealing in writing one complete entry. (This is a big topic, and I’m not going to discuss this – see this Stack Overflow answer for part of the reason).
  3. I think a CQRS + NoSQL read-side database does a better job of holding “cached values” than can be done with a SQL/relational database does – I cover this later in the section about the complexity and robustness of the software.

As you will see in the rest of this article these features make my SQL+NoSQL CQRS architecture perform very well, and still have a high level of data integrity.

Why I chose RavenDB as my NoSQL database

I’m not an expert on NoSQL databases, but when my first article went out I got a tweet from Oren Eini (Twitter @ayende), who is the main force behind the RavenDB NoSQL database. He quite rightly felt that his RavenDB database would be faster than the SQL equivalent at sorting/searching this data.

His tweet gave me an opportunity to get some help with my lack of NoSQL knowledge, so I sent him the tweet below, and the choice of NoSQL database was made. And it’s turned out to be a very good choice indeed, as you will see.

RavenDB has a .NET library that supports LINQ queries, which meant I could use the LINQ queries I had developed for EF Core with almost no changes. RavenDB also allowed me to create indexes on certain parameters, and would then automatically generate indexes for the various combinations. Indexes are vital to fast performance and most NoSQL systems have them.
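To give a feel for why the LINQ carry-over mattered, here is a hedged sketch of the kind of read-side query I mean, written against the current RavenDB client API (BookListNoSql and its property names are my illustrative stand-ins for the pre-computed book list entry):

```csharp
using System.Linq;
using Raven.Client.Documents;

public class BookListNoSql
{
    public string Id { get; set; }
    public string Title { get; set; }
    public double ReviewsAverageVotes { get; set; }  //pre-computed at write time
}

public static class BookQueries
{
    public static void ShowTop100(IDocumentStore store)
    {
        using (var session = store.OpenSession())
        {
            //Same shape as the EF Core LINQ query, but no SQL AVG() at read
            //time: the average was calculated once, when the entry was written
            var top100 = session.Query<BookListNoSql>()
                .OrderByDescending(b => b.ReviewsAverageVotes)
                .Take(100)
                .ToList();
        }
    }
}
```

Because the average votes value is stored on the entry itself, the sort runs against an index rather than recalculating across millions of review rows, which is where the performance gain in the chart comes from.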

I want to thank Oren Eini for his help and support, and Jonathan Matheus of RavenHQ, who has kindly provided me with free hosting of the RavenDB database that I use in my live site.

NOTE: See new article “Building a robust CQRS database with EF Core and Cosmos DB” for new version using EF Core’s Cosmos DB database provider.

How I added the CQRS architecture to my example book selling site

I only wanted to move the book list view part to the NoSQL read-side database, and not the order processing part, because it’s only the book list view that has a performance issue. It turns out that while adding CQRS does require a fair amount of work, it is very simple to apply the CQRS architecture to only part of my application. The figure below shows the design of my changed example book selling site application, with the book list implemented as a two-database CQRS part.

My example book selling site is an ASP.NET Core application with EF Core used for the database access code. It is a layered application, as shown in the diagram above. The book selling site isn’t that complicated but I still needed to be careful as I modified the application over to a CQRS read-side database. I therefore wanted a design that minimized the changes, and isolated the new parts.

I came up with a design that keeps all the NoSQL/RavenDB parts separate. In my final design, the EF Core doesn’t know, or care, what database is being used for the read-side part of the CQRS system. This made the update simpler, plus offered the possibility of changing the NoSQL database I used – I really like RavenDB, with its support of LINQ, but EF Core version 2.1.0 will support Azure’s NoSQL database, Cosmos, so that might be an interesting alternative.

Keeping as much of the new database code as possible in the NoSQLData layer, and using interfaces, kept the impact of my changes to a minimum. Figure 14.3 shows how I hid the NoSQL code behind interfaces to keep it isolated. I used dependency injection to provide both the DataLayer and the ServiceLayer with methods that allowed access to the database.
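To give an idea of the shape of that isolation, here is a sketch of the kind of interface the other layers could depend on. This is illustrative only: the interface name, method names, and the BookListNoSql projection class are hypothetical, not the actual names in my application.

```csharp
// Hypothetical sketch: the rest of the application sees only this
// interface; the RavenDB-specific implementation lives in the
// NoSQLData layer and is supplied via dependency injection, so no
// other layer references any RavenDB types.
public interface INoSqlUpdater
{
    void CreateNewBook(BookListNoSql book);  // project a new Book
    void UpdateBook(BookListNoSql book);     // re-project after a change
    void DeleteBook(int bookId);             // remove the projection
}
```

Swapping RavenDB for another NoSQL store would then only mean writing a new implementation of this interface, which is why the design keeps that option open.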

Here is a typical entry in the NoSQL database. As you can see I have pre-calculated the number of Reviews on this book, ReviewsCount, and the average of the six review votes, ReviewsAverageVotes. These two properties, plus the Id (not shown – RavenDB keeps that separate), the PublishedOn, and the ActualPrice all have indexes.

    {
        "Title": "Adobe AIR in Action",
        "PublishedOn": "2008-07-01T00:00:00.0000000",
        "Price": 379,
        "ActualPrice": 189.5,
        "PromotionPromotionalText": "today only - 50% off! ",
        "AuthorsOrdered": "Joey Lott, Kathryn Rotondo, Sam Ahn, Ashley Atkins",
        "ReviewsCount": 6,
        "ReviewsAverageVotes": 3.5
    }

Is changing my application to a SQL+NoSQL CQRS design worth the effort?

Implementing this two-database CQRS architecture wasn’t simple and took over a week to develop, which is long for me. Admittedly the main part was learning how to handle the RavenDB database approach, but there were also some complex EF Core parts to write too. So, was it worth the effort? To answer that question, I’m going to look at three distinct aspects.

  1. Was book list display, with its various sorts and filters, quicker?
  2. What was the impact on database write performance?
  3. What effect did it have on software, both in effort, complexity, and robustness?

1. Was book list display quicker?

Well, you have already seen the graph, but I’ll show it again. It beats the “best-SQL” version on sort by votes hands-down.

Now, this isn’t the whole story, as in the first article I covered four levels of performance tuning, and there was a faster version after the “best-SQL” version. In that version, referred to as Part 3 (SQL cached values), I created “cached values” very like the values you see in the NoSQL database. In fact, it was the similarity of the Part 3 (SQL cached values) book query to getting JSON data that made me think of using a NoSQL database with CQRS.

If we compare the Part 3 (SQL cached-values) solution with the SQL+NoSQL CQRS version, then the difference is very much smaller, as shown by the graph below.

Note: You can access a live site running the Part 3 (SQL cached values) solution. This site has 100,000 books in it.

This second graph shows that it’s the cached values, with their indexes, that make the difference. RavenDB still has the edge over the Part 3 (SQL cached-values) version, but there is another aspect that I didn’t measure here, that is, scalability.

Scalability is about how many simultaneous users an application can handle while still providing a ‘good’ performance. Because NoSQL databases such as RavenDB deal with a single entry containing all the information, a read or write is simpler than the SQL case. In general, this makes NoSQL databases easier to duplicate, that is, to have multiple databases all containing the same data. The effect of having multiple databases on scalability can be significant. Not only can you spread database access across multiple databases, but you can locate databases geographically around the world to provide shorter access times. This final point clinches the performance issue for me – the SQL+NoSQL CQRS solution wins on performance.

SQL+NoSQL CQRS read performance summary: Excellent

2. What was the impact on database write performance?

I said earlier that a SQL+NoSQL CQRS architecture is going to be slower on writes, as it must write to two databases. I have measured this in my solution and there is an effect, but it’s pretty small.

Taking a common case, which is a user adding a review to a book, the results are as follows:

| Solution type | Total time | Notes |
|---|---|---|
| Part 3 (SQL cached-values) | 13 ms | Simple addition of the Review entity to the database and a recalculation of the cached values. |
| SQL+NoSQL CQRS | 35 ms | The extra time is mainly the write to the NoSQL database. I measured the RavenDB update taking 25 ms, which is quite long compared to a SQL write. |

For a function that takes less than 50 ms to return to the user, it’s not worth performance tuning. But in applications with more complex updates this time might get too long. There are plenty of ways to handle this – one way is to pass the update to a background task to be executed, so that the application returns immediately to the user. The only downside of that approach is the user may be shown out-of-date data, which could be confusing – these are the trade-offs you must think through.
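The background-task idea can be sketched like this. This is illustrative code, not from my application: the class and method names are all my own invention, and the actual re-projection of the book into the NoSQL database is elided.

```csharp
using System.Collections.Concurrent;

// Illustrative sketch only: after the SQL (write-side) update commits,
// the book's id is queued; a background worker later re-projects that
// book into the NoSQL read-side database. All names here are invented.
public class ReadSideUpdateQueue
{
    private readonly ConcurrentQueue<int> _pendingBookIds
        = new ConcurrentQueue<int>();

    // Called by the web request right after SaveChanges succeeds;
    // the request can then return to the user immediately.
    public void QueueBookUpdate(int bookId)
        => _pendingBookIds.Enqueue(bookId);

    // Called by the background task, which re-reads the book from SQL
    // and writes the projection to the NoSQL database (not shown).
    public bool TryGetNextBookToProject(out int bookId)
        => _pendingBookIds.TryDequeue(out bookId);
}
```

The trade-off mentioned above shows up directly here: between the enqueue and the background projection, the read side serves slightly stale data.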

SQL+NoSQL CQRS write performance summary: Good, and could be made better if needed.

3. What effect did it have on software, both in effort, complexity and robustness?

The table below compares the development effort, complexity and robustness of the two designs: The Part 3 (SQL cached-values) solution and the SQL+NoSQL CQRS.

| Solution type | Effort | Complexity | Robustness |
|---|---|---|---|
| Part 3 (SQL cached-values) | ~3 days | Complex concurrency | Good |
| SQL+NoSQL CQRS | ~8 days | Complex structure | Very good (see below) |

The effort was bigger for the SQL+NoSQL CQRS solution, but still a very sensible time. Remember too, that is the time to performance tune an existing, SQL-only, EF Core application – 8 days to improve performance isn’t that bad in my opinion.

On added complexity, what the two solutions added to the original application was about the same – the added complexity was just in different areas in the two solutions.

The big plus for me was the robustness of the application. Keeping cached values correct is really difficult, and I think the SQL+NoSQL CQRS solution did a much better job of that than the Part 3 (SQL cached-values) solution did. This is because the SQL+NoSQL CQRS architecture “designs out” any concurrency issues. The diagram compares the Part 3 (SQL cached-values) handling of concurrency issues that could make the cached values out of date with the same problem in the SQL+NoSQL CQRS solution.

Unlike the Part 3 (SQL cached values) solution, where I had to consider each cached value separately, and devise a different solution for each of them, the CQRS design handles all potential problems in one go – it effectively designs them out.

NOTE: This implementation does have a flaw if the NoSQL update fails or comes out of step. I fix this in my latest version in the article “Building a robust CQRS database with EF Core and Cosmos DB” by applying the NoSQL update inside a SQL transaction. That approach could be used with the RavenDB version too.

SQL+NoSQL CQRS software summary: it took a bit more development time, but it’s worth the effort.


I have altered my example book selling site from a SQL-only solution to one that uses a CQRS (Command Query Responsibility Segregation) architecture with a separate read-side and write-side database.  The write-side database I kept as a SQL/relational database because of its ability to track relationships with a high level of data integrity. On the read-side database I used a NoSQL database, RavenDB. I called this my SQL+NoSQL CQRS solution.

The SQL+NoSQL CQRS solution won on read performance by a large factor against the “best-SQL” database. Even against a SQL solution that pre-calculated cached values, the SQL+NoSQL CQRS solution was slightly faster, and it has a much better chance to scale horizontally (i.e. multiple duplicate databases) than its SQL counterpart. This means the SQL+NoSQL CQRS won on all fronts for read-side queries.

The SQL+NoSQL CQRS solution did introduce a drop in performance for writes to the database, as two databases had to be updated. In my application that was small enough to ignore, but if it did become a problem there are ways around that.

The SQL+NoSQL CQRS solution took the most development time of any of the performance tuning I did – ~8 days to implement, compared to ~3 days for the Part 3 (SQL cached values) solution. However, I think the performance gain was impressive and the overall solution is more robust.

I hope you have found this article and the previous article interesting. Please do add comments and suggestions. I’m always learning.

Further reading

Happy coding!

Entity Framework Core performance tuning – a worked example

Last Updated: December 24, 2019 | Created: September 22, 2017

This is the first in a series of articles on performance tuning EF Core. In this article I take an example book selling site that I have built for my book and see how fast I can get it to perform. I have got some really excellent performance out of Entity Framework Core (EF Core) and I wanted to share it with you all.

  1. Entity Framework Core performance tuning – a worked example (this article).
  2. EF Core – Combining SQL and NoSQL for better performance.
  3. NEW IN 2019 – “Building a robust CQRS database with EF Core and Cosmos DB

Note: This article is from chapter 13 of my book, which gives more information on the implementation (the chapter is 33 pages long). Manning Publications have offered a coupon to get 37% off my book, “Entity Framework Core in Action” – use fccefcoresmith.

Executive summary for those in a hurry

I built an example book selling web site using EF Core version 2.0.0 and filled it with 100,000 books and ½ million reviews. I then went through four stages of performance tuning. The figure below shows you the results through these stages for two different, but demanding, areas of performance:

  • Displaying 100 books out of 100,000 possible books.
  • Displaying 100 books out of 100,000 possible books, sorted by average votes
    (timings are how long the web request took to finish, as measured by Chrome on my development PC).

The four stages on the graph are:

  1. Part 1: Working with EF Core built queries
    1. I get my LINQ code into a form that EF Core can turn into good SQL
    2. I performance tune the existing EF Core query by adding a DbFunction.
  2. Part 2: I build a SQL query by copying EF Core’s SQL and ‘tweaked’ one bit to get some extra performance
  3. Part 3: I added cached values to the database that held hard-to-calculate values, such as the average votes for a book.

You can try the final, Part 3, “tuned up” example book selling site on the live site.

Note: The site isn’t running on some massively powerful cloud system, but on a shared hosting site that costs £3.50/month (see Web Wiz, who are a great hosting site for .NET. They have always been early supporters of new .NET features).

Setting the scene – my example application

Right from the second chapter of my book I introduce an example book selling site, and as the book progresses I add more features to the application. Admittedly the site only has a few features, but I purposely included some sorting and filtering features that can be a challenge when the site has a lot of books to show. The diagram below shows you the features of the site.

What makes the book list a challenge?

I tried to create an example application which was at least credible, and included some complicated queries. The diagram below shows you the listing of a book, and some information on how easy or complex it is to get that data from the database.

On top of that, the sorting, filtering and paging commands add another level of complexity. I came up with four sort and filter tests that got progressively more difficult, each done with a page size of 10 and 100 to see how much showing more books affected performance. Here are the four tests.

  1. Easy – just show the page, with default everything.
  2. Moderate – sort the books by their publication date.
  3. Hard – sort by average votes, which is calculated from all the reviews linked to a book.
  4. Double hard – filter by average votes and then sort by price, both of which are calculated values.

Note: I did all my testing on my development PC using the ASP.NET Core example book selling site running in debug mode. The timings are from Chrome, and are the total time it takes for the page to return (including downloading the content). For comparison, accessing the About page, which has no database accesses in it, typically takes 11 ms to load, so the database accesses are always going to be slower than that.

My four steps in performance tuning

I went through four iterations of tuning of the application, and I will cover each one in this article. The stages are:

  1. Tuning up my EF Core query
    1. Making sure EF Core could translate all parts of my LINQ query to SQL
    2. Tweaking my query by adding a user defined function
  2. Going SQL – moving the query to Dapper
  3. Modifying the database – adding cached values (this is what is running on the live site)

Part 1a – Getting my query to translate into SQL

I started this performance tuning process by looking at what EF Core can do with well written code. Throughout my book, I have recommended the LINQ Select query, as this creates the most efficient database access. The Select query works well for the book display as it needs to get data from other tables, such as the Reviews, and it doesn’t need some of the columns in the Books table, such as the book’s Description column, which could be very long. Here is my Select query object.

public static IQueryable<BookListDto>
    MapBookToDto(this IQueryable<Book> books)
{
    return books.Select(p => new BookListDto
    {
        BookId = p.BookId,
        Title = p.Title,
        Price = p.Price,
        PublishedOn = p.PublishedOn,
        ActualPrice = p.Promotion == null
                ? p.Price
                : p.Promotion.NewPrice,
        PromotionPromotionalText =
                p.Promotion == null
                  ? null
                  : p.Promotion.PromotionalText,
        AuthorNamesOrdered = p.AuthorsLink
                .OrderBy(q => q.Order)
                .Select(q => q.Author.Name),
        ReviewsCount = p.Reviews.Count,
        ReviewsAverageVotes =
            p.Reviews.Select(y => (double?)y.NumStars).Average()
    });
}

Note the calculation at the end of the MapBookToDto code, which calculates the average votes from all the reviews linked to this book. It took me a bit of time to get the LINQ command in the right format – I finally solved it by asking a question on the EF Core issues page. But the result is that the average is done in the database using the SQL command AVG. This was critical in getting the performance I needed, not only in calculating the value to show in each book’s info, but also in making the “sort by average votes” as quick as it was.

Here are the results for 100,000 books (same data as on the site).

| Test | 10 books | 100 books | Comment |
|---|---|---|---|
| All books | 45 ms | 230 ms | Sorts on primary key |
| Sort by publication dates | 45 ms | 230 ms | Sorts on PublishedOn, with index |
| Sort by votes | 500 ms | 650 ms | Must average ½ million reviews! |
| Sort by price, filter by 4+ votes | 450 ms | 600 ms | Fewer books, so quicker |

The “sort on votes” performance, at 500 ms, sounds slow, but it was a lot quicker than I expected. EF Core produced the ideal SQL, using the SQL AVG command, but it had to be called 100,000 times, and had a total of over ½ million Reviews to process. For some applications, this level of performance would be acceptable, but for this example I deem it too slow, so I will be looking again at the “sort on votes” case in Part 2 and Part 3.

The one item that did seem very slow was the display of 100 books. I analysed that and found that, because the authors’ names came back as a collection, EF Core ran a separate query for each book – so, instead of 2 database accesses it had 12 database accesses for 10 books, and 102 database accesses for the 100 books.

I know that every separate database access costs, so I wanted to remove those extra database accesses. And because I knew something that EF Core didn’t know, I could fix it, as you will see in the next part.

Part 1b – tweak the query with the DbFunction feature

So, in my first performance tweak, I wanted to see if I could keep all the good SQL that EF Core had produced, but inject a bit of my own SQL to improve the performance of the AuthorNamesOrdered part. What I knew was that I only wanted the names so that I could concatenate them into a comma-delimited string, and I could do that in SQL very easily.

EF Core 2.0.0, released in August 2017, has a great new feature contributed by Paul Middleton. This allows you to register a SQL user defined function (UDF) with EF Core and then use it in a query. This is a really useful tool, and I expect to be using DbFunctions a lot in the future.

So, with help from Stack Overflow I produced a UDF, as shown below.

CREATE FUNCTION AuthorsStringUdf (@bookId int)
RETURNS NVARCHAR(4000)
AS
BEGIN
-- Thanks to
DECLARE @Names NVARCHAR(4000)
SELECT @Names = COALESCE(@Names + ', ', '') + a.Name
FROM Authors AS a, Books AS b, BookAuthor AS ba 
WHERE ba.BookId = @bookId
      AND ba.AuthorId = a.AuthorId 
      AND ba.BookId = b.BookId
ORDER BY ba.[Order]
RETURN @Names
END

I could then replace the extraction of a list of author’s names with a call to my UDF, as shown below.

public static IQueryable<BookListDto>
    MapBookToDto(this IQueryable<Book> books)
{
    return books.Select(p => new BookListDto
    {
        //… other code removed for clarity
        AuthorsOrdered = UdfDefinitions.AuthorsStringUdf(p.BookId),
        //… etc.
    });
}

The UDF returns a single string, so there is no need for extra database accesses. After this change, there were only two database accesses, a count for the paging and then the real query, no matter how many books are in a page.

Note: I have not covered how to register the UDF with EF Core, nor the best way to add the UDF to the EF Core created database. There just isn’t room in this article. There is some information on this in the GitHub documentation. I have more information on DbFunctions in chapter 8 of my book, and on how to add UDFs and other SQL code to an EF Core database in chapter 11.
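For completeness, here is a hedged sketch of what the registration can look like in EF Core 2.x. The HasDbFunction call is the real EF Core API; the class names and the stub body are placeholders, so check the GitHub documentation mentioned above for the definitive steps.

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public static class UdfDefinitions
{
    // The body is never executed: EF Core translates calls to this
    // method inside a LINQ query into calls to the SQL UDF
    public static string AuthorsStringUdf(int bookId)
        => throw new NotImplementedException();
}

public class EfCoreContext : DbContext
{
    //… DbSets removed for clarity

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Register the .NET stub so EF Core maps it to the SQL UDF
        modelBuilder.HasDbFunction(
            typeof(UdfDefinitions)
                .GetMethod(nameof(UdfDefinitions.AuthorsStringUdf)));
    }
}
```

Note the UDF itself still has to exist in the database; registering the stub only tells EF Core how to translate the call.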

Before I show you the improved performance figures, let me first show you the new SQL that EF Core produces. EF Core now produces very readable SQL, almost how a human would lay it out (I made a couple of minor changes to the layout to make the diagram work, but it is mostly as EF Core created it).

The only change in the SQL from the Part 1a version is the call to the UDF, but the effect on performance, especially on the 100-book display, is great. See the figures below.

Here are the results for using the 100,000 book test data, with the ‘Improvement’ columns showing how much faster the Part 1b implementation is compared to Part 1a’s performance.

| Test | 10 books | Improvement | 100 books | Improvement |
|---|---|---|---|---|
| All books | 30 ms | 150% | 80 ms | 250% |
| Sort by publication dates | 30 ms | 150% | 80 ms | 250% |
| Sort by votes | 480 ms | minor | 530 ms | minor |
| Sort by price, filter by 4+ votes | 420 ms | minor | 470 ms | minor |

Part 2 – converting the query to hand-written SQL

When I studied the SQL from Part 1b, I noticed that EF Core repeated any calculation in the SQL ORDER BY section, rather than referring to the SELECT’s alias name for the calculated value. I did some tests, and they showed that using the SELECT’s alias name, ReviewsAverageVotes, which holds the calculated average vote, in the ORDER BY section would improve performance.

Based on this I decided to replace the EF Core query with a fully hand-coded SQL command. That meant I had to write the SQL code, but because EF Core’s SQL is so good (and readable), I just copied it (!), and replaced the one area, the ORDER BY part, with some tweaked SQL, as shown below – see the ReviewsAverageVotes value used in the ORDER BY at the end of the SQL.

SELECT TOP(@pageSize) [b].[BookId], [b].[Title], 
    [b].[Price], [b].[PublishedOn], 
CASE
    WHEN [p.Promotion].[PriceOfferId] IS NULL
    THEN [b].[Price] ELSE [p.Promotion].[NewPrice]
END AS [ActualPrice], 
[p.Promotion].[PromotionalText] AS [PromotionPromotionalText], 
[dbo].AuthorsStringUdf([b].[BookId]) AS [AuthorsOrdered], (
    SELECT COUNT(*)
    FROM [Review] AS [r]
    WHERE [b].[BookId] = [r].[BookId]
) AS [ReviewsCount], (
    SELECT AVG(CAST([y].[NumStars] AS float))
    FROM [Review] AS [y]
    WHERE [b].[BookId] = [y].[BookId]
) AS [ReviewsAverageVotes]
FROM [Books] AS [b]
LEFT JOIN [PriceOffers] AS [p.Promotion] 
    ON [b].[BookId] = [p.Promotion].[BookId]
WHERE ([b].[SoftDeleted] = 0) 
ORDER BY [ReviewsAverageVotes] DESC

The next problem was how I would execute the SQL. I could have used ADO.NET, but instead I used the well-respected mini-OR/M called Dapper. Dapper has a feature that maps the data returned from the SQL to a .NET class by matching the class property names to the column names returned. EF Core also has a feature that can provide the correct type of connection for Dapper to access the database, which makes using Dapper pretty easy.

The unit test code below shows how to use Dapper with an EF Core’s application DbContext – the Dapper part is the line “Query<Book>(“SELECT * FROM Books”)” in the code below.

public void DapperReadBooksEfCoreSqlServer()
{
    //… I removed the options and database setup 
    //so that it’s easier to see the Dapper part 
    using (var context = new EfCoreContext(options))
    {
        var books = context.Database 
            .GetDbConnection()    //EF Core supplies the connection
            .Query<Book>("SELECT * FROM Books"); 
    }
}


The book list query I implemented in chapter 2 was split up into four separate query objects: MapBookToDto, OrderBooksBy, FilterBooksBy, and a generic paging method. That meant each part was easy to understand and could be separately tested. Duplicating this query in Dapper requires me to sacrifice the elegance of query objects and move to a design that combines snippets of SQL to create the query. But that’s what I need to do to squeeze out the best performance.
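For context, the query-object composition I was giving up looks roughly like this. The MapBookToDto, OrderBooksBy and FilterBooksBy names come from the text; the paging method’s name and signature are my assumption for illustration.

```csharp
// Rough sketch of how the four query objects compose into a single
// IQueryable, which EF Core then turns into one SQL query.
// Page(...) is an assumed name for the generic paging method.
var dtos = context.Books
    .MapBookToDto()              // the Select shown in Part 1a
    .OrderBooksBy(sortOption)    // adds the ORDER BY
    .FilterBooksBy(filterOption) // adds the WHERE
    .Page(pageNum, pageSize)     // Skip/Take for paging
    .ToList();                   // executes the composed query
```

Each stage returns an IQueryable, so nothing touches the database until ToList, and each stage can be unit tested on its own – which is exactly what is lost when moving to concatenated SQL snippets.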

The results of swapping over to this improved SQL is shown in the table below, with the big change in bold (Sort on Average votes).

| Test scenario | Part 1b – EF Core | Part 2 – SQL | Comment |
|---|---|---|---|
| Show books | 10 = 30 ms, 100 = 80 ms | 10 = 30 ms, 100 = 85 ms | No real change – within normal variations |
| Sort on PublishedOn | 10 = 30 ms, 100 = 80 ms | 10 = 30 ms, 100 = 90 ms | No real change – within normal variations |
| **Sort by average votes** | 10 = 500 ms, 100 = 530 ms | 10 = 325 ms, 100 = 390 ms | The improved SQL is about 40% faster than EF Core |
| Filter by average votes (4 or more), sort by price | 10 = 440 ms, 100 = 490 ms | 10 = 455 ms, 100 = 520 ms | No real change – within normal variations |

As you can see, the only part that improved was the “sort by average votes” situation. What I found interesting is that none of the other, more mundane queries had improved. I kind of expected them to, because Dapper’s site states it is ten times faster or more than EF (most likely EF6.x), which didn’t show up in these results.

I therefore tried a simple “read one book row” test, and yes, in that case Dapper is about seven times faster than EF Core. That makes sense, as EF Core has several things to do on loading, like relational fixup, while Dapper just executes the SQL. But it turns out that when it comes to large or complex queries, where the database execution is the limiting factor, the performance is down to the quality of the SQL produced, and not the library you use. The fact is, small accesses aren’t normally the ones that need performance tuning, as they are quick anyway; using Dapper rather than EF Core only makes a difference if the SQL is different, which in the “sort by average votes” case it was.

But the question is, “was all that effort to swap to SQL worth it?” Before you decide, you should read Part 3, where I use another approach that takes more work but improves performance much more than my hand-tuned SQL does.

Part 3 – Modifying the database to increase performance

In the final stage I decided to try modifying the database, adding cached values for the calculations that take a lot of time. These are:

  1. The Price, which varies depending on whether there is a PriceOffer promotion linked to the book.
  2. The average votes and number of reviews for this book
  3. The AuthorsString, containing the comma-delimited list of authors of a book.

Caching is a great tool for improving performance, but ensuring the cached value is up to date needs very careful handling. This is especially true in a database, where there may be a simultaneous update of the same row, which could make the cached value incorrect.

In chapter 13 of my book I cover how I handled the three types of caching, which need different techniques. But for this article I’m going to focus on the most difficult of the three: caching the average votes and number of reviews for a book. This is difficult because I still need to keep the actual Review entities so that users can look at the Review’s comments on a book. This means I have two versions of the average votes value: one that is found by averaging the reviews dynamically, as I did in Parts 1a to 2, and the cached value held in the AverageVotes property in my modified Book entity class. Keeping these in step is a challenge.

To keep my cached values up to date I need to make some fairly large changes to how things work, as summarised below:

  1. I still need the Reviews property, which sets up the one-to-many relationship between a Book entity and its Reviews. But I must ensure that Review entities cannot be added to or removed from the Reviews property directly, that is, all adds or removes must go through my AddReview and RemoveReview methods. I also need to “lock down” the Review entity class by making its properties have private setters, and stop any code outside of the DataLayer from being able to create a Review entity instance.
  2. Because the AverageVotes and ReviewsCount properties are cached values then there is the possibility of a concurrent adding or removing of a Review, which could invalidate the cached values.

I will deal with each of these two stages separately.

Lock down access to Reviews collection property

EF Core allows navigational properties to be set up, which EF Core will use when doing certain reads and writes to the database. My initial Book entity class had a property called Reviews that contained a list of Review entities for this book instance. I would then add or remove a Review instance from the list to change which reviews were linked to the Book.

For my caching of the values related to the Reviews to be kept up to date, I need to intercept all changes to the Book’s Reviews. To do that I use a feature in EF Core called backing fields. This allows me to ‘hide’ the Reviews list, and provide an IEnumerable<Review> Reviews property, which a developer cannot add to or remove from. Here is what the updated Book entity class looks like.

public class Book
{
    private readonly List<Review> _reviews 
        = new List<Review>(); 

    //… other properties removed for clarity

    public IEnumerable<Review> Reviews => 
        _reviews.ToList();
}

Now I can add two methods, AddReview and RemoveReview, to the Book class that update its backing field _reviews and at the same time recalculate the AverageVotes and ReviewsCount cache properties.
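A minimal sketch of what those two methods can look like. This is simplified from the approach described in the text: the recalculation helper is my own naming, and the real classes have more members.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Book
{
    private readonly List<Review> _reviews = new List<Review>();

    public IEnumerable<Review> Reviews => _reviews.ToList();
    public int ReviewsCount { get; private set; }
    public double? AverageVotes { get; private set; }

    // Every change goes through these two methods, so the cached
    // values can never get out of step with the backing field
    public void AddReview(Review review)
    {
        _reviews.Add(review);
        RecalculateCachedValues();
    }

    public void RemoveReview(Review review)
    {
        _reviews.Remove(review);
        RecalculateCachedValues();
    }

    private void RecalculateCachedValues()
    {
        ReviewsCount = _reviews.Count;
        AverageVotes = _reviews.Any()
            ? _reviews.Average(r => (double)r.NumStars)
            : (double?)null;
    }
}

public class Review
{
    public int NumStars { get; private set; }
    internal Review(int numStars) { NumStars = numStars; }
}
```

The internal constructor and private setters on Review are the “lock down” described in the next paragraph: only the DataLayer can create or change a Review.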

The other back door to changing the Reviews is if the developer loaded a Review entity and altered its values, say changing the review’s vote from 1 to 5. To stop the developer from doing that, I set all the Review class property setters to private, and I make the constructor internal, so only my DataLayer can create a new instance of the Review class.

The figure below shows the backing field in the Book class and its AddReview method in blue, with the “locked down” Review class in purple.

Handling simultaneous additions of a Reviews to the same book

The other problem is multiple users adding Reviews at the same time. There is a small window of time between the loading of the current Reviews linked to a Book and the saving of the new review collection and the associated cached values. In that time window, another Review could be added/removed from the same book by another user, which would make the cached values out of step with the database.

The solution to this problem is to use EF Core’s concurrency conflict feature. What I do is add the attribute [ConcurrencyCheck] to the AverageVotes and ReviewsCount properties. This means that EF Core will check they haven’t changed since the Book instance was loaded, which is the window where other updates could make my calculations invalid. If they have been changed, then I can write some code that recalculates the cached values using the current state of the database. I’m not going to show the code for that because the article is already very long. You can find my concurrency code here.
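The handling follows EF Core’s standard concurrency-conflict pattern. Here is a hedged sketch of its shape only; the method name is illustrative and the actual recalculation of the cached values is elided.

```csharp
using Microsoft.EntityFrameworkCore;

// Illustrative sketch: if another user changed AverageVotes or
// ReviewsCount after we loaded the Book, SaveChanges throws a
// DbUpdateConcurrencyException and we recover from the database.
public void SaveBookHandlingConcurrency(DbContext context)
{
    try
    {
        context.SaveChanges();
    }
    catch (DbUpdateConcurrencyException ex)
    {
        foreach (var entry in ex.Entries)
        {
            // Accept the database's current values as the new
            // baseline for the concurrency check …
            entry.OriginalValues.SetValues(entry.GetDatabaseValues());
            // … then recalculate the cached values from the current
            // Review rows before saving again (not shown here)
        }
        context.SaveChanges();
    }
}
```

The key point is that the recalculation uses the current state of the database, not the stale values the request started with.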

The performance results from Part 3 – cached values

The results from all this is impressive, as you can see in the table below

| Test | 10 books | Improvement over Part 2 (SQL) | 100 books | Improvement over Part 2 (SQL) |
|---|---|---|---|---|
| All books | 30 ms | no improvement | 80 ms | no improvement |
| Sort by publication dates | 30 ms | no improvement | 80 ms | no improvement |
| Sort by votes | 30 ms | 12 times better | 80 ms | 5 times better |
| Sort by price, filter by 4+ votes | 30 ms | 12 times better | 80 ms | 5 times better |

Extra notes on performance:

  • I tried ½ million books: the 10-book time was 85 ms and the 100-book time was 150 ms. For this number of books the database server is starting to be a limiting factor.
  • In Parts 1a, 1b and 2, the sort by price filtered by 1+ votes took over a second. But this Part 3 implementation still comes in at 30 ms.
  • I took the SQL produced by EF Core in Part 3 and used Dapper to execute it, but there was no further performance improvement so it wasn’t worth doing.

Obviously, the standout improvements are the last two items, which were the ones I was targeting. You can see this code in action, with 100,000 books, on the live site.

Summary of the whole performance tuning work

Below is a repeat of the chart I had at the beginning, showing the results from Parts 1a to 3 for the 100-book display and the 100-book display sorted by average votes.

However, what this graph doesn’t convey is that the Part 3 performance improvements make all the 10-book displays, with any sort or filter, come in at around 30 ms. As the 10-book display is the default, the site is now really fast. Also, the 30 ms is the time for the ASP.NET page to display, which includes other overheads outside the database access – in actual fact, the database accesses take between 10 ms and 15 ms.

The other questions I asked myself were: how much effort did each stage take, and were there any problems or adverse effects on the application? The table below gives my answers to these questions.

| Part | Effort | Comments |
|---|---|---|
| 1a. Straight EF Core | Few days | Finding the correct format for the LINQ average wasn’t easy. I had to ask the EF Core team via EF Core’s GitHub issue page. But once I got this right, EF Core produced great SQL code. |
| 1b. +DbFunction | ½ day | Very easy. I like the DbFunction a lot, and in this case it made a significant improvement to the performance. |
| 2. Better SQL | One day | Fairly easy, as I simply copied the SQL code that EF Core produces and tweaked the bit that my experiments said would help. But only useful if you can come up with some better SQL, i.e. the speed advantage of Dapper over EF Core only helps on small, very quick database accesses, which you don’t need to performance tune anyway! |
| 3. Cached values | Three days | Definitely hard work, but also a fantastic result. I think the performance changes to the code don’t hinder future development of the application at all, and in some cases improve it. |

Overall I am very pleased with the process. EF Core produces great SQL code from the get-go, but only because I wrote my LINQ queries in a way that EF Core could translate every part into SQL. I had to persevere with the LINQ Average method to get EF Core to translate that to SQL.

Note I think plenty of developers will fall foul of this problem and think EF Core is slow – but before you blame EF Core, check your logs for QueryClientEvaluationWarning warnings, which tell you that part of your LINQ query didn’t translate to SQL and is being run in software. See the section “Understanding the limitations of Client vs. Server evaluation” in my article about EF Core’s Client vs. Server evaluation feature.
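In EF Core 2.x you can also turn these warnings into exceptions during development, so a client evaluation can’t slip through unnoticed. A minimal sketch (the DbContext name and connection string are placeholders):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class BookDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder
            .UseSqlServer("Server=.;Database=BookApp;Trusted_Connection=True;")
            // Throw instead of just logging whenever part of a LINQ query
            // is evaluated in memory rather than translated to SQL
            .ConfigureWarnings(warnings =>
                warnings.Throw(RelationalEventId.QueryClientEvaluationWarning));
}
```

Any query that would have been silently client-evaluated now fails fast with an InvalidOperationException, pointing you at the part of the LINQ expression that needs rework.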

But when I had to go beyond straight EF Core I found there was plenty I could do. And I didn’t go as far as I wanted to due to time and limits on the chapter length – my next step was to change the application’s architecture (I give a few ideas at the end of chapter 13).


I took a query that was designed around a fictitious, but believable, example, which wasn’t a walkover to implement. I got the best out of EF Core, and I think that performance was quite creditable. I then pushed through two more stages, ending with a complex but eminently implementable solution where the slow parts of the query were between eight and fifteen times faster than where I started. But more importantly, all the queries for a 10-book display come in at 30 ms.

You most likely selected EF Core for your database access library because it allowed you to develop your application rapidly (good choice!). That means you should have a working application quickly, and you can then performance tune the parts that need it.

When you start performance tuning I would recommend persevering with conventional EF Core (see my note about QueryClientEvaluationWarning). But at some point you might decide that the standard EF Core approach won’t produce the performance you need. That’s when this article could be useful to you, so bookmark this URL for that day (or buy my book 🙂 ).

If you are interested in .NET in general, I recommend the ASP.NET weekly newsletter, curated by Jerrie Pelser. This contains a list of articles on ASP.NET, Entity Framework and .NET in general (mostly .NET Core versions) – sign up at