EF Core – Combining SQL and NoSQL databases for better performance

This is the second article in my series on Entity Framework Core (EF Core) performance tuning. In this article I take that performance even further than part 1 did, by changing the application to a two-database CQRS (Command Query Responsibility Segregation) architecture, where one database is SQL and the other is NoSQL (I explain what that architecture looks like later in the article).

  1. Entity Framework Core performance tuning – a worked example.
  2. EF Core – Combining SQL and NoSQL for better performance (this article).

Note: This article is from chapter 14 (which will be available from the end of November 2017) of the book “Entity Framework Core in Action” that I am writing for Manning Publications. Chapter 14 contains more detail on the EF Core parts of the application mentioned in this article. The book is on early access, which means you can buy the book now and get the chapters as I write them – plus the finished book when it is complete.

NOTE: Manning Publications have offered a coupon to get 37% off my book, “Entity Framework Core in Action” – use fccefcoresmith

Executive summary for those in a hurry

I have an example ASP.NET Core application that uses EF Core to display a set of books – called my book selling site. In this article, I look at the performance of that book selling site after converting it to a CQRS (explained below) architecture, in which I write data to a primary SQL Server database, but also build a read-side NoSQL database which the application’s queries use to show the list of books.

I run various queries to test the differences in performance, but the main one is sorting the books by their book reviews and then showing the top 100. I do this for three sizes of database:

  • 100,000 books, which has ½ million book reviews.
  • 250,000 books, which has 1.4 million book reviews.
  • 500,000 books, which has 2.7 million book reviews.

The “sort by average review votes” query is particularly difficult for a SQL-only database, as it must dynamically calculate the average votes value of all the book reviews in the database – and as you can see my test data has a lot of reviews!

The figure below shows the difference in performance of the “best-SQL” (hand-tuned SQL run by Dapper) against what I call the SQL+NoSQL CQRS version of my application, for the three sizes of databases.

You can try the SQL+NoSQL CQRS at http://cqrsravendb.efcoreinaction.com/. It has 250,000 books in its database.

Note: The site cqrsravendb.efcoreinaction.com isn’t running on some massively powerful cloud system. The NoSQL database is running courtesy of www.RavenHQ.com (thanks to them for doing that) on the lowest level of server, so you won’t get the same performance as shown in the chart above.

Setting the scene – my example application

Note: If you read the first article then you know this – skip to the next section called “Explaining the new CQRS architecture and the SQL+NoSQL databases”.

Right from the second chapter of my book I introduce an example book selling site, and as the book progresses I add more features to the application. Admittedly the site only has a few features, but I purposely included some sorting and filtering features that can be a challenge when the site has a lot of books to show. The diagram below shows you the features of the site.

What makes the book list a challenge?

I tried to create an example application which was at least credible, and included some complicated queries. The diagram below shows you the listing of a book, and some information on how easy or complex it is to get that data from the database.

I ran a series of tests checking on sorting, filtering and paging, but with this design they all had the same performance (unlike the previous article). But I did run tests with different numbers of books and related data in the database.

Note: I did all my testing on my development PC using the ASP.NET Core example book selling site running in debug mode. The timings are from Chrome, and are the total time it takes for the page to return (including downloading the content). For comparison, accessing the About page, which has no database accesses in it, typically takes 11 ms to load, so the database accesses are always going to be slower than that.

Explaining the new CQRS architecture and the SQL+NoSQL databases

This article is about me upgrading my example book selling site to what is known as a CQRS (Command Query Responsibility Segregation) architecture using a polyglot database structure. Those terms might not mean much if you haven’t heard them before, so let me give you a diagram and then explain them in a bit more detail.

The CQRS architecture acknowledges that the read-side of an application is different from the write-side. Reads are often complicated, drawing in data from multiple places, while the write side is often much simpler. Separating the code for each part can help you focus on the specific features of each part.

A “polyglot database structure” is one which uses a combination of different storage types, for instance, relational databases, NoSQL databases, flat files, and so on. The idea is that each database type has its strengths and weaknesses, and by using two or more different database types you can obtain a better overall system.

I think a polyglot CQRS architecture, with SQL for the write-side database and NoSQL for the read-side database, makes a lot of sense for business applications where there are more reads than writes, for instance, eCommerce applications, job sites, calendar/to-do applications, and so on.

The SQL write-side database makes sense because business applications often use relational data. Think about a real book selling site: it would have a LOT of complex, linked data to handle all the parts of the system, such as suppliers, inventory, pricing, orders, payment, delivery, tracking, audits and so on. I think a relational (e.g. SQL) database, with its high level of data integrity for relational data, is the right choice for many business problems.

But those relationships, and some other aspects of a database’s design, can make a SQL database slow at retrieving data. My “sort by average votes” query is particularly hard on a SQL database because of how many book reviews it needs to look at to sort by average votes. But I have seen equally complex queries in other systems I have built that don’t scale well either.

One answer is to create a read-side database that contains the exact data that the user wants to see, or sort/filter on. This could be another SQL database (I did that in Part 3 of the previous article), but it makes a lot more sense for that database to be a NoSQL database. Why? Because:

  1. A NoSQL database is simpler to read, as each entry is normally self-contained, with no links outside it. This should make it quicker at reading than a SQL database.
  2. NoSQL databases are normally easier to scale out, that is, to have multiple databases containing the “same” data. This is because a write to a NoSQL database writes one complete entry. (This is a big topic, and I’m not going to discuss it here – see this Stack Overflow answer for part of the reason.)
  3. I think a CQRS + NoSQL read-side database does a better job of holding “cached values” than a SQL/relational database can – I cover this later in the section about the complexity and robustness of the software.

As you will see in the rest of this article these features make my SQL+NoSQL CQRS architecture perform very well, and still have a high level of data integrity.

Why I chose RavenDB as my NoSQL database

I’m not an expert on NoSQL databases, but when my first article went out I got a tweet from Oren Eini (Twitter @ayende), who is the main force behind the RavenDB NoSQL database. He quite rightly felt that his RavenDB database would be faster than the SQL equivalent at sorting/searching this data.

His tweet gave me an opportunity to get some help with my lack of NoSQL knowledge, so I sent him the tweet below, and the choice of NoSQL database was made. And it’s turned out to be a very good choice indeed, as you will see.

RavenDB has a .NET library that supports LINQ queries, which meant I could use the LINQ queries I had developed for EF Core with almost no changes. RavenDB also allowed me to create indexes on certain parameters, and would then automatically generate indexes for the various combinations. Indexes are vital to fast performance and most NoSQL systems have them.
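
To give a feel for what that looks like, here is a minimal sketch of a read-side query written with the RavenDB .NET client’s LINQ support. The BookListNoSql class name and the store variable are my own illustrative assumptions (the class mirrors the JSON entry shown later in this article), and the exact client namespaces depend on which RavenDB client version you use.

//A sketch of querying the read-side RavenDB database with LINQ
//(assumes an IDocumentStore called store has already been configured)
using (var session = store.OpenSession())
{
    var top100 = session.Query<BookListNoSql>()
        .OrderByDescending(b => b.ReviewsAverageVotes) //served by a RavenDB index
        .Take(100)                                     //the “top 100” books
        .ToList();
}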

I want to thank Oren Eini (see https://ayende.com/blog/) for his help and support, and Jonathan Matheus of RavenHQ, who has kindly provided me with free hosting of the RavenDB database that I use in my live site – http://cqrsravendb.efcoreinaction.com/.

How I added the CQRS architecture to my example book selling site

I only wanted to move the book list view part to the NoSQL read-side database, and not the order processing part, because it’s only the book list view that has a performance issue. It turns out that while adding CQRS does require a fair amount of work, it is quite simple to apply the CQRS architecture to only part of my application. The figure below shows the design of my changed example book selling site application, with the book list implemented as a two-database CQRS part.

My example book selling site is an ASP.NET Core application with EF Core used for the database access code. It is a layered application, as shown in the diagram above. The book selling site isn’t that complicated but I still needed to be careful as I modified the application over to a CQRS read-side database. I therefore wanted a design that minimized the changes, and isolated the new parts.

I came up with a design that keeps all the NoSQL/RavenDB parts separate. In my final design, the EF Core code doesn’t know, or care, what database is being used for the read-side part of the CQRS system. This made the update simpler, plus offered the possibility of changing the NoSQL database I used – I really like RavenDB, with its support of LINQ, but EF Core version 2.1.0 will support Azure’s NoSQL database, Cosmos, so that might be an interesting alternative.

Keeping as much of the new database code as possible in the NoSQLData layer, and using interfaces, kept the impact of my changes to a minimum. The figure below shows how I hid the NoSQL code behind interfaces to keep it isolated. I used dependency injection to provide both the DataLayer and the ServiceLayer with methods that allowed access to the database.
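
As an illustration of that isolation, here is a minimal sketch of the kind of interface I mean. The interface, class and method names here are hypothetical, not the actual code from the book; the point is that the rest of the application only sees the interface, while the RavenDB-specific implementation lives in the NoSQLData layer.

//Hypothetical read-side update interface – the rest of the application sees only this
public interface INoSqlBookUpdater
{
    void CreateOrUpdateBook(BookListNoSql bookProjection);
    void DeleteBook(string noSqlBookId);
}

//At startup the RavenDB implementation is registered with ASP.NET Core’s DI container,
//e.g. services.AddScoped<INoSqlBookUpdater, RavenDbBookUpdater>();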

Here is a typical entry in the NoSQL database. As you can see I have pre-calculated the number of Reviews on this book, ReviewsCount, and the average of the six review votes, ReviewsAverageVotes. These two properties, plus the Id (not shown – RavenDB keeps that separate), the PublishedOn, and the ActualPrice all have indexes.

{
    "Title": "Adobe AIR in Action",
    "PublishedOn": "2008-07-01T00:00:00.0000000",
    "Price": 379,
    "ActualPrice": 189.5,
    "PromotionPromotionalText": "today only - 50% off! ",
    "AuthorsOrdered": "Joey Lott, Kathryn Rotondo, Sam Ahn, Ashley Atkins",
    "ReviewsCount": 6,
    "ReviewsAverageVotes": 3.5
}

Is changing my application to a SQL+NoSQL CQRS design worth the effort?

Implementing this two-database CQRS architecture wasn’t simple and took over a week to develop, which is a long time for me. Admittedly the main part was learning how to handle the RavenDB database approach, but there were also some complex EF Core parts to write too. So, was it worth the effort? To answer that question, I’m going to look at three distinct aspects.

  1. Was the book list display, with its various sorts and filters, quicker?
  2. What was the impact on database write performance?
  3. What effect did it have on the software, in terms of effort, complexity, and robustness?

1. Was the book list display quicker?

Well, you have already seen the graph, but I’ll show it again. It beats the “best-SQL” version on sort by votes hands-down.

Now, this isn’t the whole story, as in the first article I covered four levels of performance tuning, and there was a faster version after the “best-SQL” version. In that version, referred to as Part 3 (SQL cached values), I created “cached values” very much like the values you see in the NoSQL database. In fact, it was the similarity of the Part 3 (SQL cached values) book query to getting JSON data that made me think of using a NoSQL database with CQRS.

If we compare the Part 3 (SQL cached-values) solution with the SQL+NoSQL CQRS version, then the difference is very much smaller, as shown by the graph below.

Note: You can access a live site with the Part 3 (SQL cached values) at http://efcoreinaction.com/. This site has 100,000 books in it.

This second graph shows that it’s the cached values, with their indexes, that make the difference. RavenDB still has the edge over the Part 3 (SQL cached-values) version, but there is another aspect that I didn’t measure here, that is, scalability.

Scalability is about how many simultaneous users an application can handle while still providing a ‘good’ performance. Because a NoSQL database such as RavenDB deals with a single entry containing all the information, a read or write is simpler than in the SQL case. In general, this makes NoSQL databases easier to duplicate, that is, to have multiple databases all containing the same data. The effect of having multiple databases on scalability can be significant. Not only can you spread database access across multiple databases, but you can locate databases geographically around the world to provide shorter access times. This final point clinches the performance issue for me – the SQL+NoSQL CQRS solution wins on performance.

SQL+NoSQL CQRS read performance summary: Excellent

2. What was the impact on database write performance?

I said earlier that a SQL+NoSQL CQRS architecture is going to be slower on writes, as it must write to two databases. I have measured this in my solution and there is an effect, but it’s pretty small.

Taking a common case, which is a user adding a review to a book, the results are as follows:

Solution type              | Total time | Notes
Part 3 (SQL cached-values) | 13 ms      | Simple addition of the Review entity to the database and a recalculation of the cached values.
SQL+NoSQL CQRS             | 35 ms      | The extra time is mainly in writing to the NoSQL database. I measured the RavenDB update taking 25 ms, which is quite long compared to a SQL write.

For a function that takes less than 50 ms to return to the user, it’s not worth performance tuning. But in applications with more complex updates this time might get too long. There are plenty of ways to handle this – one way is to pass the update to a background task to be executed, so that the application returns immediately to the user, as sketched below. The only downside of that approach is the user may be shown out-of-date data, which could be confusing – these are the trade-offs you must think through.
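
Here is a minimal sketch of that background-task idea. It is a single-process, in-memory queue purely for illustration; a production system would more likely use a message broker or a hosted background service, and all the names here are my own, not code from the book.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

//Hypothetical in-memory queue: the web request enqueues the NoSQL update and
//returns immediately; a background worker applies the update later
public class NoSqlUpdateQueue
{
    private readonly ConcurrentQueue<Func<Task>> _work
        = new ConcurrentQueue<Func<Task>>();

    public void Enqueue(Func<Task> applyNoSqlUpdate) => _work.Enqueue(applyNoSqlUpdate);

    //Called repeatedly by a background worker, e.g. an IHostedService
    public async Task RunPendingUpdatesAsync()
    {
        while (_work.TryDequeue(out var applyUpdate))
            await applyUpdate(); //the user may see slightly stale data until this runs
    }
}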

SQL+NoSQL CQRS write performance summary: Good, and could be made better if needed.

3. What effect did it have on the software, in terms of effort, complexity and robustness?

The table below compares the development effort, complexity and robustness of the two designs: the Part 3 (SQL cached-values) solution and the SQL+NoSQL CQRS solution.

Solution type              | Effort  | Complexity          | Robustness
Part 3 (SQL cached-values) | ~3 days | Complex concurrency | Good
SQL+NoSQL CQRS             | ~8 days | Complex structure   | Very good (see below)

The effort was bigger for the SQL+NoSQL CQRS solution, but still a very sensible amount of time. Remember too, that is the time to performance tune an existing, SQL-only, EF Core application – 8 days to improve performance isn’t that bad in my opinion.

On added complexity, the amount the two solutions added to the original application was about the same – it was just in different areas in the two solutions.

The big plus for me was the robustness of the application. Keeping cached values correct is really difficult, and I think the SQL+NoSQL CQRS solution did a much better job of that than the Part 3 (SQL cached-values) solution did. This is because the SQL+NoSQL CQRS architecture “designs out” any concurrency issues. The diagram compares the Part 3 (SQL cached-values) handling of concurrency issues that could make the cached values out of date with the same problem in the SQL+NoSQL CQRS solution.

Unlike the Part 3 (SQL cached values) solution, where I had to consider each cached value separately, and devise a different solution for each of them, the CQRS design handles all potential problems in one go – it effectively designs them out.

NOTE One other cache-invalidating error I can envisage is if the two updates to the NoSQL database arrive in the wrong order, say due to a retry. But that problem is simpler to fix, say by adding timestamps to each update and never allowing an older update to be applied.
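
A tiny sketch of that timestamp idea (the class, property and method names here are hypothetical):

//Hypothetical guard: never let an older update overwrite a newer projection
public void ApplyIfNewer(BookProjection incoming, BookProjection stored)
{
    if (incoming.LastUpdatedUtc <= stored.LastUpdatedUtc)
        return; //a stale update that arrived late is simply ignored
    WriteToNoSqlDatabase(incoming); //hypothetical method that writes to the NoSQL database
}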

SQL+NoSQL CQRS software summary: it took a bit more development time, but it’s worth the effort.

Conclusion

I have altered my example book selling site from a SQL-only solution to one that uses a CQRS (Command Query Responsibility Segregation) architecture with a separate read-side and write-side database.  The write-side database I kept as a SQL/relational database because of its ability to track relationships with a high level of data integrity. On the read-side database I used a NoSQL database, RavenDB. I called this my SQL+NoSQL CQRS solution.

The SQL+NoSQL CQRS solution won on read performance by a large factor against the “best-SQL” database. Even against a SQL solution that pre-calculated cached values, the SQL+NoSQL CQRS solution was only slightly faster, but it has a much better chance to scale horizontally (i.e. multiple duplicate databases) than its SQL counterpart. This means the SQL+NoSQL CQRS solution won on all fronts for read-side queries.

The SQL+NoSQL CQRS solution did introduce a drop in performance for writes to the database, as two databases had to be updated. In my application that was small enough to ignore, but if it did become a problem there are ways around that.

The SQL+NoSQL CQRS solution took the most development time of any of the performance tuning I did – ~8 days to implement compared to the ~3 days the Part 3 (SQL cached values) solution took. However, I think the performance gain was impressive and the overall solution is more robust.

I hope you have found this article and the previous article interesting. Please do add comments and suggestions. I’m always learning.

Happy coding!

Entity Framework Core performance tuning – a worked example

This is the first in a series of articles on performance tuning EF Core. In this article I take an example book selling site that I have built for my book and see how fast I can get it to perform. I have got some really excellent performance out of Entity Framework Core (EF Core) and I wanted to share it with you all.

  1. Entity Framework Core performance tuning – a worked example (this article).
  2. EF Core – Combining SQL and NoSQL for better performance.

Note: This article is from chapter 13 of my book, which will be available from the end of October 2017, and it has a lot more detail (it’s 33 pages long). The book is on early access, which means you can buy the book now and get the chapters as I write them – plus the finished book when it is complete.

NOTE: Manning Publications have offered a coupon to get 37% off my book, “Entity Framework Core in Action” – use fccefcoresmith

Executive summary for those in a hurry

I built an example book selling web site using EF Core version 2.0.0 and filled it with 100,000 books and ½ million reviews. I then went through four stages of performance tuning. The figure below shows you the results through these stages for two different, but demanding, areas of performance:

  • Displaying 100 books out of 100,000 possible books.
  • Displaying 100 books out of 100,000 possible books, sorted by average votes
    (timings are how long the web request took to finish, as measured by Chrome on my development PC).

The four stages on the graph are:

  1. Part 1: Working with EF Core built queries
    1. I get my LINQ code into a form that EF Core can turn into good SQL
    2. I performance tune the existing EF Core query by adding a DbFunction.
  2. Part 2: I build a SQL query by copying EF Core’s SQL and ‘tweaking’ one bit to get some extra performance
  3. Part 3: I add cached values to the database that hold hard-to-calculate values, such as the average votes for a book.

You can try the final, Part 3, “tuned up” example book selling site at http://efcoreinaction.com/.

Note: The site www.efcoreinaction.com isn’t running on some massively powerful cloud system, but on a shared hosting site that costs £3.50/month (see Web Wiz, who are a great hosting site for .NET – they have always been early supporters of new .NET features).


Setting the scene – my example application

Right from the second chapter of my book I introduce an example book selling site, and as the book progresses I add more features to the application. Admittedly the site only has a few features, but I purposely included some sorting and filtering features that can be a challenge when the site has a lot of books to show. The diagram below shows you the features of the site.

What makes the book list a challenge?

I tried to create an example application which was at least credible, and included some complicated queries. The diagram below shows you the listing of a book, and some information on how easy or complex it is to get that data from the database.

On top of that, the sorting, filtering and paging commands add another level of complexity. I came up with four sort and filter tests that got progressively more difficult, each done with a page size of 10 and 100 to see how much showing more books affected performance. Here are the four tests.

  1. Easy – just show the page, with default everything.
  2. Moderate – sort the books by their publication date.
  3. Hard – sort by average votes, which is calculated from all the reviews linked to a book.
  4. Double hard – filter by average votes and then sort by price, both of which are calculated values.

Note: I did all my testing on my development PC using the ASP.NET Core example book selling site running in debug mode. The timings are from Chrome, and are the total time it takes for the page to return (including downloading the content). For comparison, accessing the About page, which has no database accesses in it, typically takes 11 ms to load, so the database accesses are always going to be slower than that.


My four steps in performance tuning

I went through four iterations of tuning of the application, and I will cover each one in this article. The stages are:

  1. Tuning up my EF Core query
    1. Making sure EF Core could translate all parts of my LINQ query to SQL
    2. Tweaking my query by adding a user defined function
  2. Going SQL – moving the query to Dapper
  3. Modifying the database – adding cached values (this is running on efcoreinaction.com)

Part 1a – Getting my query to translate into SQL

I started this performance tuning process by looking at what EF Core can do with well written code. Throughout my book, I have recommended the LINQ Select query, as this creates the most efficient database access. The Select query works well for the book display as it needs to get data from other tables, such as the Reviews, and it doesn’t need some of the columns in the Books table, such as the book’s Description column, which could be very long. Here is my Select query object.

public static IQueryable<BookListDto>           
    MapBookToDto(this IQueryable<Book> books)   
{
    return books.Select(p => new BookListDto
    {
        BookId = p.BookId,                      
        Title = p.Title,                        
        Price = p.Price,                        
        PublishedOn = p.PublishedOn,            
        ActualPrice = p.Promotion == null       
                ? p.Price                       
                : p.Promotion.NewPrice,         
        PromotionPromotionalText =              
                p.Promotion == null             
                  ? null                        
                  : p.Promotion.PromotionalText,
        AuthorNamesOrdered = p.AuthorsLink
                .OrderBy(q => q.Order)
                .Select(q => q.Author.Name),
        ReviewsCount = p.Reviews.Count,        
        ReviewsAverageVotes =                  
            p.Reviews.Select(y => (double?)y.NumStars).Average() 
    });
}

Note the calculation at the end of the MapBookToDto code, which calculates the average votes from all the reviews linked to this book. It took me a bit of time to get the LINQ command in the right format – I finally solved it by asking a question on the EF Core issues page. But the result is that the average is done in the database using the SQL command AVG. This was critical in getting the performance I needed, not only in calculating the value to show in each book’s info, but also in making the “sort by average votes” as quick as it was.

Here are the results for 100,000 books (same data as on the www.efcoreinaction.com site).

Test                              | 10 books | 100 books | Comment
All books                         | 45 ms    | 230 ms    | Sorts on primary key
Sort by publication dates         | 45 ms    | 230 ms    | Sorts on PublishedOn, with index
Sort by votes                     | 500 ms   | 650 ms    | Must average ½ million reviews!
Sort by price, filter by 4+ votes | 450 ms   | 600 ms    | Fewer books, so quicker

The “sort on votes” performance, at 500 ms, sounds slow, but it was a lot quicker than I expected. EF Core produced the ideal SQL, using the SQL AVG command, but it had to be called 100,000 times, and had a total of over ½ million Reviews to process. For some applications, this level of performance would be acceptable, but for this example I deem it too slow, and I will be looking again at the “sort on votes” in Part 2 and Part 3.

The one item that did seem very slow was the display of 100 books. I analysed that and, because the author names came back as a collection, EF Core ran a separate query for each book – so, instead of 2 database accesses it had 12 database accesses for 10 books, and 102 database accesses for the 100 books.

I know that every separate database access costs, so I wanted to remove those extra database accesses. And because I knew something that EF Core didn’t know, I could fix it, as you will see in the next part.

Part 1b – tweak the query with the DbFunction feature

So, in my first performance tweak, I wanted to see if I could keep all the good SQL that EF Core produced, but inject a bit of my own SQL to improve the performance of the AuthorNamesOrdered part. What I knew was that I only wanted the names so that I could concatenate them into a comma-delimited string, and I could do that in SQL very easily.

EF Core 2.0.0, released in August 2017, has a great new feature contributed by Paul Middleton. This allows you to register a SQL user defined function (UDF) with EF Core and then use it in a query. This is a really useful tool, and I expect to be using DbFunctions a lot in the future.

So, with help from Stack Overflow I produced a UDF, as shown below.

CREATE FUNCTION AuthorsStringUdf (@bookId int)
RETURNS NVARCHAR(4000)
AS
BEGIN
-- Thanks to https://stackoverflow.com/a/194887/1434764
DECLARE @Names AS NVARCHAR(4000)
SELECT @Names = COALESCE(@Names + ', ', '') + a.Name
FROM Authors AS a, Books AS b, BookAuthor AS ba 
WHERE ba.BookId = @bookId
      AND ba.AuthorId = a.AuthorId 
      AND ba.BookId = b.BookId
ORDER BY ba.[Order]
RETURN @Names
END
GO

I could then replace the extraction of a list of authors’ names with a call to my UDF, as shown below.

public static IQueryable<BookListDto>            
    MapBookToDto(this IQueryable<Book> books)    
{
    return books.Select(p => new BookListDto
    {
        //… other code removed for clarity
        AuthorsOrdered = UdfDefinitions.AuthorsStringUdf(p.BookId),
        //… etc.
    });
}

The UDF returns a single string, so there is no need for the extra database accesses. After this change, there were only two database accesses – a count for paging and then the real query – no matter how many books are in a page.

Note: I have not covered how to register the UDF with EF Core, nor the best way to add the UDF to the EF Core created database. There just isn’t room in this article. There is some information on this in this GitHub documentation. I have more information on DbFunctions in chapter 8 of my book, and on how to add UDFs and other SQL code to an EF Core database in chapter 11.
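
For readers who want a starting point, the sketch below shows the general shape of a DbFunction registration in EF Core 2.0. The static class name is my own and the book’s actual registration code may differ, but the attribute and the HasDbFunction call are the standard EF Core 2.0 mechanism.

public static class UdfDefinitions
{
    //The method signature mirrors the SQL UDF; EF Core replaces calls to it in a
    //LINQ query with a call to [dbo].AuthorsStringUdf in the SQL it generates
    [DbFunction("AuthorsStringUdf", "dbo")]
    public static string AuthorsStringUdf(int bookId)
        => throw new NotSupportedException("Only for use in LINQ queries.");
}

//Inside the application's DbContext
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.HasDbFunction(
        typeof(UdfDefinitions).GetMethod(nameof(UdfDefinitions.AuthorsStringUdf)));
}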

Before I show you the improved performance figures, let me first show you the new SQL that EF Core produces. EF Core now produces very readable SQL, almost like how a human would lay it out (I made a couple of minor changes to the layout to make the diagram work, but it is mostly as EF Core created it).

The only change in the SQL from the Part 1a version is the call to the UDF, but the effect on performance, especially on the 100-book display, is great. See the figures below.

Here are the results for using the 100,000 book test data, with the ‘Improvement’ columns showing how much faster the Part 1b implementation is compared to Part 1a’s performance.

Test                              | 10 books | Improvement | 100 books | Improvement
All books                         | 30 ms    | 150%        | 80 ms     | 250%
Sort by publication dates         | 30 ms    | 150%        | 80 ms     | 250%
Sort by votes                     | 480 ms   | minor       | 530 ms    | minor
Sort by price, filter by 4+ votes | 420 ms   | minor       | 470 ms    | minor

Part 2 – converting the query to hand-written SQL

When I studied the SQL from Part 1b, I noticed that EF Core repeated any calculation in the SQL ORDER BY section, rather than referring to the SELECT’s alias name for the calculated value. I did some tests, and they showed that using the SELECT’s alias name, ReviewsAverageVotes, which holds the calculated average vote, in the ORDER BY section would improve performance.

Based on this I decided to replace the EF Core query with a fully hand-coded SQL command. That meant I had to write the SQL code, but because EF Core’s SQL is so good (and readable), I just copied it (!), and replaced the one area, the ORDER BY part, with some tweaked SQL as shown below – see the ReviewsAverageVotes value used in the ORDER BY at the end of the SQL.

SELECT TOP(@pageSize) [b].[BookId], [b].[Title], 
     [b].[Price], [b].[PublishedOn], 
CASE
    WHEN [p.Promotion].[PriceOfferId] IS NULL
    THEN [b].[Price] ELSE [p.Promotion].[NewPrice]
END AS [ActualPrice], 
[p.Promotion].[PromotionalText] AS [PromotionPromotionalText], 
[dbo].AuthorsStringUdf([b].[BookId]) AS [AuthorsOrdered], 
(
    SELECT COUNT(*)
    FROM [Review] AS [r]
    WHERE [b].[BookId] = [r].[BookId]
) AS [ReviewsCount], (
    SELECT AVG(CAST([y].[NumStars] AS float))
    FROM [Review] AS [y]
    WHERE [b].[BookId] = [y].[BookId]
) AS [ReviewsAverageVotes]
FROM [Books] AS [b]
LEFT JOIN [PriceOffers] AS [p.Promotion] 
    ON [b].[BookId] = [p.Promotion].[BookId]
WHERE ([b].[SoftDeleted] = 0) 
ORDER BY [ReviewsAverageVotes] DESC

The next problem was how I would execute the SQL. I could have used ADO.NET, but instead I used the well-respected mini-OR/M called Dapper. Dapper has a feature that maps the data returned from the SQL to a .NET class by matching the class property names to the column names returned. EF Core also has a feature that can provide the correct type of connection for Dapper to access the database, which makes using Dapper pretty easy.

The unit test code below shows how to use Dapper with an EF Core application’s DbContext – the Dapper part is the line “Query<Book>(“SELECT * FROM Books”)” in the code below.

[Fact]
public void DapperReadBooksEfCoreSqlServer()
{
    //SETUP
    //… I removed the options and database setup 
    //so that it’s easier to see the Dapper part 
    using (var context = new EfCoreContext(options))
    {
        //ATTEMPT
        var books = context.Database 
            .GetDbConnection() 
            .Query<Book>("SELECT * FROM Books"); 

        //VERIFY
        books.Count().ShouldEqual(4);
    }
}

The book list query I implemented in chapter 2 was split up into four separate query objects: MapBookToDto, OrderBooksBy, FilterBooksBy, and a generic paging method. That meant each part was easy to understand and could be tested separately. Duplicating this query in Dapper required me to sacrifice the elegance of query objects and move to a design that combines snippets of SQL to create the query. But that’s what I needed to do to squeeze out the best performance.

The results of swapping over to this improved SQL are shown in the table below, with the big change being in the “sort by average votes” row.

Test scenario                                      | Part 1b – EF Core         | Part 2 – SQL              | Comment
Show books                                         | 10 = 30 ms, 100 = 80 ms   | 10 = 30 ms, 100 = 85 ms   | No real change – within normal variations
Sort on PublishedOn                                | 10 = 30 ms, 100 = 80 ms   | 10 = 30 ms, 100 = 90 ms   | No real change – within normal variations
Sort by average votes                              | 10 = 500 ms, 100 = 530 ms | 10 = 325 ms, 100 = 390 ms | The improved SQL is about 40% faster than EF Core
Filter by average votes (4 or more), sort by price | 10 = 440 ms, 100 = 490 ms | 10 = 455 ms, 100 = 520 ms | No real change – within normal variations

As you can see, the only part that improved was the “sort by average votes” case. What I found interesting is that none of the other, more mundane, queries improved. I kind of expected them to, because Dapper’s site states it’s ten times faster or more than EF (most likely EF6.x), which didn’t show up in these results.

I therefore tried a simple “read one book row” test, and yes, in that case Dapper is about seven times faster than EF Core. That makes sense, as EF Core has several things to do on loading, like relational fixup, while Dapper just executes the SQL. But it turns out that when it comes to large or complex queries, where the database execution is the limiting factor, the performance is down to the quality of the SQL produced, and not the library you use. The fact is, small accesses aren’t normally the ones that need performance tuning, as they are quick anyway; using Dapper rather than EF Core only makes a difference if the SQL is different, which in the “sort by average votes” case it was.

But the question is, “was all that effort to swap to SQL worth it?” Before you decide, you should read Part 3, where I use another approach that takes more work, but improves performance much more than my hand-tuned SQL does.

Part 3 –  Modifying the database to increase performance

For the final stage I decided to try modifying the database and adding cached values for the calculations that take a lot of time. These are:

  1. The Price, which varies depending on whether there is a PriceOffer promotion linked to the book.
  2. The average votes and number of reviews for this book
  3. The AuthorsString, containing the comma-delimited list of authors of a book.

Caching is a great tool for improving performance, but ensuring the cached value is up to date needs very careful handling. This is especially true in a database, where you may have simultaneous updates of the same row, which could make the cached value incorrect.

In chapter 13 of my book I cover how I handled the three types of caching, which need different techniques. But for this article I’m going to focus on the most difficult of the three, that is, the caching of the average votes and number of reviews for a book. This is difficult because I still need to keep the actual Review entities so that users can look at the Review comments on a book. This means I have two versions of the average votes value: one that is found by averaging the reviews dynamically, as I did in Parts 1a to 2, and the cached value held in the AverageVotes property of my modified Book entity class. Keeping these in step is a challenge.

To keep my cached values up to date I need to make some fairly large changes to how things work, as summarised below:

  1. I still need the Reviews property, which sets up the one-to-many relationship between a Book entity and its Review entities. But I must ensure that Review entities cannot be added or removed via the Reviews property, that is, all adds or removes must go through my AddReview and RemoveReview methods. I also need to “lock down” the Review entity class by making its properties have private setters, and stop any code outside of the DataLayer from being able to create a Review entity instance.
  2. Because the AverageVotes and ReviewsCount properties are cached values, there is the possibility of a concurrent add or remove of a Review, which could invalidate the cached values.

I will deal with each of these two stages separately

Lock down access to Reviews collection property

EF Core allows navigational properties to be set up, which EF Core will use when doing certain reads and writes to the database. My initial Book entity class had a property called Reviews that contained a list of Review entities for this book instance. I would then add or remove a Review instance from the list to change which reviews were linked to the Book.

For my caching of the values related to the Reviews to be kept up to date, I need to intercept all changes to the Book’s Reviews. To do that I use a feature in EF Core called backing fields. This allows me to ‘hide’ the Reviews list, and provide an IEnumerable<Review> Reviews property, which a developer cannot add to or remove from. Here is what the updated Book entity class looks like:

public class Book
{
    private readonly List<Review> _reviews 
        = new List<Review>(); 

    //… other properties removed for clarity

    public IEnumerable<Review> Reviews => 
         _reviews.ToList();
…
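
Depending on how the backing field is discovered, EF Core may also need to be told explicitly to read and write the Reviews navigation through that field. Below is a minimal sketch of the standard EF Core 2.x Fluent API configuration for that; the book’s exact configuration may differ.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    //Tell EF Core to load and save the Reviews navigation via the _reviews field
    modelBuilder.Entity<Book>()
        .Metadata
        .FindNavigation(nameof(Book.Reviews))
        .SetPropertyAccessMode(PropertyAccessMode.Field);
}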

Now I can add two methods, AddReview and RemoveReview, to the Book class that update its backing field _reviews and at the same time recalculate the AverageVotes and ReviewsCount cache properties.
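
Here is a sketch of what the AddReview method looks like. The Review constructor parameters are illustrative, and it assumes the existing reviews have already been loaded into the backing field; the real code in the book (and its Git repo) has a little more to it.

public void AddReview(int numStars, string comment, string voterName)
{
    var review = new Review(numStars, comment, voterName); //internal ctor – DataLayer only
    _reviews.Add(review);

    //recalculate the cached values from the in-memory collection
    ReviewsCount = _reviews.Count;
    AverageVotes = _reviews.Average(r => r.NumStars);
}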

The other back door to changing the Reviews is if the developer loaded a Review entity and altered its values, say changing the review’s vote from 1 to 5. To stop the developer from doing that, I set all the Review class property setters to private, and I make the constructor internal, so only my DataLayer can create a new instance of the Review class.

The figure below shows the backing field in the Book class and its AddReview method in blue, with the “locked down” Review class in purple.

Handling simultaneous additions of Reviews to the same book

The other problem is multiple users adding Reviews at the same time. There is a small window of time between the loading of the current Reviews linked to a Book and the saving of the new review collection and the associated cached values. In that time window, another Review could be added/removed from the same book by another user, which would make the cached values out of step with the database.

The solution to this problem is to use EF Core’s concurrency conflict feature. What I do is add the attribute [ConcurrencyCheck] to the AverageVotes and ReviewsCount properties. This means that EF Core will check that they haven’t changed since the Book instance was loaded, which is the window where other updates could make my calculations invalid. If they have been changed, then I can write some code that recalculates the cached values using the current state of the database. I’m not going to show the code for that because the article is already very long. You can find my concurrency code here.
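
To make that concrete, the sketch below shows the attribute on the two cached properties and the exception that signals a conflict. The actual conflict-handling code (recalculating the values from the database) is in the linked repo and isn’t shown here.

public class Book
{
    //… other properties removed for clarity

    [ConcurrencyCheck]                              //System.ComponentModel.DataAnnotations
    public int ReviewsCount { get; private set; }

    [ConcurrencyCheck]
    public double? AverageVotes { get; private set; }
}

//When SaveChanges detects that another user changed these values first, EF Core
//throws a DbUpdateConcurrencyException, which is where the recalculation code takes over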

The performance results from Part 3 – cached values

The results from all this are impressive, as you can see in the table below.

Test                              | 10 books | Improvement over Part 2 (SQL) | 100 books | Improvement over Part 2 (SQL)
All books                         | 30 ms    | no improvement                | 80 ms     | no improvement
Sort by publication dates         | 30 ms    | no improvement                | 80 ms     | no improvement
Sort by votes                     | 30 ms    | 12 times better               | 80 ms     | 5 times better
Sort by price, filter by 4+ votes | 30 ms    | 12 times better               | 80 ms     | 5 times better

Extra notes on performance:

  • I tried ½ million books: the 10-book time was 85 ms and the 100-book time was 150 ms. For this number of books the database server is starting to be a limiting factor.
  • In Parts 1a, 1b and 2, the “sort by price, filter by 1+ votes” timings were all over a second in length. But this Part 3 implementation still comes in at 30 ms.
  • I took the SQL produced by EF Core in Part 3 and used Dapper to execute it, but there was no further performance improvement so it wasn’t worth doing.

Obviously, the standout improvements are the last two items, which were the ones I was targeting. You can see this code in action, with 100,000 books, at http://efcoreinaction.com/

Summary of the whole performance tuning work

Below is a repeat of the chart I had at the beginning, which shows the results for Parts 1a to 3 for the 100-book display and the 100-book display with sort on average votes.

However, what this graph doesn’t convey is that the Part 3 performance improvements make all the 10-book displays, with any sort or filter, come in at around 30 ms. As the 10-book display is the default, the site is now really fast. Also, the 30 ms is the time for the ASP.NET page to display, which includes other overheads outside the database access – in actual fact, the database accesses take between 10 ms and 15 ms.

The other questions I asked myself were: how much effort did each stage take, and were there any problems or adverse effects on the application? The table below gives my answers to these questions.

Part                 | Effort     | Comments
1a. Straight EF Core | Few days   | Finding the correct format for the LINQ average wasn’t easy. I had to ask the EF Core team via EF Core’s GitHub issue page. But once I got this right EF Core produced great SQL code.
1b. +DbFunction      | ½ day      | Very easy. I like the DbFunction a lot, and in this case, it made a significant improvement to the performance.
2. Better SQL        | One day    | Fairly easy, as I simply copied the SQL code that EF Core produces and tweaked the bit that my experiments said would help. But only useful if you can come up with some better SQL, i.e. the difference in speed of Dapper over EF Core only helps on small, very quick database accesses, which you don’t need to performance tune anyway!
3. Cached values     | Three days | Definitely hard work, but also a fantastic result. I think the performance changes to the code don’t hinder future development of the application at all, and in some cases improve it.

Overall I am very pleased with the process. EF Core produces great SQL code from the get-go, but only because I wrote my LINQ queries in a way that EF Core could translate every part into SQL. I had to persevere with the LINQ Average method to get EF Core to translate that to SQL.

Note: I think plenty of developers will fall foul of this problem and think EF Core is slow – but before you blame EF Core, check your logs for QueryClientEvaluationWarning warnings, which tell you that part of your LINQ query didn’t translate to SQL and is being run in software. See the section “Understanding the limitations of Client vs. Server evaluation” in my article about EF Core’s Client vs. Server evaluation feature.

But when I had to go beyond straight EF Core I found there was plenty I could do. And I didn’t go as far as I wanted to due to time and limits on the chapter length – my next step was to change the application’s architecture (I give a few ideas at the end of chapter 13).

Conclusion

I took a query that was designed around a fictitious, but believable, example, which wasn’t a walkover to implement. I got the best out of EF Core, and I think that performance was quite credible. I then pushed through two more stages, ending with a complex, but eminently implementable, solution where the slow parts of the query were between eight and fifteen times faster than where I started. But more importantly, all the queries for a 10-book display come in at 30 ms.

You most likely selected EF Core for your database access library because it allowed you to develop your application rapidly (good choice!). That means you should have a working application quickly, and you can then performance tune the parts that need it.

When you start performance tuning I would recommend persevering with conventional EF Core (see my note about QueryClientEvaluationWarning). But at some point you might decide that the standard EF Core approach won’t produce the performance you need. That’s when this article could be useful to you, so bookmark this URL for that day (or buy my book 🙂 ).


If you are interested in .NET in general I recommend the ASP.NET weekly newsletter, curated by Jerrie Pelser. This contains a list of articles on ASP.NET, Entity Framework and .NET in general (mostly .NET Core versions)  – sign up at https://www.getrevue.co/profile/aspnetweekly

Entity Framework Core: Client vs. Server evaluation

Entity Framework Core (EF Core) is a new, ground-up rewrite of the highly successful Entity Framework 6 (EF6.x) database access library for the .NET platform. I have been commissioned by Manning Publications to write a book on EF Core called Entity Framework Core in Action. In writing this book I have looked at any and all aspects of EF Core, and I am really impressed with the topic I am going to cover in this article, that is, Client vs. Server evaluation.

Client vs. Server evaluation is a feature that makes it easier to write queries to the database, because it allows you to include methods in your query that cannot be run on the database, that is, methods that EF Core cannot convert to SQL commands.

In my Entity Framework Core in Action book I use a database containing various technical books as an example (it seemed appropriate) – you can see a live site at http://efcoreinaction.com/. The diagram below shows you just the Book table with a many-to-many link to each Book’s Author(s).

Now I want to list a book, with its primary key, title and the Author(s) as a comma delimited string. The code below will do this; in this case I take the first book in the database:

var book = context.Books
    .Select(p => new
    {
        p.BookId,
        p.Title,
        AuthorsString = string.Join(", ",
            p.AuthorsLink
            .OrderBy(q => q.Order)
            .Select(q => q.Author.Name)),
    }
    ).First();

A typical output would be:

BookId | Title           | AuthorsString
1      | Beginning Scala | Vishal Layka, David Pollak

What happens is that EF Core converts the query into SQL to run on the database, but one method, string.Join, isn’t supported as a SQL command by EF Core’s SQL database provider, and at this point Client vs. Server evaluation comes in.

What happens is that EF Core can sort the commands into two groups:

  1. Commands that can be converted into SQL commands and run on the database server
  2. Commands that cannot be converted into SQL, so therefore must be run in software in the application, known as the client.

The figure below gives you a better view of this process. This shows the effect of running the EF Core query I just listed on a book that has two authors, Jack and Jill. As you can see, the AuthorsString property will be set to “Jack, Jill”, while the BookId and Title will be set to the values of the corresponding columns in the Books table.

Stage 1, the translation of the LINQ query, is where (with the help of the database provider) EF Core splits up the LINQ commands and other methods into either “run on database” (Server) or “run in software” (Client).

Stage 2 is where the SQL commands are run on the database server and the values are read back. The SQL that EF Core creates is given below.

SELECT TOP(1) [p].[BookId], [p].[Title]
FROM [Books] AS [p]

SELECT [q.Author].[Name]
FROM [BookAuthor] AS [q]
INNER JOIN [Authors] AS [q.Author] ON 
    [q].[AuthorId] = [q.Author].[AuthorId]
WHERE @_outer_BookId = [q].[BookId]
ORDER BY [q].[Order]

Note that EF Core always reads collections using a separate SQL command (EF6.x would have output one command to read it all, but that can be inefficient in cases where the collection is large).

Stage 3 is run because EF Core detected commands that had to be run on the Client. It gets the data it needs from the SQL data and runs the software-only methods on the Client.

In stage 4 you see that EF Core combines the Server data and the Client data and provides the composite data back to the caller.

The Client vs. Server evaluation feature gives you as a developer the ability to create complex queries, and EF Core will optimize the query to run as much it can on the database Server. But if there is some method in your query that cannot be run on the database server then the query won’t fail, but EF Core will apply that method after SQL Server has done its part.

The example I showed is simple, but the possibilities are endless. But there are a few things to watch out for.

What about Entity Framework 6.x (EF6.x)?

The Client vs. Server evaluation feature is new in EF Core, and is not available in EF6.x. This means that in EF6.x I have to add extra stages to my query if I want to use methods such as string.Join, which makes it harder work in those cases. In EF Core this obviously gets easier.
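
To show what I mean by an extra stage, here is a sketch of the usual EF6.x workaround: bring the author names back as a collection, then switch to client-side LINQ with AsEnumerable and apply string.Join in memory. This is the general pattern rather than code taken from my book.

var book = context.Books
    .Select(p => new
    {
        p.BookId,
        p.Title,
        AuthorNames = p.AuthorsLink            //runs on the database server
            .OrderBy(q => q.Order)
            .Select(q => q.Author.Name)
    })
    .AsEnumerable()                            //switch to client-side evaluation
    .Select(x => new
    {
        x.BookId,
        x.Title,
        AuthorsString = string.Join(", ", x.AuthorNames)  //runs in the client
    })
    .First();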

If you are still on EF6.x then I can recommend Alexander Zaytsev’s DelegateDecompiler (DD for short), which provides a similar feature (see this article, where I use DD). While I have used DD, it does require me to add a command at the end of every query. Also, DD is not (currently) available on .NET Core.

Understanding the limitations of Client vs. Server evaluation

I think the Client vs. Server evaluation feature is a very useful addition to EF. But, like all powerful features, it is best to understand what is going on so that you can use it in the right way.

Firstly, the obvious thing is the method you provide is run on every entity (row) you read from the database. If you have 10,000 rows in the database and don’t filter/limit what is loaded then, as well as a SQL command that takes a long time, your processor will spend a long time running your method 10,000 times.
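
For example, making sure the filtering and paging happen in the part EF Core can translate to SQL keeps the client-side work small; in the sketch below only the ten selected rows ever reach the string.Join call.

var books = context.Books
    .OrderByDescending(p => p.PublishedOn)     //runs on the database server
    .Take(10)                                  //only 10 rows are returned
    .Select(p => new
    {
        p.BookId,
        p.Title,
        AuthorsString = string.Join(", ",      //runs in the client, 10 times only
            p.AuthorsLink
                .OrderBy(q => q.Order)
                .Select(q => q.Author.Name)),
    })
    .ToList();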

The second point is subtler: The Client vs. Server evaluation feature blurs the lines between what is run in the database and what is run in the client. It would be possible to create a query that worked, but was slower than it could be because it had to use client-side evaluation. To give you some context, in EF6.x this form of mixed client/server query would have failed because it did not support that. That meant in EF6.x you had to do something about it – often by changing the query to better suit the database. Now your query may work, but could perform worse than one you write such that EF Core can convert it directly to SQL commands.

One extreme example of the problem is that Client vs. Server evaluation allows you to sort on a client-side evaluated property, which means the sorting is done in the client rather than in the database server. I tried this by replacing the .First() command with .OrderBy(p => p.AuthorsString) in listing 2.9 and returning a list of books. In that case EF Core produces SQL code that reads all the books, then reads each row individually, twice, which is definitely not optimal.

Tips on performance tuning Client vs. Server evaluation

EF Core’s Client vs. Server evaluation feature has some very useful safeguards you can use to make sure you aren’t producing poorly performing queries.

Firstly, EF Core will log a warning the first time a Client vs. Server evaluation will adversely affect performance, and this logging is intelligent. In the case of my book query that used string.Join, no warning is given because the string.Join does not affect the performance. However, if I add the .OrderBy(p => p.AuthorsString) command then I get a warning.

Secondly, if you want to stop any poorly performing Client vs. Server evaluation queries from happening, you can configure EF Core to throw an exception if a poorly performing query is found. See the code below:

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
       @"Server=(localdb)\mssqllocaldb;Database=MyDatabase;Trusted_Connection=True;")
       .ConfigureWarnings(warnings =>
            warnings.Throw(RelationalEventId.QueryClientEvaluationWarning));
}

Note: The Microsoft documentation for Client vs. Server evaluation can be found at https://docs.microsoft.com/en-us/ef/core/querying/client-eval

Conclusion

EF Core has a load of nice features and, at the moment, misses some EF6.x features that I rather liked. But I think Client vs. Server evaluation is one of those great features that will help me to build queries quickly, and often at no reduction in performance. For that I am very grateful.

Happy coding

More information on my book, Entity Framework Core in Action

Quite a bit is going on around the book I am writing, called Entity Framework Core in Action, now that it is on early release. In this short article I want to point you to some resources you might like to look at, and also give you a bit more on the planned content of the book.

Resources available for the book

The following resources are available:

  • A slideshare presentation giving you an overview of what’s in the book.
    IMPORTANT: discount code in the slideshare presentation! See first slide.
  • The book page, with links to the first chapter, which is free.
  • The GitHub repo with all the code that goes with the book.
    NOTE: I use a branch-per-chapter, so you need to select a branch to see the code.
  • The live site, at http://efcoreinaction.com/ where you can see the example book selling site that I build in chapters 1 to 5, and improve in later chapters.

Note: You can buy an early-access version of my book. You will get the chapters as I write them, and the full book when it is released. This means you have early access to the information, and you get the final book with any updates I make to the chapters before it is released.

Table of Content for the book

Here is the current table of contents for my book. It is quite possible that a couple of chapters in parts 2 and 3 might get too big and be split into two.

Part 1:  Introduction to Entity Framework Core

By the end of part 1 you should be able to build a .NET application that uses a relational database, but how the database is organized is left to EF Core; for instance, EF Core’s default configuration will set the type and size of the database columns, which can be a bit wasteful on space. I also look at building and deploying a web application that uses EF Core to handle its database.

The chapters are:

  1. Introduction to Entity Framework Core
  2. Querying the database
  3. Updating the database: Create, Update & Delete
  4. Using EF Core in Business Logic
  5. Using EF Core in ASP.NET Core web applications

Part 2:  Entity Framework Core In Depth

Part 2 is more of an EF Core reference section. By the end of part 2 you will be able to create a database with the exact type and size of each column, or link to an existing database which has a specific schema. I go deeper into the commands in EF Core’s DbContext and look at different ways of migrating a database.

The chapters are:

  1. Modelling options for scalar properties
  2. Modelling options for relationships
  3. Going deeper into the DbContext
  4. Handling database migrations

Part 3: Using Entity Framework in real-world applications

Part 3 is all about improving your skills and making you a better developer, and debugger, of EF Core applications. I look at real-world applications of EF Core, starting with a range of known patterns and practices that you can use with EF Core, with chapters on unit testing EF Core applications, extending EF Core, and, most importantly, how to find and fix EF Core performance issues.

The chapters are:

  1. Patterns for using EF Core
  2. Unit Testing EF Core applications
  3. EF Core performance Tuning
  4. Extending EF Core

Appendices

A brief introduction to LINQ

Conclusion

A pretty short and sweet post, but I hope it’s helpful. My next post will be much bigger.

Happy coding!

 

Updating many to many relationships in Entity Framework Core

I wrote an article called Updating many to many relationships in entity framework back in 2014, which is still proving to be popular in 2017. To celebrate the release of my book Entity Framework Core in Action, I am producing an updated version of that article, but for Entity Framework Core (EF Core).

All the information and the code come from chapter 2 of my book. In the book I use a book selling site, with books linked to authors via a many-to-many relationship. You can see the live example site, and if you scroll down you will see books with multiple authors.

All the unit tests and classes I use in this article can be found in the Chapter03 branch of the Git repo associated with this book.

NOTE: If you are in a hurry I have added summaries for each section – just search for the word Summary to get the answers without needing to look at the code examples.

Creating a many-to-many relationship in EF Core

The example I am going to use is one taken from the book I am writing. It has a Book entity linked to its Author(s) entities via a BookAuthor table. The database diagram below shows the three tables, with the foreign key links from the BookAuthor linking table to the other two tables.

A linking table like this has foreign keys (FK in the diagram) which link to the primary keys (PK in the diagram) of each end of the relationship – in this case the BookId of the Book entity and the AuthorId of the Author entity. These foreign keys then form a composite primary key for the BookAuthor table.

EF6.x created this table for you when you defined a many-to-many relationship, but EF Core, which takes a leaner approach, doesn’t – you need to create it yourself. It’s not hard, so let me give you the code that defines the BookAuthor entity class.

public class BookAuthor               
{
    public int BookId { get; set; }   
    public int AuthorId { get; set; } 
    public byte Order { get; set; }   

    //-----------------------------
    //Relationships

    public Book Book { get; set; }    
    public Author Author { get; set; }
}

NOTE: I have a property called Order because the order of the authors matters. If the order of items in your many-to-many list doesn’t matter then you can leave that out.

EF Core will find the relationships using its by convention rules because I have used names that it understands:

  • I used the same names, BookId and AuthorId, as the primary keys in the Book class and the Author class.
  • Because I used the Book and Author classes, which EF Core knows are part of the database, it knows these are relationships to those entities.

Now, the one thing that EF Core can’t work out is the primary key of the BookAuthor table, because it doesn’t follow the normal convention. There are a number of ways to define the primary key(s) of an entity, but here I have used EF Core’s Fluent API, which I access via the OnModelCreating method inside my DbContext, as shown below:

public class EfCoreContext : DbContext
{
    public DbSet<Book> Books { get; set; }            
    public DbSet<Author> Authors { get; set; } 
    public DbSet<PriceOffer> PriceOffers { get; set; }       

    public EfCoreContext(                             
        DbContextOptions<EfCoreContext> options)      
        : base(options) {}

    protected override void
        OnModelCreating(ModelBuilder modelBuilder)    
    {                                                 
        modelBuilder.Entity<BookAuthor>()             
            .HasKey(x => new {x.BookId, x.AuthorId});
    }                                                 
}

You can see the .HasKey method towards the bottom of the code where I define the composite key consisting of the BookId and the AuthorId.

The easiest approach to setting up relationships is by using EF Core’s conventions, which is what I have done. But if you want to explicitly define the relationships you can, using the .HasOne/.HasMany Fluent API commands, as shown below.

Note: There are loads of configuration rules, which I cover in chapters 6 and 7 of my book, or you can look at the EF Core docs.

//NOTE: I only show the OnModelCreating part of the DbContext
protected override void
    OnModelCreating(ModelBuilder modelBuilder) 
{
    modelBuilder.Entity<BookAuthor>() 
       .HasKey(x => new {x.BookId, x.AuthorId});

    //If you name your foreign keys correctly, then you don't need this.
    modelBuilder.Entity<BookAuthor>() 
        .HasOne(pt => pt.Book)
        .WithMany(p => p.AuthorsLink)
        .HasForeignKey(pt => pt.BookId);

    modelBuilder.Entity<BookAuthor>() 
        .HasOne(pt => pt.Author) 
        .WithMany(t => t.BooksLink)
        .HasForeignKey(pt => pt.AuthorId);
}

NOTE: A single primary key can be set via the [Key] attribute, but a composite key like this one can only be configured via the Fluent API – which I find quite clean and easy to understand anyway.

Summary – how to add a many-to-many relationship

To add a many-to-many relationship in EF Core you need to:

  1. Create a linking entity class that has the foreign key(s) of the two entities you want to form a many-to-many link between. In my example these are BookId and AuthorId.
  2. It is efficient (but not necessary) to make these two foreign keys into the composite primary key. This will ensure that there is only ever one link between the two entities you want. It is also more efficient in terms of size of the database table.
    1. If you do want the two foreign keys to be the composite primary key you need to configure this in EF Core via the Fluent API, because it cannot find them using its normal “by convention” method.
    2. If you use properties for the foreign keys that don’t have the same name as the primary key they link to, then you must manually configure the relationship. There are a number of ways to do that, but EF Core’s Fluent API is quick and fairly straightforward to understand.
  3. The linking class can, but doesn’t have to, have relationship links to the two entities it links. In my example my BookAuthor class has a property Book of type Book, and a property Author of type Author.
  4. You most likely want a collection relationship to the linking table in one or both of the linked entities. In my example I have a property called AuthorsLink of type ICollection<BookAuthor> in my Book class, and a property called BooksLink of type ICollection<BookAuthor> in my Author class – see the sketch of these two classes below.
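
For reference, the Book and Author entity classes aren’t listed in this article. Based on the properties used here, a cut-down sketch of the two classes looks something like the code below – the real classes in the Git repo have more to them, and the exact property types (for instance Price being a decimal) are my assumption.

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime PublishedOn { get; set; }
    public decimal Price { get; set; }   //assumed type

    //-----------------------------
    //Relationships

    public ICollection<BookAuthor> AuthorsLink { get; set; }
}

public class Author
{
    public int AuthorId { get; set; }
    public string Name { get; set; }

    //-----------------------------
    //Relationships

    public ICollection<BookAuthor> BooksLink { get; set; }
}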

Adding a Book many-to-many Author link

Creating a new many-to-many link is fairly easy in EF Core (though not as easy as in EF6.x). The code below creates a new book with a new author.

Note: the variable context used at the bottom of the code below is an instance of the application’s DbContext, called EfCoreContext in this case (see its definition above).

var book = new Book
{
  Title = "Quantum Networking",
  Description = "faster-than-light data communications",
  PublishedOn = new DateTime(2057, 1, 1),
  Price = 220
};
var author = new Author { Name = "Future Person" };
book.AuthorsLink = new List<BookAuthor>
{
  new BookAuthor { 
    Author = author, 
    Book = book,
    Order = 0
  }
};

//Now add this book, with all its relationships, to the database
context.Books.Add( book);
context.SaveChanges();

Updating a many-to-many relationship

It turns out there are two scenarios under which you want to update a relationship. They are:

  1. Connected state: This is where the load of the data and the update are done in one go, that is, within one instance of the application’s DbContext. You find this sort of update in a console application, or inside your business logic (see this link on business logic and EF Core).
  2. Disconnected state: This is where the update is split into two halves: a) select the entities you want to change, and b) make the change. Each stage has a different instance of the application’s DbContext. This happens on web sites, where in the first stage the user picks what they want done and posts it back. The second stage then has to re-read the data and update it.

I am going to describe these two approaches separately, so you can go to the one that fits the application you are building.

1. Connected state update

In the connected state we read in the Book entity and update it immediately, i.e. within the same instance of the DbContext. For my Book/Author example I am going to read in the Book, with its linking entities, BookAuthor, as tracked entities – that means that EF Core takes a copy of the entities it reads in, so that it can spot any changes when you call SaveChanges.

The listing below is one of my unit tests which adds the existing author “Martin Fowler” to the book called “Quantum Networking”, which currently has one author called “Future Person”. After the test has finished the book “Quantum Networking” has two authors, “Future Person” and “Martin Fowler”.

NOTE: I am using a Sqlite in-memory database, which I seed with four books with known titles and authors.

[Fact]
public void TestAddExtraAuthorOk()
{
    //SETUP
    var inMemDb = new SqliteInMemory();

    using (var context = inMemDb.GetContextWithSetup())
    {
        context.SeedDatabaseFourBooks();

        //ATTEMPT
        var book = context.Books                          
            .Include(p => p.AuthorsLink)                  
            .Single(p => p.Title == "Quantum Networking");

        var newAuthor = context.Authors                   
            .Single(p => p.Name == "Martin Fowler");

        book.AuthorsLink.Add(new BookAuthor
        {
            Book = book,
            Author = newAuthor,
            Order = (byte) book.AuthorsLink.Count
        });
        context.SaveChanges();

        //VERIFY
        var bookAgain = context.Books
            .Include(p => p.AuthorsLink)
            .Single(p => p.BookId == book.BookId);
        bookAgain.AuthorsLink.ShouldNotBeNull();
        bookAgain.AuthorsLink.Count.ShouldEqual(2);
        var authorsInOrder = bookAgain.AuthorsLink.OrderBy(p => p.Order).ToList();
        authorsInOrder.First().Author.Name.ShouldEqual("Future Person");
        authorsInOrder.Last().Author.Name.ShouldEqual("Martin Fowler");
    }
}

The most important part of the code is where I load the Book entity, using an Include method to load its AuthorsLink collection at the same time, as tracked entities. The AuthorsLink property holds a collection of BookAuthor entities linking to the Author entities.

Note: The default way of loading data in EF Core is as tracked entities, that is, EF Core takes a copy of the loaded data so that it can detect if any changes have happened since they were loaded. You can turn off tracking by adding the .AsNoTracking method to your query (useful in read-only queries, as it makes them a little faster).
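
As a quick illustration of that (this is not code from the chapter, just the same book query rewritten as read-only):

//Read-only query: .AsNoTracking() stops EF Core taking a tracking copy of the
//loaded entities, so the query is a little faster, but any changes you make to
//the returned book will NOT be picked up by SaveChanges
var readOnlyBook = context.Books
    .AsNoTracking()
    .Include(p => p.AuthorsLink)
    .Single(p => p.Title == "Quantum Networking");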

I also load, again as a tracked entity, the Author entity whose name is “Martin Fowler”, as that is the author I want to add to this book.

Then I create a new BookAuthor linking entry that links the Book entity I loaded to the “Martin Fowler” Author entity.

When SaveChanges is called it finds that the AuthorsLink property of the Book instance it loaded has changed. Because this is a relationship it looks at the entities and finds a new BookAuthor entity, which it adds to the database, and an Author which is already in the database, so it doesn’t have to add that.

Summary – Connected state many-to-many update

To update a many-to-many relationship in the connected state:

  1. Load an entity at one end of the relationship (it doesn’t matter which end) with the collection of links in the many-to-many linking table. In my example I loaded the Book entity I wanted to change along with its AuthorsLink property, which holds the collection of the linking entities in the BookAuthor table.
  2. Alter the linking table collection, either by adding or removing entries from the collection. In my example I added a new BookAuthor entity to the AuthorsLink collection.
    1. Note: if you want to replace the whole collection then you can simply assign a new list to the linking collection (there is an example of this in my unit tests – see the test named TestChangeAuthorsNewListOk, and the sketch just below this list).
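
To show what that note means, here is a sketch of the replace-the-whole-collection case – this is my own illustration rather than the TestChangeAuthorsNewListOk code, and it assumes differentAuthor is an Author entity already loaded from the database in the same DbContext instance.

//Replace ALL the existing BookAuthor links on the loaded book with a new list.
//Because the book and its AuthorsLink collection were loaded as tracked entities,
//EF Core removes the old linking rows and adds the new one when SaveChanges is called
book.AuthorsLink = new List<BookAuthor>
{
    new BookAuthor { Book = book, Author = differentAuthor, Order = 0 }
};
context.SaveChanges();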

2. Disconnected state

The disconnected state happens when the initial read of the data and the update of the data are done separately, i.e. they use different instances of the application’s DbContext. This happens in a web application, which has two stages:

  1. The first stage is where the user is presented with the book, its current author(s) and a list of possible authors to choose from. Once they have chosen the author to add, they press a button, which sends their choice back to the web application.
  2. In the second stage the web application needs to re-read the data and execute the database update.

The unit test below does the same as the previous example, that is, it adds the existing author “Martin Fowler” to the book called “Quantum Networking”, which starts out with one author called “Future Person”. After the test has finished the book “Quantum Networking” has two authors, “Future Person” and “Martin Fowler”.

I simulate the disconnected state by having two instances of the application’s DbContext and passing the primary keys of the Book and the new Author to add via a small class called ChangeAuthorDto.
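
The ChangeAuthorDto class isn’t listed in this article, but based on how the tests use it, it is just a small data-holding class along these lines:

public class ChangeAuthorDto
{
    public int BookId { get; set; }      //primary key of the book to update
    public int NewAuthorId { get; set; } //primary key of the author to add
}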

Note: This unit test uses two, separate instances of the DbContext to simulate the disconnected state. With the help of Daria (see comment below) I have found a way to use Sqlite in-memory with multiple instances of the DbContext. Daria pointed out that the in-memory part is held in the Sqlite connection, so I create one options instance (which uses the Sqlite connection) and use that to create two separate DbContext instances – thanks Daria.
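
My SqliteInMemory.CreateOptions helper isn’t shown here, but the heart of the technique Daria described might look like this minimal sketch (it assumes the Microsoft.EntityFrameworkCore.Sqlite provider and the Microsoft.Data.Sqlite connection class):

//The in-memory database lives inside this connection, so it exists
//for as long as the connection stays open
var connection = new SqliteConnection("DataSource=:memory:");
connection.Open();

//Every DbContext created from this one options instance uses the same
//connection, and therefore sees the same in-memory database
var options = new DbContextOptionsBuilder<EfCoreContext>()
    .UseSqlite(connection)
    .Options;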

[Fact]
public void TestAddAuthorDisconnectedOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();
    ChangeAuthorDto dto;
    using (var context = new EfCoreContext(options))
    {
        context.Database.EnsureCreated();
        context.SeedDatabaseFourBooks();
        var book = context.Books
            .Include(p => p.AuthorsLink)
            .Single(p => p.Title == "Quantum Networking");

        var newAuthor = context.Authors
            .Single(p => p.Name == "Martin Fowler");
        dto = new ChangeAuthorDto
        {
            BookId = book.BookId,
            NewAuthorId = newAuthor.AuthorId
        };
    }

    using (var context = new EfCoreContext(options))
    {
        //ATTEMPT
        var book = context.Books
            .Include(p => p.AuthorsLink)
            .Single(p => p.BookId == dto.BookId);
        var newAuthor = context.Authors.Find(dto.NewAuthorId);

        book.AuthorsLink.Add(new BookAuthor
        {
            Book = book,
            Author = newAuthor,
            Order = (byte)book.AuthorsLink.Count
        });
        context.SaveChanges();

        //VERIFY
        var bookAgain = context.Books
            .Include(p => p.AuthorsLink).ThenInclude(p => p.Author)
            .Single(p => p.BookId == dto.BookId);
        bookAgain.AuthorsLink.ShouldNotBeNull();
        bookAgain.AuthorsLink.Count.ShouldEqual(2);
        var authorsInOrder = bookAgain.AuthorsLink.OrderBy(p => p.Order).ToList();
        authorsInOrder.First().Author.Name.ShouldEqual("Future Person");
        authorsInOrder.Last().Author.Name.ShouldEqual("Martin Fowler");
    }
}

The important part is in the second DbContext instance, where I load in the Book, with its BookAuthor linking collection accessed via the Book’s AuthorsLink property, and I read in the Author that the user wants to add.

Having loaded that book I can use the same code I had in the connected state, that is, I create a new BookAuthor linking entry that links the Book entity I loaded to the “Martin Fowler” Author entity, and add it to the existing AuthorsLink collection.

Summary – Disconnected state many-to-many update

To update a many-to-many relationship in the disconnected state:

  1. In the first stage you should return the primary key(s) of the two ends of the many-to-many relationship that you want to link/unlink. In my example I provide the BookId and the AuthorId I want to add to the book.
  2. In stage two you load one end of the relationship (it doesn’t matter which end) with the collection of links in the linking table. In my example I loaded the Book entity using the BookId key along with its AuthorsLink property, which holds the collection of the linking entities in the BookAuthor table.
  3. Now load the other end of the relationship. In my example the Author entity, using the AuthorId returned from the first stage.
  4. Alter the linking table collection, either by adding or removing entries from the collection. In my example I added a new BookAuthor entity to the AuthorsLink collection.
    1. Note: if you want to replace the whole collection then you can simply assign a new list of BookAuthor entries to the linking collection.

2a. Another, quicker way to update a many-to-many relationship in the disconnected state

In chapter 3 of my book, Entity Framework Core in Action, I point out that you can often change relationships using the foreign keys, and it can be more efficient. This is especially true of many-to-many relationships, because you can directly add/delete entries from the linking table, in this case BookAuthor.

Directly adding/deleting the linking table entries saves you having to load the Book entity, its AuthorsLink collection and the Author you want to add. Coupled with the fact that the first stage is going to supply you with the primary keys anyway, this saves quite a few database queries.

The unit test below shows this in action. The first stage of the unit test hands back the primary keys of the Book and the new Author to add via the ChangeAuthorDto class. The second stage, which uses a new instance of the DbContext, uses those primary keys to build a new BookAuthor linking entity to add to the database.

NOTE: The one downside of this approach for my Book/Author example is setting the Order property, as it should be set to give the correct order of the authors. For this example I know there is one author with an Order of 0, so I simply set it to 1. In a real application the first stage would have to define what order the authors should be shown in.

[Fact]
public void TestAddAuthorViaForeignKeyOk()
{
    //SETUP
    var options =
        this.NewMethodUniqueDatabaseSeeded4Books();

    ChangeAuthorDto dto;
    using (var context = new EfCoreContext(options))
    {
        var book = context.Books
            .Include(p => p.AuthorsLink)
            .Single(p => p.Title == "Quantum Networking");

        var newAuthor = context.Authors
            .Single(p => p.Name == "Martin Fowler");
        dto = new ChangeAuthorDto
        {
            BookId = book.BookId,
            NewAuthorId = newAuthor.AuthorId
        };
    }

    using (var context = new EfCoreContext(options))
    {
        //ATTEMPT
        context.Set<BookAuthor>().Add(new BookAuthor    
        {                                               
            BookId = dto.BookId,                        
            AuthorId = dto.NewAuthorId,                 
            Order = 1                                   
        });                                             
        context.SaveChanges();                          

        //VERIFY
        var bookAgain = context.Books
            .Include(p => p.AuthorsLink).ThenInclude(p => p.Author)
            .Single(p => p.BookId == dto.BookId);
        bookAgain.AuthorsLink.ShouldNotBeNull();
        bookAgain.AuthorsLink.Count.ShouldEqual(2);
        var authorsInOrder = bookAgain.AuthorsLink.OrderBy(p => p.Order).ToList();
        authorsInOrder.First().Author.Name.ShouldEqual("Future Person");
        authorsInOrder.Last().Author.Name.ShouldEqual("Martin Fowler");
    }
}

The important part is in the second DbContext instance, where I create a new BookAuthor entity using the BookId and the AuthorId of the new author I want to add to the book. If you compare this to the last disconnected example you will see this avoids three database loads:

  1. The Book we wanted to update
  2. The Book’s AuthorsLink collection of BookAuthor entities
  3. The Author that I wanted to add.

In this case I added an extra author. I could have removed an author by deleting the appropriate BookAuthor entry.
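
A minimal sketch of that delete case, reusing the dto values from the example above (and assuming the composite key order BookId, AuthorId configured earlier), might be:

//Remove the many-to-many link by deleting the BookAuthor row directly –
//there is no need to load the Book, its AuthorsLink collection, or the Author
var linkToRemove = context.Set<BookAuthor>()
    .Find(dto.BookId, dto.NewAuthorId); //key values in the order BookId, AuthorId
if (linkToRemove != null)
{
    context.Set<BookAuthor>().Remove(linkToRemove);
    context.SaveChanges();
}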

I do recommend this approach for many-to-many relationship updates, because it is more efficient and still quite clear in its intent, i.e. it’s not complex, so it’s obvious what is happening.

Summary – quicker way to update a many-to-many relationship in the disconnected state

For updates to many-to-many relationships in the disconnected state you can:

  1. In the first stage you should return the primary key(s) of the two ends of the many-to-many relationship that you want to link/unlink. In my example I provide the BookId and the NewAuthorId I wanted to add to the book.
  2. In the second stage you should add/delete entries from the linking table directly. For instance, to add a new many-to-many relationship you add a new entry in the linking table. In my example I added a new BookAuthor link with its BookId property set from the BookId returned by the first stage, and its AuthorId property set to the NewAuthorId value from the first stage.

Conclusion

Setting up and changing many-to-many links in EF Core is harder than in EF6.x. EF 6.x used the same approach as EF Core, but “hid” the linking table and the manipulation of that table from you. That certainly made it easier, but as the continued access to my original article on updating many-to-many in EF6.x shows, it can cause confusion sometimes.

At the moment EF Core makes you configure the linking table and update that linking table yourself. The EF Core roadmap says it will look at automating this in the future, but not for now. But I think there are some benefits in knowing what is going on in the database, and, as my last disconnected example shows, you can sometimes be more efficient than EF6.x is at updating many-to-many relationships.

I cover this and much more in my book Entity Framework Core in Action, and you can access the unit test code you see here via the Chapter03 branch of the Git repo that goes with the book – see the test class Ch03_ManyToManyUpdate.

Happy coding!