EF Core – Combining SQL and NoSQL databases for better performance

Last Updated: July 31, 2020 | Created: October 24, 2017

This is the second article in my series on Entity Framework Core (EF Core) performance tuning. In this article I take that performance even further than part 1 did, by changing the application to a two-database CQRS (Command Query Responsibility Segregation) architecture, where one database is SQL and the other is NoSQL (I explain what that architecture looks like later in the article).

  1. Entity Framework Core performance tuning – a worked example.
  2. EF Core – Combining SQL and NoSQL for better performance (this article).
  3. NEW IN 2019! Building a robust CQRS database with EF Core and Cosmos DB. Improved design and uses EF Core’s Cosmos DB database provider.

Note: This article is from chapter 14 (which will be available from the end of November 2017) in the book “Entity Framework Core in Action” that I am writing for Manning Publications. Chapter 14 contains more on the EF Core parts of the application mentioned in this article. The book is on early access, which means you can buy the book now and get the chapters as I write them – plus the finished book when it is complete.

NOTE: Manning Publications have offered a coupon to get 37% off my book, “Entity Framework Core in Action” – use the code fccefcoresmith

Executive summary for those in a hurry

I have an example ASP.NET Core application that uses EF Core to display a set of books – called my book selling site. In this article, I look at the performance of that book selling site after converting it to a CQRS (explained below) architecture. In this I write data to a primary SQL Server database, but also build a read-side NoSQL database which the application’s queries use to show the list of books.

I run various queries to test the differences in performance, but the main one is sorting the books by their average review votes and then showing the top 100. I do this for three sizes of database:

  • 100,000 books, which has ½ million book reviews.
  • 250,000 books, which has 1.4 million book reviews.
  • 500,000 books, which has 2.7 million book reviews.

The “sort by average review votes” query is particularly difficult for a SQL-only database, as it must dynamically calculate the average votes value of all the book reviews in the database – and as you can see, my test data has a lot of reviews!
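
To give a feel for why this is hard, here is a minimal sketch of the kind of LINQ query involved, written against illustrative Book and Review entity classes with a NumStars property (these names are my assumptions, not the exact classes from the book). The database has to calculate the average for every book before it can sort and take the top 100.

// context is the application's EF Core DbContext; Book.Reviews is a navigation
// property to the book's Review entities (illustrative names).
var top100 = context.Books
    .OrderByDescending(b => b.Reviews
        .Select(r => (double?)r.NumStars)   // nullable double copes with books that have no reviews
        .Average())
    .Take(100)
    .ToList();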

The figure below shows the difference in performance of the “best-SQL” (hand-tuned SQL run by Dapper) against what I call the SQL+NoSQL CQRS version of my application, for the three sizes of databases.

You can try the SQL+NoSQL CQRS at http://cqrsravendb.efcoreinaction.com/. It has 250,000 books in its database.

Note: The site cqrsravendb.efcoreinaction.com isn’t running on some massively powerful cloud system. The NoSQL database is running courtesy of www.RavenHQ.com (thanks to them for doing that) and is running on the lowest level of server, so you won’t get the same performance as shown in the chart above.

Setting the scene – my example application

Note: If you read the first article then you know this – skip to the next section called “Explaining the new CQRS architecture and the SQL+NoSQL databases”.

Right from the second chapter of my book I introduce an example book selling site, and as the book progresses I add more attributes to the application. Admittedly the site only has a few features, but I purposely included some sorting and filtering features that can be a challenge when the site has a lot of books to show. The diagram below shows you the features of the site.

What makes the book list a challenge?

I tried to create an example application which was at least credible, and included some complicated queries. The diagram below shows you the listing of a book, and some information on how easy or complex it is to get that data from the database.

I ran a series of tests checking on sorting, filtering and paging, but with this design they all had the same performance (unlike the previous article). But I did run tests with different numbers of books and related data in the database.

Note: I did all my testing on my development PC using the ASP.NET Core example book selling site running in debug mode. The timings are from Chrome, and are the total time it takes for the page to return (including downloading the content). For comparison, accessing the About page, which has no database accesses in it, typically takes 11 ms to load, so the database accesses are always going to be slower than that.

Explaining the new CQRS architecture and the SQL+NoSQL databases

This article is about me upgrading my example book selling site to what is known as a CQRS (Command Query Responsibility Segregation) architecture using a polyglot database structure. Those terms might not mean much if you haven’t heard them before, so let me give you a diagram and then explain them in a bit more detail.

The CQRS architecture acknowledges that the read-side of an application is different from the write-side. Reads are often complicated, drawing in data from multiple places, while the write side is often much simpler. Separating the code for each part can help you focus on the specific features of each part.

A “polyglot database structure” is one which uses a combination of different storage types, for instance, relational databases, NoSQL databases, flat files, and so on. The idea is that each database type has its strengths and weaknesses, and by using two or more different database types you can obtain a better overall system.

I think a polyglot CQRS architecture, with SQL for the write-side database and NoSQL for the read-side database, makes a lot of sense for business applications where there are more reads than writes, for instance, eCommerce applications, job sites, calendar/todo applications, and so on.

The SQL write-side database makes sense because business applications often work with relational data. Think about a real book selling site: it would have a LOT of complex, linked data to handle all the parts of the system, such as suppliers, inventory, pricing, orders, payment, delivery, tracking, audits and so on. I think a relational (e.g. SQL) database, with its high level of data integrity for relational data, is the right choice for many business problems.

But those relationships, and some other aspects of a relational database, can make a SQL database slow at retrieving data. My “sort by average votes” query is particularly hard on a SQL database because of how many book reviews it needs to look at to sort by average votes. But I have seen equally complex queries in other systems I have built that don’t scale well either.

One answer is to create a read-side database that contains the exact data that the user wants to see, or sort/filter on. This could be another SQL database (I did that in Part 3 of the previous article), but it makes a lot more sense for that database to be a NoSQL database. Why? Because:

  1. A NoSQL database is simpler to read, as each entry is normally self-contained, with no links outside it. This should make it quicker at reading than a SQL database.
  2. NoSQL databases are normally easier to scale out, that is, to have multiple databases containing the “same” data. This is because a write to a NoSQL database involves writing one complete entry. (This is a big topic, and I’m not going to discuss it here – see this Stack Overflow answer for part of the reason).
  3. I think a CQRS + NoSQL read-side database does a better job of holding “cached values” than a SQL/relational database can – I cover this later in the section about the complexity and robustness of the software.

As you will see in the rest of this article these features make my SQL+NoSQL CQRS architecture perform very well, and still have a high level of data integrity.

Why I chose RavenDB as my NoSQL database

I’m not an expert on NoSQL databases, but when my first article went out I got a tweet from Oren Eini (Twitter @ayende), who is the main force behind the RavenDB NoSQL database. He quite rightly felt that his RavenDB database would be faster than the SQL equivalent at sorting/searching this data.

His tweet gave me an opportunity to get some help with my lack of NoSQL knowledge, so I sent him the tweet below, and the choice of NoSQL database was made. And it’s turned out to be a very good choice indeed, as you will see.

RavenDB has a .NET library that supports LINQ queries, which meant I could use the LINQ queries I had developed for EF Core with almost no changes. RavenDB also allowed me to create indexes on certain parameters, and would then automatically generate indexes for the various combinations. Indexes are vital to fast performance and most NoSQL systems have them.
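
As an illustration, here is a minimal sketch of what a read-side query can look like with RavenDB’s LINQ support, assuming a document class called BookListNoSql that matches the JSON entry shown later in this article (the class and variable names are my assumptions, not the exact code from the book):

// store is a RavenDB IDocumentStore that has already been initialized.
using (var session = store.OpenSession())
{
    var top100 = session.Query<BookListNoSql>()
        .OrderByDescending(b => b.ReviewsAverageVotes)  // pre-calculated, indexed value
        .Take(100)
        .ToList();
}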

I want to thank Oren Eini (see https://ayende.com/blog/) for his help and support, and Jonathan Matheus of RavenHQ, who has kindly provided me with free hosting of the RavenDB database that I use in my live site – http://cqrsravendb.efcoreinaction.com/.

NOTE: See new article “Building a robust CQRS database with EF Core and Cosmos DB” for new version using EF Core’s Cosmos DB database provider.

How I added the CQRS architecture to my example book selling site

I only wanted to move the book list view part to the NoSQL read-side database, and not the order processing part, because it’s only the book list view that has a performance issue. It turns out that while adding CQRS does require a fair amount of work, it is very simple to apply the CQRS architecture to only part of my application. The figure below shows the design of my changed example book selling site application, with the book list implemented as a two-database CQRS part.

My example book selling site is an ASP.NET Core application with EF Core used for the database access code. It is a layered application, as shown in the diagram above. The book selling site isn’t that complicated, but I still needed to be careful as I moved the application over to using a CQRS read-side database. I therefore wanted a design that minimized the changes and isolated the new parts.

I came up with a design that keeps all the NoSQL/RavenDB parts separate. In my final design, the EF Core code doesn’t know, or care, what database is being used for the read-side part of the CQRS system. This made the update simpler, plus offered the possibility of changing the NoSQL database I used – I really like RavenDB, with its support of LINQ, but EF Core version 2.1.0 will support Azure’s NoSQL database, Cosmos DB, so that might be an interesting alternative.

By keeping as much of the new database code as possible in the NoSQLData layer, and using interfaces, I kept the impact of my changes to a minimum. Figure 14.3 shows how I hid the NoSQL code behind interfaces to keep it isolated. I used dependency injection to provide both the DataLayer and the ServiceLayer with methods that allowed access to the database.
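
To illustrate that isolation, the read-side updates sit behind a small interface that the rest of the application calls, so only the NoSQLData layer knows it is talking to RavenDB. The sketch below uses names I have assumed for illustration, not the exact code from the book:

// A sketch of the kind of interface that hides the NoSQL/RavenDB code.
// The DataLayer calls this after a successful SQL SaveChanges, and the
// ServiceLayer uses a similar read-side interface for its queries.
public interface INoSqlUpdater
{
    void CreateNewBook(BookListNoSql book);
    void UpdateBook(BookListNoSql book);
    void DeleteBook(string bookId);
}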

Here is a typical entry in the NoSQL database. As you can see I have pre-calculated the number of Reviews on this book, ReviewsCount, and the average of the six review votes, ReviewsAverageVotes. These two properties, plus the Id (not shown – RavenDB keeps that separate), the PublishedOn, and the ActualPrice all have indexes.

{
    "Title": "Adobe AIR in Action",
    "PublishedOn": "2008-07-01T00:00:00.0000000",
    "Price": 379,
    "ActualPrice": 189.5,
    "PromotionPromotionalText": "today only - 50% off! ",
    "AuthorsOrdered": "Joey Lott, Kathryn Rotondo, Sam Ahn, Ashley Atkins",
    "ReviewsCount": 6,
    "ReviewsAverageVotes": 3.5
}
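
For reference, here is a sketch of a C# class that an entry like this could map to. It is based purely on the JSON above, so the actual class in the application may differ:

using System;

public class BookListNoSql
{
    public string Id { get; set; }                    // RavenDB stores the Id separately from the document body
    public string Title { get; set; }
    public DateTime PublishedOn { get; set; }
    public decimal Price { get; set; }
    public decimal ActualPrice { get; set; }
    public string PromotionPromotionalText { get; set; }
    public string AuthorsOrdered { get; set; }
    public int ReviewsCount { get; set; }             // pre-calculated when the book is written
    public double? ReviewsAverageVotes { get; set; }  // pre-calculated when the book is written
}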

Is changing my application to a SQL+NoSQL CQRS design worth the effort?

Implementing this two-database CQRS architecture wasn’t simple and took over a week to develop, which is long for me. Admittedly the main part was learning how to handle the RavenDB database approach, but there were also some complex EF Core parts to write too. So, was it worth the effort? To answer that question, I’m going to look at three distinct aspects.

  1. Was the book list display, with its various sorts and filters, quicker?
  2. What was the impact on database write performance?
  3. What effect did it have on the software, in terms of effort, complexity, and robustness?

1. Was the book list display quicker?

Well, you have already seen the graph, but I’ll show it again. It beats the “best-SQL” version on sort by votes hands-down.

Now, this isn’t the whole story, as in the first article I covered four levels of performance tuning, and there was a faster version after the “best-SQL” version. In that version, referred to as Part 3 (SQL cached values), I created “cached values” very much like the values you see in the NoSQL database. In fact, it was the similarity of the Part 3 (SQL cached values) book query to getting JSON data that made me think of using a NoSQL database with CQRS.

If we compare the Part 3 (SQL cached-values) solution with the SQL+NoSQL CQRS version, then the difference is very much smaller, as shown by the graph below.

Note: You can access a live site with the Part 3 (SQL cached values) at http://efcoreinaction.com/. This site has 100,000 books in it.

This second graph shows that it’s the cached values, with their indexes, that make the difference. RavenDB still has the edge over the Part 3 (SQL cached-values) version, but there is another aspect that I didn’t measure here: scalability.

Scalability is about how many simultaneous users an application can handle while still providing a ‘good’ performance. Because NoSQL databases such as RavenDB deal with a single entry containing all the information, a read or write is simpler than in the SQL case. In general, this makes NoSQL databases easier to duplicate, that is, to have multiple databases all containing the same data. The effect of having multiple databases on scalability can be significant. Not only can you spread database access across multiple databases, but you can locate databases geographically around the world to provide shorter access times. This final point clinches the performance issue for me – the SQL+NoSQL CQRS solution wins on performance.

SQL+NoSQL CQRS read performance summary: Excellent

2. What was the impact on database write performance?

I said earlier on that a SQL+NoSQL CQRS architecture is going to be slower on writes, as it must write to two databases. I have measured this in my solution and there is an effect, but it’s pretty small.

Taking a common case, which is a user adding a review to a book, the results are as follows:

Solution type | Total time | Notes
Part 3 (SQL cached-values) | 13 ms | Simple addition of the Review entity to the database and a recalculation of the cached values.
SQL+NoSQL CQRS | 35 ms | The extra time is mainly in writing to the NoSQL database. I measured the RavenDB update taking 25 ms, which is quite long compared to a SQL write.

For a function that takes less than 50 ms to return to the user, it’s not worth performance tuning. But in applications with more complex updates this time might get too long. There are plenty of ways to handle this – one way is to pass the update to a background task to be executed, so that the application returns to the user immediately. The only downside of that approach is the user may be shown out-of-date data, which could be confusing – these are the trade-offs you must think through.
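
As a sketch of what that background-task idea could look like (this is not the approach used in my code, and all the names are assumptions), a simple in-process queue lets the web request return as soon as the SQL write completes, while the NoSQL update is applied afterwards. A production system would more likely use a reliable external queue:

using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public class NoSqlUpdateQueue
{
    private readonly Channel<BookListNoSql> _queue =
        Channel.CreateUnbounded<BookListNoSql>();

    // Called from the web request, right after the SQL write succeeds.
    public void Enqueue(BookListNoSql update) => _queue.Writer.TryWrite(update);

    // Runs as a background task (e.g. a hosted service), applying updates in order.
    public async Task ProcessAsync(INoSqlUpdater updater, CancellationToken ct)
    {
        await foreach (var update in _queue.Reader.ReadAllAsync(ct))
            updater.UpdateBook(update);
    }
}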

SQL+NoSQL CQRS write performance summary: Good, and could be made better if needed.

3. What effect did it have on the software, in terms of effort, complexity and robustness?

The table below compares the development effort, complexity and robustness of the two designs: the Part 3 (SQL cached-values) solution and the SQL+NoSQL CQRS solution.

Solution type | Effort | Complexity | Robustness
Part 3 (SQL cached-values) | ~3 days | Complex concurrency | Good
SQL+NoSQL CQRS | ~8 days | Complex structure | Very good (see below)

The effort was bigger for the SQL+NoSQL CQRS solution, but still a very sensible time. Remember too that this is the time to performance tune an existing, SQL-only, EF Core application – 8 days to improve performance isn’t that bad in my opinion.

On added complexity, the two solutions added about the same amount to the original application – the added complexity was just in different areas in each solution.

The big plus for me was the robustness of the application. Keeping cached values correct is really difficult, and I think the SQL+NoSQL CQRS solution did a much better job of that than the Part 3 (SQL cached-values) solution did. This is because the SQL+NoSQL CQRS architecture “designs out” any concurrency issues. The diagram compares the Part 3 (SQL cached-values) handling of concurrency issues that could make the cached values out of date with the same problem in the SQL+NoSQL CQRS solution.

Unlike the Part 3 (SQL cached values) solution, where I had to consider each cached value separately, and devise a different solution for each of them, the CQRS design handles all potential problems in one go – it effectively designs them out.

NOTE: This implementation does have a flaw if the NoSQL update fails or the updates come out of step. I fix this in my latest version in the article “Building a robust CQRS database with EF Core and Cosmos DB” by applying the NoSQL update inside a SQL transaction. That approach could be used with the RavenDB version too.

SQL+NoSQL CQRS software summary: it took a bit more development time, but it’s worth the effort.

Conclusion

I have altered my example book selling site from a SQL-only solution to one that uses a CQRS (Command Query Responsibility Segregation) architecture with a separate read-side and write-side database.  The write-side database I kept as a SQL/relational database because of its ability to track relationships with a high level of data integrity. On the read-side database I used a NoSQL database, RavenDB. I called this my SQL+NoSQL CQRS solution.

The SQL+NoSQL CQRS solution won on read performance by a large factor against the “best-SQL” database. Even against a SQL solution that pre-calculated cached values, the SQL+NoSQL CQRS solution was only slightly faster, but it has a much better chance to scale horizontally (i.e. multiple duplicate databases) than its SQL counterpart. This means the SQL+NoSQL CQRS solution won on all fronts for read-side queries.

The SQL+NoSQL CQRS solution did introduce a drop in performance for writes to the database, as two databases had to be updated. In my application that was small enough to ignore, but if it did become a problem there are ways around that.

The SQL+NoSQL CQRS solution took the most development time of any of the performance tuning I did – taking ~8 days to implement compared to ~3 days for the Part 3 (SQL cached values) solution. However, I think the performance gain was impressive and the overall solution is more robust.

I hope you have found this article and the previous article interesting. Please do add comments and suggestions. I’m always learning.

Further reading

Happy coding!

Comments

Roque L Lucero P
3 years ago

Hi Jon, I’m reading this one year after you posted it, but I have to say that I like this approach very much and am actually quite impressed by the way it improves the overall performance. One concern I have is how you would manage the write operations as an atomic one. Unit of work maybe? I mean, since you are working with two different databases I was wondering what is the best way to keep the consistency between them. Thanks in advance!

Jon P Smith
3 years ago

I had the same concern about the two databases getting out of step. Did you see the section near the end where I talk about “Effort, Complexity and Robustness” (see https://www.thereformedprogrammer.net/ef-core-combining-sql-and-nosql-databases-for-better-performance/#3-what-effect-did-it-have-on-software-both-in-effort-complexity-and-robustness )? I think this design is very good at keeping the two databases eventually in step (see the diagram as to why), but there is one possible problem area. That is if the updates to the NoSQL database come out of order due to a retry after a failed NoSQL update. That could mean an older value could overwrite a newer value (I am assuming there is a retry mechanism in the system).

In the note in the “Effort, Complexity and Robustness” section I suggest a timestamp be added to the update, and an update would only be applied if the timestamp in the NoSQL data was older than the timestamp in the update request. If the NoSQL database doesn’t have an atomic “update if” statement (many don’t) then another way would be to use a reliable queue (like Azure Service Bus) and not process the next update until the last one was successful.

I hope that answers your questions about consistency.

Roque L Lucero P
4 years ago
Reply to  Jon P Smith

Thanks a lot Jon, I would consider that in a personal project, just to improve some skills and learn more about NoSQL by practicing with a more complex architecture. Thanks again!

Jon P Smith
4 years ago

Hi Roque,

I have just released a new article that fixes the problem of the two databases getting out of step – https://www.thereformedprogrammer.net/building-a-robust-cqrs-database-with-ef-core-and-cosmos-db/ . I even give a shout out to you in the article (see paragraph 4).

I’m very happy with this solution, but it does make the write a bit slow. I can see some much more complex solutions, but this wins for simplicity.

Maciej Szymczyk
3 years ago

Great article! Since the book list’s NoSQL schema is not so complex, I wonder how effective a solution using SQL with an additional denormalized table, instead of a separate NoSQL database, would be.

Jon P Smith
3 years ago

Hi Maciej,

I did just that! If you look at the previous article in this series you will see it – it’s part 3, cached values with denormalised tables. I also show in this article a graph comparing the cached-values SQL database against the CQRS version (about 3/4 of the way through the article) – CQRS is faster, but not by much.

Very interesting study – can’t wait for EF Core version 3, where it properly supports CosmosDb NoSQL database. That will make this SQL/NoSQL CQRS easier.

Matt Welke
3 years ago

Just wanted to come back and let you know that reading this blog post when you published got me started down a path of exploring a whole bunch of different database types and putting more thought into how I arrange my web apps. Combining SQL and NoSQL like this makes a ton of sense for some types of apps. You’ve literally changed me as a developer for the better! I’ve got your EF Core book and it’s on my to-read list as soon as I’ve got the time. πŸ™‚

Jon P Smith
3 years ago
Reply to  Matt Welke

Hi Matt,

Thanks for your comment, I really appreciate that. It does take quite a bit of time to do these articles so knowing it helps others is really nice. Thanks.

You should also look out for version 3 of EF Core, which supports the CosmosDb NoSQL database. That should make building a combined SQL and NoSQL application easier, as you can do it all from EF Core. The design will stay the same, but you most likely won’t need another assembly for the NoSQL part because it can all be done in EF Core.

PS. I hope you enjoy the book.

Frank
3 years ago

Hi Jon, this is a very interesting article and a very fast performance scenario. I don’t know which SQL database you are using, but what if you make the SQL database handle the write query and send the data to the NoSQL database? In PostgreSQL you can achieve that with foreign data wrappers, and with this you keep the CQRS on the database side. It would be interesting to implement that and compare the results.
Greetings

Jon P Smith
3 years ago
Reply to  Frank

Hi Frank,

This was a performance improvement exercise for EF Core, so I used EF Core to do the update. The performance gain was impressive, as you have seen. There may be ways to do it at the database level, but I was focusing on EF Core as it’s often easier than working inside a database.

PS. Watch out for a new version of this article where I use Cosmos DB NoSQL database. EF Core will support Cosmos DB in version 3, which makes building this SQL/NoSQL application a lot easier.

Muhammad Akbar Yasin
3 years ago

Very well Sir

Luka Cetina
5 years ago

Hi, Where can I download the code you used for this article?
Thanks

Jon P Smith
5 years ago
Reply to  Luka Cetina

Hi Luka,

I have a GitHub repo at https://github.com/JonPSmith/EfCoreInAction. Each chapter has a branch, so you need to select branch Chapter14 for this application. As well as .NET Core 2.0 you also need a local copy of the RavenDB server to run the application locally, or you can look at the live version at http://cqrsravendb.efcoreinaction.com/.

Chapter 14 of my book, “Entity Framework Core in Action”, covers a lot about how I capture changes to a book entity in the CQRS application to update the NoSQL database. That chapter will be out at the end of November 2017.

Luka Cetina
5 years ago
Reply to  Jon P Smith

Thanks. I already bought your book.

Jon P Smith
5 years ago
Reply to  Luka Cetina

Hi Luka,

Great! You should be used to the basic design of the application from the book. The Chapter14 branch has the same example book selling site, but altered to CQRS.

Sorry that Chapter 14 won’t be out till the end of November – I haven’t quite finished it yet, and each chapter is put through two different reviews by the publishers, then I have to update the chapter to deal with any issues before it’s ready for MEAP.

PS. The MEAP coming out in a week’s time has two new chapters: chapter 13, which is the “worked example of performance tuning” and chapter 15 on unit testing. Enjoy.

Tomas Grosup
5 years ago

Hello Jon, thanks for the article, this series is really a good read and I enjoyed reading it.

I reviewed your concurrency extension at https://github.com/JonPSmith/EfCoreInAction/blob/Chapter13-Part3/DataLayer/EfCode/SaveChangesBookFixer.cs and I do not think this will work, as you are still calling context.SaveChanges() at the end and doing the same thing you tried to prevent in the first place, only in a possibly smaller time window. If you were to continue calling SaveChangesWithReviewCheck recursively instead, you might end up in a live-lock scenario on systems under constant load.

Traditionally, this can be solved by 2 orthogonal approaches:
1/ Having a normalised data model like you had before caching, and adding the cache layer as a materialised view (SQL,noSQL,in-memory cache,…)
2/ Instead of replacing the properties directly, make use of the transactional systems within databases and use
SET ReviewsCount = ReviewsCount + 1
That way, the database ensures this is consistent even under heavy load.
For the average, this would need one more step for keeping the average as ReviewsSum instead, and doing the ReviewsSum/ReviewsCount at query time.

The question is, is there any way to hint EF CORE to generate a “SET ReviewsCount = ReviewsCount + 1” statement whenever you add to the property in .NET ?

Jon P Smith
5 years ago
Reply to  Tomas Grosup

Hi Tomas,

Thank you – I missed that, and I have replaced the call to SaveChanges with a call to SaveChangesWithReviewCheck, both in the code and in the book too. The problem is it’s impossible to unit test a concurrent update while in the concurrency-fixing code. I believe the SQL cached-values with your fix will now work, but I think this problem makes the point that the CQRS solution is much more robust, because it designs out the problem.

I liked Mateusz Stasch’s article (see https://www.future-processing.pl/blog/cqrs-simple-architecture/) where he calls the read-side database “a legitimate cache”.

Even when I built the SQL cached-values solution I had the CQRS version in mind, but I didn’t know that the CQRS architecture with a polyglot database structure would work out so well. I certainly think this architecture is worthy of consideration, and with EF Core version 2.1.0 planning to support Azure’s Cosmos NoSQL database it will be even easier for developers to build.