GenericServices Masterclass: a deeper look at deleting entries

I have been building a new application using GenericServices and I thought it would be useful to others to reflect on the approaches I have used when working with more complex data. In this second master class article I am looking at deleting entries, i.e. data rows, in a SQL Server database using Entity Framework. The first master class article looked at creating and updating entries and can be found here.

I wrote this article because the more I used GenericServices' IDeleteService in real applications, the more I ended up using the more complex DeleteWithRelationships version of the command. This article explains why, and suggests some ways of tackling deletion, including not deleting at all!

What is GenericServices?

For those of you who are not familiar with GenericServices, it is an open-source .NET library designed to simplify the interface between an ASP.NET MVC application and a database handled by Microsoft’s Entity Framework (EF) data access technology. You can read more about it on the GenericServices GitHub site, where the Readme file has links to two example web sites. The GenericServices Wiki also has lots more information about the library.

While this article talks about GenericServices I think anyone who uses EF might find it useful as it deals with the basics behind deleting entities.

An introduction to relationships and deletion

Note: If you already understand the different types of referential relationships in a relational database and how they affect deletion then you can skip this part and go to the section titled Deleting with GenericServices.

On the face of it deleting rows of data, known in EF as entities, seems like a simple action. In practice, in relational databases like SQL Server, it can have far-reaching consequences, some of which might not be obvious at first glance when using EF. Let me give you a simple example to show some different aspects of deletion.

Below is a database diagram of the data used in the example application SampleMvcWebApp, which contains:

  1. A selection of Tags, e.g. Programming, My Opinion, Entertainment, etc. that can be applied to a Post.
  2. A list of Posts, where each is an article on some specific topic.
  3. A list of Bloggers, which are the authors of each of the Posts.


Now this example shows the two most used types of relationships we can have in a relational database. They are:

  • One-to-many (1-*): A Post has one Blogger, and a Blogger can have from zero to many Posts.
  • Many-to-many (*-*): A Post can have zero to many Tags, and Tags may be used in zero to many Posts.

The other type of basic relationship is a one-to-one relationship, where one entity/table is connected to another. You don’t see this type of relationship so much because, if both ends are required (see below), the two tables can be combined. However you do see one-to-one relationships where one end is optional.

The other aspect of the ‘One’ part of a relationship is whether it is required or optional (required: 1, optional: 0..1). An example of a required relationship is that a Post must have a Blogger, as that defines the author. An example of an optional relationship would be allowing the Blogger to add an optional ‘more about the author’ entry in another table. The author can choose to set up that data, or not bother.

How EF models these relationships

EF has multiple ways of setting up relationships and I would refer you to their documentation. Here is my super simple explanation to help you relate the section above to Entity Framework classes:

  • The ‘One’ end of a relationship has a property to hold the key (or multiple properties if a composite key).
    • If the key property(ies) is/are nullable, e.g. int? or string without [Required] attribute, then the relationship is optional.
    • If the key property(ies) is/are not nullable, e.g. int or string with [Required] attribute, then the relationship is required. See the BlogId property in the Post class of SampleMvcWebApp.
  • The ‘Many’ end of a relationship is represented by a collection, normally ICollection<T> where T is the class of the other end of the relationship. See the Posts property in the Blog class of SampleMvcWebApp.
  • Many-to-Many relationships have Collections at each end, see the Tags property in the Post class and the Posts property in the Tag Class of SampleMvcWebApp.
    EF is very clever on many-to-many relationships and automatically creates a new table that links the two classes. See my article Updating a many to many relationship in entity framework for a detailed look at how that works.
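Pulling those conventions together, the example’s relationships might be modelled along these lines. This is a simplified sketch, not the actual SampleMvcWebApp classes:

```csharp
using System.Collections.Generic;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }

    // the 'Many' end: a Blogger can have from zero to many Posts
    public ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }

    // non-nullable foreign key, so the Blogger is required
    public int BlogId { get; set; }
    public Blog Blogger { get; set; }

    // many-to-many: EF creates the hidden link table automatically
    public ICollection<Tag> Tags { get; set; }
}

public class Tag
{
    public int TagId { get; set; }
    public string Name { get; set; }

    public ICollection<Post> Posts { get; set; }
}
```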

How do these relationships affect deletion?

If you delete something in a relational database that has some sort of relationship then the delete is going to affect the other parts of that relationship. Sometimes the consequences are so small that they don’t cause a problem. However, especially in one-to-one or one-to-many relationships, a delete has consequences that you need to think about. Again, let me give you two examples you can actually try yourself on the SampleMvcWebApp web site.

  1. Delete a Many-to-Many relationship. If you go to the Tags Page of SampleMvcWebApp and delete a Tag, then when you look at the list of Posts you will see that the tag has been removed from all Posts that used it (now press Reset Blogs Data to get it back).
  2. Delete a One-to-Many relationship. However, if you go to the list of Bloggers on SampleMvcWebApp and delete one of the Bloggers, then when you look at the list of Posts you will see that all Posts by that author have gone (now press Reset Blogs Data to get them all back).

So, what happened in the second case? Well, the database could not keep the relationship intact without doing something, because the Post’s Blogger link wasn’t optional. There were two possibilities: either it could delete all the Posts for that Blogger, or it could refuse to carry out the delete.

By default EF sets up what are called ‘cascade deletes‘, which is how SampleMvcWebApp is configured, and it is the cascade delete that removed the related Posts for that Blogger. If we turned off cascade deletes then we would get a ‘DELETE statement conflicted with COLUMN REFERENCE’ error (SQL error 547) and the delete would fail.

The simplified rule is if entity A is involved in a required relationship with entity B then when A is deleted something has to happen: either B is deleted or the delete fails. Which happens depends on how you configure EF.
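If you would rather have the ‘delete fails’ behaviour, EF6’s fluent API can turn cascade deletes off per relationship. A sketch, with illustrative navigation property names:

```csharp
using System.Data.Entity;

public class BlogDbContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // A Post requires a Blogger. With cascade delete turned off,
        // deleting a Blogger who still has Posts fails with SQL error 547
        // instead of silently removing the Posts.
        modelBuilder.Entity<Post>()
            .HasRequired(p => p.Blogger)
            .WithMany(b => b.Posts)
            .HasForeignKey(p => p.BlogId)
            .WillCascadeOnDelete(false);
    }
}
```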


Deleting with GenericServices

GenericServices has a service called IDeleteService which has two delete methods:

  1. Delete<T>(key1, key2…) (sync and async) which deletes the row with the given key, or keys if it has a composite key, from the EF entity referred to by class T, e.g. Delete<Post>(1) would delete the Post with the key of 1.
  2. DeleteWithRelationships<T>(removeRelationshipsFunc, key1, key2…) (sync and async) which does the same, but calls the removeRelationshipsFunc as part of the delete.

I am not going to detail how to use them as the GenericServices Wiki has a good description. You can also find an example of the use of Delete<T> at line 121 in PostsController and an example of the use of DeleteWithRelationships<T> at line 218 in CustomerController.
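In outline a Delete<T> call looks like this. This is a sketch: in a real controller the service would normally arrive via dependency injection, and the exact members of the returned status may differ slightly from what is shown:

```csharp
// db is your EF DbContext that implements IGenericServicesDbContext
var service = new DeleteService(db);

// delete the Post with the primary key of 1
var status = service.Delete<Post>(1);

if (status.IsValid)
{
    // deleted - e.g. show a confirmation message to the user
}
else
{
    // the delete failed - show the errors held in the status
}
```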

The only other thing I would say is that deleting entries with composite keys is straightforward – just supply the keys in the order in which they occur in the database. Note: some programmers don’t like composite keys, but I do find them useful. There are places where composite keys are good at segregating data into groups: the primary key can be the group name, and the secondary key the name of the item itself.

When I first wrote GenericServices I just had a Delete<T> method. I very soon found that wasn’t enough, so I added DeleteWithRelationships<T>. Now I find I am using DeleteWithRelationships 80% of the time.

The rest of the article is about why I use DeleteWithRelationships so much, and a few pointers on alternatives to using Delete at all.

Why I use DeleteWithRelationships so much

I have used DeleteWithRelationships in three situations that I will describe:

  1. To provide better error messages when a delete would fail.
  2. To delete other associated entities that would not be caught by cascade deletes.
  3. To delete associated files etc. outside the relational database.

1. To provide better error messages when a delete would fail

I have many instances where I don’t want cascade deletes to happen, so if I tried a delete I would get SQL error 547. While GenericServices catches this and provides a fairly helpful error message, it isn’t that informative. I therefore often (actually, nearly always) use DeleteWithRelationships to provide a better error message. Let me give you an example.

In a web site I was building, designers could set up designs with text fields. Each field had a ColourSpec, which is a database entity. I allowed a designer to delete a ColourSpec as long as that colour wasn’t used in any of the text fields. If it was used then I output a list of the designs where it was used, so that the designer could decide whether to remove those references and try the delete again.
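The shape of that check is roughly as follows. The ColourSpec, TextField and Design classes are the hypothetical ones from my project, and the exact ISuccessOrErrors status-building calls are illustrative rather than copied from the library:

```csharp
using System.Linq;

public static class ColourSpecDeleteChecks
{
    // passed to DeleteWithRelationships as the removeRelationshipsFunc
    private static ISuccessOrErrors CheckColourNotUsed(
        IGenericServicesDbContext db, ColourSpec colour)
    {
        var status = new SuccessOrErrors();

        // find any designs with text fields that still use this colour
        var designNames = db.Set<TextField>()
            .Where(t => t.ColourSpecId == colour.ColourSpecId)
            .Select(t => t.Design.Name)
            .Distinct().ToList();

        if (designNames.Any())
            return status.AddSingleError(
                "This colour is used in the designs: {0}. " +
                "Remove those references and try the delete again.",
                string.Join(", ", designNames));

        return status.SetSuccessMessage("Ok to delete.");
    }
}
```

This way a refusal becomes a clear, actionable error message rather than a raw SQL error 547.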

2. To delete other associated entities that would not be caught by cascade deletes

This happens rarely, but sometimes I have a complex relationship that needs more work. I found one in the AdventureWorks database that I use in my example application. In this database a customer address consists of two parts: the CustomerAddress, which has a relationship to an Address. Now, if we want to delete one of the customer’s addresses, say because they have closed that office, we want to try to delete the associated Address part as well, which isn’t linked by cascade deletes.

By using DeleteWithRelationships I can pick up the associated Address relationship and delete that too. In fact I might need to do a bit more work to check whether I can delete it, as it might be referenced in an old order. By calling a method written specifically for this case I can insert special logic into the delete service.
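For the AdventureWorks case the extra delete might be sketched like this. The check on old orders is left as a comment, and the status-building calls are illustrative:

```csharp
using System.Linq;

public static class CustomerAddressDeleteHelpers
{
    private static ISuccessOrErrors DeleteAssociatedAddress(
        IGenericServicesDbContext db, CustomerAddress customerAddress)
    {
        var status = new SuccessOrErrors();

        // load the Address that cascade deletes would not reach
        var address = db.Set<Address>()
            .SingleOrDefault(a => a.AddressID == customerAddress.AddressID);

        // (a real version would first check the Address is not
        //  referenced elsewhere, e.g. in an old order)
        if (address != null)
            db.Set<Address>().Remove(address);

        return status.SetSuccessMessage("Ok to delete.");
    }
}
```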

NOTE: the delete done by GenericServices and any extra deletes done in the DeleteWithRelationships method are all executed in one commit. That means if either part of the delete fails then both are rolled back, which is exactly what you need.

3. To delete associated files etc. outside the relational database

In a recent web application the system included image files. I chose to store them in Azure blob storage, which I have to say works extremely well. The database stored all the information about each image, including a URL to access the image from blob storage, while the images themselves, which can be very large, were stored in a blob container.

Now, if I delete the database entity about an image I will end up with an orphan image in the blob storage. I could have a WebJob that runs in the background to delete orphan images, but that is complicated. What I did instead was add code to the DeleteWithRelationships method to delete the image as part of the delete process.

There is obviously a danger here. Because the database and the blob are not connected there is no combined ‘commit’, i.e. I might delete the images and the SQL database delete might then fail. Then my URL links wouldn’t point to a valid image. In this case I have made a pragmatic choice: I check that the delete should work by checking all the SQL relationships before I delete the blob images. I could still get a failure at the SQL end, which would make things go wrong, but other parts of the system are designed to be resilient to not getting an image, so I accept that small possibility in exchange for a simpler system. Note that if I fail to delete the images from the blob I don’t stop the SQL delete – I just end up with orphan images.
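That ordering can be sketched like this, using the classic Azure storage SDK. The ImageInfo class and the CheckSqlRelationships method are hypothetical stand-ins for my project’s code:

```csharp
using Microsoft.WindowsAzure.Storage.Blob;

public class ImageDeleteHelper
{
    private readonly CloudBlobContainer _container;

    public ImageDeleteHelper(CloudBlobContainer container)
    {
        _container = container;
    }

    private ISuccessOrErrors DeleteImageBlob(
        IGenericServicesDbContext db, ImageInfo image)
    {
        // 1) check the SQL relationships first, so we only touch the
        //    blob when the SQL delete is expected to succeed
        var status = CheckSqlRelationships(db, image);
        if (!status.IsValid) return status;

        // 2) delete the blob; if this fails we do NOT stop the SQL
        //    delete - we just accept an orphan image
        var blob = _container.GetBlockBlobReference(image.BlobName);
        blob.DeleteIfExists();

        // 3) GenericServices then carries out the SQL delete itself
        return status;
    }
}
```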

An alternative to Delete

In working web applications I find it is sometimes better not to delete important entities, especially if I want to keep a history of orders or usage. In these cases, which happen a lot in live systems, I am now adding an enum ‘Status’ to my key entities which has an ‘Archive’ setting. So, instead of deleting something I set the Status to Archive.

All I have to do to make this useful is filter which entities are shown based on this Status flag. That way entries with a Status setting of ‘Archive’ are not shown on the normal displays, but can be seen in audit trails or admin screens.

I needed a Status flag anyway, so there is little extra overhead to providing this on my key entities. Also, in EF 6 you can now set a property to have a index, so the query is fairly quick. You need to think about your application, but maybe this would work for you too.
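A minimal sketch of the pattern in EF6. The Order class and the EntryStatus names are illustrative, and the [Index] attribute needs EF 6.1 or later:

```csharp
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;

public enum EntryStatus { Normal, Archived }

public class Order
{
    public int OrderId { get; set; }

    // EF 6.1+ can create an index from an attribute,
    // which keeps the Status filter query quick
    [Index]
    public EntryStatus Status { get; set; }
}

public static class OrderDisplay
{
    // normal displays filter out the archived entries
    public static IQueryable<Order> VisibleOrders(OrderDbContext db)
    {
        return db.Orders.Where(o => o.Status != EntryStatus.Archived);
    }

    // 'deleting' becomes setting the flag, keeping the history
    public static void ArchiveOrder(OrderDbContext db, Order order)
    {
        order.Status = EntryStatus.Archived;
        db.SaveChanges();
    }
}
```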

Note: see the comment below by Anders Baumann, with a link to an article called ‘Don’t Delete – Just Don’t‘, for a well-formed argument for not using delete.


Deleting looks so simple, but my experience is that it is often more complicated than you think. Hopefully this article gives you both the background on why and the detail on how to use GenericServices to delete entities. I hope it’s helpful.

Happy coding!

GenericServices Masterclass: a deeper look at Create and Update

I have been building a new application using GenericServices and I thought it would be useful to others to reflect on the approaches I have used when working with more complex data. In this first master class article I am looking at Create and Update of data, especially via DTOs (Data Transfer Objects).

What is GenericServices?

For those of you who are not familiar with GenericServices, it is an open-source .NET library designed to simplify the interface between an ASP.NET MVC application and a database handled by Microsoft’s Entity Framework (EF) data access technology. You can read more about it on the GenericServices GitHub site, where the Readme file has links to two example web sites. The GenericServices Wiki also has lots more information about the library.

If you are not familiar or interested in GenericServices then save yourself time and skip this post.

Introduction to create and update with DTOs

GenericServices has five primary data access methods: List, Detail, Create, Update and Delete. Create and Update are the most complicated as they write data back to the database. If you are working directly with the database classes then it is fairly straightforward: in this case GenericServices is executing the EF commands for you through a standardised interface that adds some extra code to check and return any validation or selected SQL errors.

Create and update become more complex when you are using DTOs. However, if you have read any of my articles, such as Why DTOs, then you will know that 95% of my accesses to the database go through DTOs. Therefore handling DTOs is critical and GenericServices is built primarily for DTO mode database accesses.

The most common use of a DTO is in reading data, but for Create or Update the DTO needs to be transformed back to the database entity before that entity is written out to the database. There are four levels of complexity, starting with the simplest and working up to the most complex. I will list them here and then go into more detail on what to do later on.

  1. Simple one-to-one mapping: Each property is a simple data type, i.e. not another user class or relationship, and has an equivalent DTO property. In this case GenericServices ‘just works’.
  2. One-to-one mapping, but some properties are read-only. This case is like the first, but maybe you have properties you want to show the user without letting them update them. An obvious example might be the price of something they can buy. In this case you use the [DoNotCopyBackToDatabase] attribute – see later for an example.
  3. Contains properties that are developer-defined classes or collections. In this case you might have a relationship class, a collection or an EF complex type. AutoMapper will read these properties using its conventions, but by default will not write them back. You need to add extra code to handle this, which I will cover later.
  4. Contains properties that must be calculated on write. This happens when the value to be written depends on the user’s input, or we need to do some drastic reformat of the data before writing back. Again, I will show you some examples later.

Before I describe these four cases I would like to point out that AutoMapper and the DelegateDecompiler’s [Computed] attribute (see the GenericServices Wiki page on Calculated Properties) are all designed to work when READing from the database. When writing to the database GenericServices does use AutoMapper, but it has to supplement it with extra commands as, by default, AutoMapper is not meant to be used for write-to-database actions. That is why this article and the various examples exist – to show you how to handle this write operation successfully.

So, with that background let me go through all the four cases above.

1. Simple one-to-one mapping

To be honest, if you have a simple one-to-one mapping between the data class and the DTO then you most likely should be using GenericServices with the data class itself, as that is quicker and more transparent. I can’t think of any case where I had a ‘simple’ mapping and used a DTO. However it will work if you do.

2. One-to-one mapping, but some properties are read-only

[DoNotCopyBackToDatabase] is GenericServices’ version of MVC’s [Bind(Exclude = “SomeProperty”)]. Both allow a property to be marked as not being overwritten by user input. You can get more about this in the GenericServices Wiki DoNotCopyBackToDatabase attribute page.

The good example is given in the Wiki page, i.e. you want a user to see the price of their purchase, but they should not be able to change the price. I use this myself in one of my applications to hold a status flag that can be seen by the user, but can only be changed by the business logic. Therefore I mark the property in the DTO with [DoNotCopyBackToDatabase] to ensure it can not be changed either by accident or maliciously.

In SampleMvcWebApp I apply [DoNotCopyBackToDatabase] in the DetailPostDto.cs class to stop the LastUpdated property being copied back. However that is really there as documentation, as any value would be overwritten by the code inside SampleMvcWebApp’s .SaveChanges() method, which sets that property at save time.
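In DTO form the attribute simply sits on the property. Here is a cut-down sketch of the price example, with the GenericServices DTO base class and its required overrides elided:

```csharp
public class PurchaseDto
{
    public int PurchaseId { get; set; }
    public string ItemName { get; set; }

    // shown to the user, but never copied back on Create/Update,
    // so it cannot be changed by accident or maliciously
    [DoNotCopyBackToDatabase]
    public decimal Price { get; set; }
}
```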

3. Contains properties that are developer-defined classes or collections

This is where it starts to get more complicated. There are a few possible scenarios and I will deal with each in turn:

a. A relationship class

This is where you have a class that is a relationship. An example from SampleMvcWebAppComplex is the CustomerAddress class, which has a one-to-one relationship with the Address class, which holds the address details. In SampleMvcWebAppComplex I use a top level DTO called CrudCustomerAddressDto to hold all the customer address data. This type of situation doesn’t happen often, but when it does you need a way to handle it.

Firstly let me explain why the normal mappings wouldn’t work. If we used AutoMapper’s flattening convention it could read properties in the associated Address class by concatenating the names, e.g. AddressCity would read the City property in the Address part of CustomerAddress. The problem is that while it can read, it won’t write, as AutoMapper doesn’t ‘un-flatten’ things, i.e. setting the property AddressCity in the DTO will not be mapped back to Address.City in the CustomerAddress class.

There are a few ways round this. One is to hand-code the copy back, but we can also use AutoMapper’s nested mappings feature, which is what I actually did in this case. To use this I needed to create a sub-DTO called CrudAddressDto. I then needed to do two extra things in the CrudCustomerAddressDto class.

Firstly I have to tell GenericServices to make sure that AutoMapper knows how to map the CrudAddressDto, otherwise I will get an error from AutoMapper when I try to do the mapping. That is achieved by overriding the AssociatedDtoMapping property in CrudCustomerAddressDto to add the extra mapping, as shown below (note: there is an AssociatedDtoMappings version if you need multiple extra mappings):

protected override Type AssociatedDtoMapping
{
   get { return typeof(CrudAddressDto); }
}

Secondly, for the update to work I need to make sure I have loaded both the CustomerAddress and the associated Address class. This is done by overriding the FindItemTrackedForUpdate method (see below). If I don’t do this then updating the CustomerAddress will create a NEW Address entity rather than update the existing one. As that Address may be used elsewhere, such as in the delivery address, it’s important to get that right.

protected override CustomerAddress FindItemTrackedForUpdate(IGenericServicesDbContext context)
{
    return context.Set<CustomerAddress>()
        .Where(x => x.CustomerID == CustomerID && x.AddressID == AddressID)
        .Include(x => x.Address).SingleOrDefault();
}

b. A Complex Type

A ‘complex type’ is the term EF uses for a reference to a class which has no key – see the section entitled ‘Complex Types Convention’ in the Entity Framework documentation. This is often used to simplify or separate parts of a class.

If you want to do this then you can use the nested mapping approach shown above to do the to/from mapping of the complex types. However, as complex types are simply a way of arranging things, I tend to ‘flatten at design time’, i.e. I don’t use complex types. That means that AutoMapper will handle both the read and the write.

If that simplification does not work for you and you want to use complex types then you need to create a new DTO for each complex type and override AssociatedDtoMapping or AssociatedDtoMappings to allow GenericServices to set up the AutoMapper mappings. You don’t need to override FindItemTrackedForUpdate, as the class is in fact one single row in the database.

c. A Collection

This is the most complex case because a collection refers to multiple relationships (EF does not support lists or arrays of simple data like bytes or strings). As you can imagine, updating multiple relationships is complex and has to be hand coded by overriding CreateDataFromDto and UpdateDataFromDto, which I will explain in the next section.

I should say that updating multiple relationships isn’t a simple problem anyway. This normally falls under the next section, Contains properties that must be calculated on write, as it is complex enough that GenericServices cannot make a guess as to your intentions.

4. Contains properties that must be calculated on write

There are times when a property in a class is either set by the user via some other means, like a dropdownlist, or the data in the DTO needs more complicated reformatting/parsing to get the required value for the property. In these cases the developer needs to override the CreateDataFromDto and UpdateDataFromDto methods and write code to set the properties.

Let us start by looking at the anatomy of the two methods. The first thing to note is that they are very similar, but have slightly different signatures. The signatures differ because update is handed a database entity to update, while create returns a new instance of the database entity class (this was done because in databases using a Domain-Driven Design (DDD) approach the creation of an entity may be done via a factory). This means that the handling of the return status is slightly different between the two methods.

However, when overriding these methods the code inside has a common approach. Let us take UpdateDataFromDto as an example and look at the three stages the update and create go through.

protected override ISuccessOrErrors UpdateDataFromDto(
    IGenericServicesDbContext context, TDto source, TEntity destination)
{
   //1) Before copy/create: If you put your code here then it should
   //set properties in the source, which AutoMapper will then copy over

   //2) Call the base method which uses AutoMapper to create/copy the data
   var status = base.UpdateDataFromDto(context, source, destination);

   //3) After copy/create: If you put your code here then it should
   //set properties in the destination, ready to be written out

   return status;
}

So, your options are:

  1. Change properties in the DTO and then let AutoMapper copy them over for you.
  2. Wait until Automapper has run and change the properties in the database entity class.
  3. Don’t call the base method and handle the whole copy/conversion yourself (this is what you would need to do for DDD databases).

I have used options one and two, depending on the need. Option one is good if the property needs to be in the DTO, while option two is useful when you want to exclude a property from the DTO and only set it via your code. In both cases I try to do any validation of the values early, so that if it fails the method returns immediately and saves a redundant call to AutoMapper.

SampleMvcWebApp has a classic example of option one in DetailPostDto. DetailPostDto is complex because it allows the user to select the post’s author from a dropdownlist and the post’s tags from a multi-select list. If you look at the code you will see the overrides of CreateDataFromDto and UpdateDataFromDto around line 140. To save you swapping pages, here is a copy of the overridden methods from DetailPostDto:

protected override ISuccessOrErrors<Post> CreateDataFromDto(IGenericServicesDbContext context,
    DetailPostDto source)
{
    var status = SetupRestOfDto(context);
    return status.IsValid
        ? base.CreateDataFromDto(context, this)
        : SuccessOrErrors<Post>.ConvertNonResultStatus(status);
}

protected override ISuccessOrErrors UpdateDataFromDto(IGenericServicesDbContext context,
    DetailPostDto source, Post destination)
{
    var status = SetupRestOfDto(context, destination);
    return status.IsValid
        ? base.UpdateDataFromDto(context, this, destination)
        : status;
}

As you can see, both methods call a private method called SetupRestOfDto, which handles the setting up of the BloggerId and Tags properties in the DTO. The SetupRestOfDto method needs to know whether it is a create or an update, which it knows because on create the Post is not given, i.e. it is null. On return, if the status from SetupRestOfDto is OK the code then calls the base method to do the create/update. This is a typical way of handling user selections from dropdown lists.
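For illustration, a SetupRestOfDto-style method for the dropdownlist selection might look roughly like this. The property names (Bloggers, BloggerId) and the exact ISuccessOrErrors calls are assumptions based on the pattern described, not a copy of the real DetailPostDto code:

```csharp
private ISuccessOrErrors SetupRestOfDto(
    IGenericServicesDbContext context, Post post = null)
{
    // (post == null signals a create rather than an update)
    var status = new SuccessOrErrors();

    // the dropdownlist posts back the selected Blogger's primary key
    int selectedBloggerId;
    if (!int.TryParse(Bloggers.SelectedValue, out selectedBloggerId))
        return status.AddSingleError("You must select an author.");

    // set the scalar property that AutoMapper will then copy over
    BloggerId = selectedBloggerId;

    return status.SetSuccessMessage("Ok.");
}
```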

I used option two, setting the value in the database entity class, in a recent project where I wanted to make the user input much simpler. In this case the user needed to input a CMYK colour and an RGB colour. The colours were stored as a series of numbers, but it was simpler for the user to enter the CMYK and the RGB as two strings of digits, with different options on hex/decimal etc. I therefore overrode the create/update methods and parsed the two strings into the right format directly into the database entity class.

I am sure your project has other requirements that GenericServices does not handle as it stands. Overriding CreateDataFromDto and UpdateDataFromDto is the go-to place to add your code and still benefit from all of GenericServices’ features.

Note: writing this section made me realise I could have simplified the method signatures of CreateDataFromDto and UpdateDataFromDto, as I don’t (now) need to pass in the DTO. However that would be a breaking change to the interface, so I’ll leave it for the next big change of GenericServices.


In this article I have covered in detail what happens in a Create or Update inside GenericServices. It is complex, especially around handling relationships. Hopefully this article will help anyone who needs to implement any of the more complex use of Create and Update.

Do ask any questions or make comments here, or raise an issue on the GenericServices GitHub site if you think there is a bug. Happy to answer any questions as it helps increase the documentation for the project.

Happy coding.


Using .NET Generics with a type derived at runtime

I have used .NET Generic types a lot and found them really useful. They form a core part of the solution in my open-source GenericServices library and my private GenericActions library. This article explores why and how I use Generics and some of the techniques I use to hide the more complex type definitions.

Even if you are not interested in my particular use of Generics this article has some useful information on:

  1. How to create an instance of a generic type where the type is defined at runtime.
  2. The two approaches to calling methods or read/write properties of the created generic instance.
  3. The performance issues around the two approaches to calling methods or read/write properties.

Only interested in how to use Generics with a runtime-derived type? Just skip to here in this article.

What are Generic Types

Generics are everywhere – if you use List<T> etc. you are using Generics. If you are not familiar with Generics then look at this introduction. The primary point of Generics is that they allow you to write one code implementation that can work on a range of types. In particular they produce efficient code because they are ‘type safe’, i.e. the type is known, so the code can access what it needs.

Without Generics we would either:

  • Have to write the same code for each type, which is bad practice.
  • Write one method but use Reflection to access the items we want, which is not type-safe.

The downside of Generics is that you (normally) have to know the type at compile time. However I will show you later how to overcome this.
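As a preview of the technique covered later: the standard .NET reflection calls for building a generic type from a runtime-derived type are Type.MakeGenericType and Activator.CreateInstance:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class RuntimeGenericHelper
{
    // build a List<itemType> where itemType is only known at runtime
    public static IList CreateListOf(Type itemType)
    {
        // close the open generic List<> over the runtime type
        var closedType = typeof(List<>).MakeGenericType(itemType);
        return (IList)Activator.CreateInstance(closedType);
    }
}

// e.g. RuntimeGenericHelper.CreateListOf(typeof(int)) returns an empty List<int>
```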

Where I have found Generic types really useful

I developed a web application called Spatial Modeller that does mathematical predictive modelling for healthcare. The modelling is complex and I used a Domain-Driven Design (DDD) approach, which says I should keep the database/business classes focused just on the modelling problem. That worked really well, but it meant that the database/business classes weren’t right for displaying to the user.

I therefore had a Service Layer which had classes more attuned to the display and visualisation of the data. These are typically called Data Transfer Objects (DTOs), or in ASP.NET MVC they are also called ViewModels. The Service Layer code transformed the data passing between the Data/Business Layers and the Presentation/WebApi Layer.

The problem was I ended up writing very similar code, but with different data types, for the transformation of the database/business classes to DTOs. That is bad software practice, time consuming and boring!

After that project I was determined to create a solution. It took a bit of experimenting, but Generics was the answer. However the final solution wasn’t quite as straightforward as you might think.

The problem of complex Generic type definitions

Let me give you a real example of where my presentation layer wants to run a piece of business logic. In Spatial Modeller, modelling a scenario goes through the following steps:

  1. The user picks the potential hospital locations from dropdown lists and fills in various selections.
  2. These are handed to the business layer modeller as primary keys for the hospital locations and various enums/parameters.
  3. The modeller does its work, writing some data to the database.
  4. The modeller returns the results as primary keys.
  5. These need to be looked up to show the results to the user.

So, in that process we have five classes:

  1. The presentation input DTO, called DtoIn.
  2. The business modeller data input, called BizIn.
  3. The class/method to call in Business Layer, called IBizAction.
  4. The output of the business modeller, called BizOut.
  5. The presentation output DTO, called DtoOut.

So, if we define a Generic class to handle this, the code to create and run it would look like this:

var service = new ActionService<DtoIn, BizIn, IBizAction, BizOut, DtoOut>
     (db, new BizAction(db));
var dataIn = new DtoIn(/* some data here */);
var result = service.RunAction(dataIn);

That is a) quite hard to write, b) not easy to use with Dependency Injection and c) downright ugly! In fact in both of my libraries it’s even more complex than that, with different types of input. Even worse, the code will be the same but have different transform parts. In GenericActions I calculated there were sixteen versions, all of which would need separate implementations.

GenericServices was much easier, but still produced ‘ugly’ generic type definitions, which needed special handling at dependency injection time.

Getting rid of the ‘Ugly’ type definitions

OK, so Generics got me so far but I needed to do something else to combine and simplify the type definition of the Generics. I did two things:

  1. For both libraries I hide the complex type definition via an outer, non-generic helper class, where the called method decodes the types and creates the correct Generic class instance with the right types.
  2. In the case of GenericActions, which has so many versions, I did a further decode of the in/out types inside the Service Layer code, which reduced the sixteen possible versions down to six. I could have gone further, but this was the right level to keep performance up.

Before I describe the solution I will show you the improved code for the example in the last section (see below). I think you will agree that is much nicer, and the creation of the service is now very Dependency Injection friendly.

var service = new ActionService<IBizAction>(db, new BizAction(db));
var dataIn = new DtoIn(/* some data here */);
var result = service.RunAction<DtoOut>(dataIn);

So, let us go through the steps to achieve this.

Part 1: Decode the underlying types

I’m going to use an example from GenericServices, as it is open-source and I can therefore link to the actual code. So the example for GenericServices is:

var service = new UpdateService(db);
var dto = new DetailPostDto(/* some data here */);
var response = service.Update(dto);

The steps are:

  1. service.Update<T>(dto) is a top-level method (see first class) that can take either a database class, like Post, or a dto.
  2. If it is a dto then it is derived from abstract EfGenericDto<TEntity, TDto>. This forces the dto definition to include the database class. See the DetailPostDto.cs as an example.
  3. The Update method calls DecodeToService<UpdateService>.CreateCorrectService<T> (see this class, first method). It gets a bit complicated in this class because of sync/async versions of DTO, but the bottom line is it:
    1. Finds whether it is an EfGenericDto or not. If not, then it assumes it’s direct access to a database class.
    2. If it inherits from EfGenericDto it finds the database class and the dto class.
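The decode step above can be sketched as follows. This is a minimal, hypothetical version (the real GenericServices code also handles the sync/async variants); the core idea is walking the inheritance chain looking for the EfGenericDto<,> base:

```csharp
using System;

// Hypothetical stand-ins mirroring the article's example classes
public abstract class EfGenericDto<TEntity, TDto>
    where TEntity : class
    where TDto : EfGenericDto<TEntity, TDto>
{ }

public class Post { }
public class DetailPostDto : EfGenericDto<Post, DetailPostDto> { }

public static class TypeDecoder
{
    // Returns [TEntity, TDto] if 'candidate' derives from EfGenericDto<,>,
    // or null, meaning direct access to a database class
    public static Type[] TryDecodeDtoTypes(Type candidate)
    {
        for (var type = candidate; type != null; type = type.BaseType)
        {
            if (type.IsGenericType &&
                type.GetGenericTypeDefinition() == typeof(EfGenericDto<,>))
                return type.GetGenericArguments();
        }
        return null;
    }
}
```

So `TypeDecoder.TryDecodeDtoTypes(typeof(DetailPostDto))` yields the database class and the dto class, while passing a plain database class yields null.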


Part 2: Create the Generic Class from the runtime types

Ok, the last process found whether it was a database class, or a dto class. There are different classes to handle these two cases. The direct update has one Generic Type parameter, being the database class. The dto version has two Generic Type parameters: the database class and the dto class. However in the example below I look at a very simple case to make it easier to understand. GenericServices is a bit more complicated, but follows the same approach.

The code to create an instance of a Generic class is pretty straightforward. For example, if I wanted to create a List<string> at runtime I would:

  1. Produce an array of the type(s) needed to form the generic type, in this example ‘string’
  2. Get the generic type, in this example ‘List’
  3. Combine them using the ‘.MakeGenericType’ method
  4. Create an instance of that type using ‘Activator.CreateInstance’

The code below shows an example of creating ‘List<string>’ at runtime.

var dataType = new Type [] { typeof(string)};
var genericBase = typeof(List<>);
var combinedType = genericBase.MakeGenericType(dataType);
var listStringInstance = Activator.CreateInstance(combinedType);
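The same technique works with any number of type parameters. Here is a small sketch of the two-parameter case, which mirrors the database-class + dto situation, with Dictionary<,> standing in for the real generic service class:

```csharp
using System;
using System.Collections.Generic;

public static class TwoParamExample
{
    // Build a closed generic type from two runtime types, mirroring
    // the database-class + dto case in GenericServices
    public static object CreateDictionaryOf(Type keyType, Type valueType)
    {
        var dataTypes = new Type[] { keyType, valueType };
        var genericBase = typeof(Dictionary<,>);
        var combinedType = genericBase.MakeGenericType(dataTypes);
        return Activator.CreateInstance(combinedType);
    }
}
```

Calling `TwoParamExample.CreateDictionaryOf(typeof(string), typeof(int))` returns an instance of `Dictionary<string, int>`.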

Part 3: Calling methods from the created instance

You should be aware that the ‘Activator.CreateInstance’ method returns an object, so you can’t just call ‘listStringInstance.Add("hello world")’ as the compiler will give an error something like “‘object’ does not contain a definition for ‘Add’“. You have two choices:

1. Use Dynamic Type

You can place the output of the ‘Activator.CreateInstance’ into a dynamic type (see the start of line 4 below). This turns off compile-time type checking, which allows you to call any method, or access any property, with the type checking done at runtime. So in our List<string> case it would look like this:

var dataType = new Type [] { typeof(string)};
var genericBase = typeof(List<>);
var combinedType = genericBase.MakeGenericType(dataType);
dynamic listStringInstance = Activator.CreateInstance(combinedType);
listStringInstance.Add("Hello World");

Dynamic is easy to use, and allows much more freedom. However the dynamic runtime library takes a lot of time on the first call of the method. See the Performance section below for a more detailed analysis.

2. Use Reflection

Reflection allows you to find methods and properties by name. You can then call the method or access the property via different Reflection methods. In our example we find the simple method ‘Add’ on the instance’s type (GetMethod returns null if no method of that name exists) and then ‘Invoke’ that method, i.e.

var dataType = new Type [] { typeof(string)};
var genericBase = typeof(List<>);
var combinedType = genericBase.MakeGenericType(dataType);
var listStringInstance = Activator.CreateInstance(combinedType);
var addMethod = listStringInstance.GetType().GetMethod("Add");
addMethod.Invoke(listStringInstance, new object[]{"Hello World"});

The reflection approach does have some complications, such as a method call or property access returning an object, which can be a problem if you need to cast it to a specific type. Also, if the method itself is generic you need to build a callable method using ‘MakeGenericMethod’. All in all, reflection is harder to handle than dynamic.
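For completeness, here is a minimal sketch of closing a generic method over a runtime type with ‘MakeGenericMethod’ (the method and names below are invented for illustration):

```csharp
using System;

public static class ReflectionHelpers
{
    // An open generic method we want to call with a type known only at runtime
    public static T[] WrapInArray<T>(T item) { return new[] { item }; }

    public static object CallWrapInArray(Type runtimeType, object value)
    {
        // Find the open generic method, close it over the runtime type,
        // then invoke it (null target because the method is static)
        var openMethod = typeof(ReflectionHelpers).GetMethod("WrapInArray");
        var closedMethod = openMethod.MakeGenericMethod(runtimeType);
        return closedMethod.Invoke(null, new object[] { value });
    }
}
```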

The reflection approach has a tiny first-use cost compared to the dynamic approach, but for lots of subsequent calls to the method dynamic is quicker. See the Performance section below for a more detailed analysis.

Performance issues – simple types

The way you access the method or property does have a big effect on the performance of the command, as we are doing extra work at runtime. I care about performance so I have studied this in detail. There are two costs: a one-off first-use compute time and a per-instance compute time.

Let me start with a simple table giving the times for my simple List<string> example. What I did was run the same test on three types:

  1. Normal code, i.e. var list = new List<string>(); list.Add("Hello World");
  2. Using reflection, i.e. listStringInstance.GetType().GetMethod("Add");
  3. Using dynamic, i.e. listStringDynamicInstance.Add("Hello World");
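The three approaches can be sketched as a rough timing harness. My figures came from a profiler, so a simple Stopwatch harness like this will give different absolute numbers, but it shows the shape of the test:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class TimingHarness
{
    // Time a create-and-call using dynamic
    public static double TimeDynamicAdd()
    {
        var sw = Stopwatch.StartNew();
        dynamic list = Activator.CreateInstance(
            typeof(List<>).MakeGenericType(typeof(string)));
        list.Add("Hello World");
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }

    // Time the same create-and-call using reflection
    public static double TimeReflectionAdd()
    {
        var sw = Stopwatch.StartNew();
        var list = Activator.CreateInstance(
            typeof(List<>).MakeGenericType(typeof(string)));
        var addMethod = list.GetType().GetMethod("Add");
        addMethod.Invoke(list, new object[] { "Hello World" });
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}
```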

To take out any first-use costs I ran it on List<string> three times, followed by List<DateTime> twice, as the timings did seem to change between runs. The List<DateTime> is there to check whether building a different type has the same first-use cost. Here are the results, which were measured using the ANTS Profiler.

BE WARNED: This table is a bit misleading in implying reflection is always faster than dynamic – it isn’t. See the ‘Performance issues – GenericAction library’ section, where a real-life example shows that dynamic wins in the end.

Type            Order                          Compiled     Reflection   dynamic
List<string>    1. First                       0.8 ms       37.08 ms     5.0 ms (was 600 ms)
List<string>    2. Second                      < 0.001 ms   17.05 ms     1.0 ms
List<string>    3. Third                       < 0.001 ms   0.01 ms      0.6 ms
List<DateTime>  4. First (after List<string>)  < 0.001 ms   0.03 ms      2.7 ms
List<DateTime>  5. Subsequent                  < 0.001 ms   0.03 ms      0.5 ms


When I first started testing I was getting 600 ms for the first-use cost of the first dynamic method call, of which about 400 ms came from the Dynamic Language Runtime. However, once I installed .NET 4.5.2 on my Windows system this dropped to 5 ms. I cannot confirm that 4.5.2 was the cause, as the release notes do not mention a change there; maybe it was just reinstalling the .NET library. Anyway, be warned that things might have changed.

The way that .NET handles dynamic types is complex and I don’t claim to understand it properly. I would refer you to Simon Cooper’s series on the DLR (Dynamic Language Runtime), starting with ‘Inside the DLR – Callsites‘ and then ‘Inside DLR – Invoking methods’, which describes the caching of methods.

Performance issues – GenericAction library

In GenericServices and GenericActions I have used the dynamic approach. Seeing the figures above made me wonder if that was the best way. I therefore forked a version of GenericActions and changed it to use a Reflection-based approach with no use of dynamic anywhere. Here are my findings:

  1. When comparing the dynamic-based and the Reflection-based versions of GenericActions, the first-use costs of dynamic are much less than the table above suggests. The figures are:
    1. Reflection-based: 300 ms first-use cost, due to AutoMapper first-use costs.
    2. dynamic-based: 700 ms first-use cost which, if you exclude the AutoMapper part, means that the dynamic part is only 400 ms. I haven’t re-measured after installing .NET 4.5.2, but it now seems quicker.
  2. If a created instance is used many times then dynamic wins in the end. As an example, over 1000 accesses dynamic was between 120% and 200% faster than reflection (in my tests anyway).
  3. Dynamic is much easier to use. I found it really hard to do everything with reflection, mainly because you cannot cast an object to a type only known at runtime. My altered GenericActions version worked, but some of the error checking on incorrect types no longer worked. I might have fixed that, but it wasn’t worth it.

So, let’s look at the different parts of the performance problem.

1. One-off first-use compute time costs

With dynamic there is a cost on the first decode and call of the method in my libraries which use dynamic: about 0.2 seconds or more on my system. As I explained earlier, this is because the first time you call a method on a dynamic type its call site needs to be created, which takes some time. However the method is then cached, so later calls are very fast.

Any first-use performance cost is a pity, but I have other first-use time costs, like MVC, Entity Framework, AutoMapper etc., so I accepted it, mainly because I can mitigate it (see next section) and overall the dynamic approach is faster.

Mitigating first-use costs

One thing that really helps is keeping the application alive so that the first-use cost only happens when you start/restart your application. I mainly develop web applications and one great feature introduced in .NET 4.5.1 was ‘Suspend‘ on an ASP.NET web site.

On shared hosting (maybe other hosting as well, not sure) when a web site has no traffic, i.e. all user sessions have timed out, then it stops. This means for low-traffic sites the web application would stop and the next user coming to it would have to wait for it to start. The ‘Suspend’ feature keeps the application alive so that this does not happen.

Suspend support does depend on your hosting provider setting up the feature in IIS, so please read this post on it. My non-Azure shared hosting provider, WebWiz, supports this. Azure web sites don’t support ‘Suspend’, but fix this with an ‘Always On’ feature which polls the site regularly enough that it stays up.

2. Per instance compute time

The whole decode and create takes about 0.0024 ms on my system, excluding first-use compute time. The actual creation of an instance of the generic type isn’t that costly (I haven’t measured it separately); it is the decoding of the types etc. that takes the time.

In the case of GenericServices the database call is so large, > 1 millisecond in all measured cases, that it’s not worth trying to improve the decode/create time.

However GenericActions, which may call a very simple business method that returns very quickly, is worth looking at. I therefore implemented a caching system using a ConcurrentDictionary in my GenericActions library. It took quite a bit of tuning, but the effect was worthwhile: it brought the worst case, which has multiple decode calls, down from 0.037 ms to 0.018 ms. These are small figures, but I think it was worth the effort to give calls to the business logic a low performance cost.
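A minimal sketch of that caching idea, using ConcurrentDictionary.GetOrAdd, with List<> standing in for the real (proprietary) generic service types:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class GenericTypeCache
{
    // Cache the decode/MakeGenericType work per runtime type, so only
    // the first request for each type pays the full cost
    private static readonly ConcurrentDictionary<Type, Type> Cache =
        new ConcurrentDictionary<Type, Type>();

    public static object CreateListOf(Type itemType)
    {
        var closedType = Cache.GetOrAdd(itemType,
            t => typeof(List<>).MakeGenericType(t));
        return Activator.CreateInstance(closedType);
    }
}
```

GetOrAdd is thread-safe, which matters because web requests will hit the cache concurrently.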


I have described not only a way of creating a Generic Type from a runtime type, but a whole system for using complex Generic Types while still having type definitions that are simple to use and Dependency Injection friendly. I have also looked at how you call the methods or access the properties of the runtime-created type, focusing on ease of use and detailed performance figures. I hope that is useful to people.

Happy coding!

Architecture of Business Layer working with Entity Framework

Jonathan Crosby asked a question on one of my GenericServices GitHub projects about how to handle a Business Layer. I thought it was a good question, so I have written this article to detail my approach to building business layers that use Entity Framework.

UPDATE 2016: See new, improved approach here

I have revisited this topic and refined my approach to Business Logic using EF. I suggest you read the new article called Architecture of Business Layer working with Entity Framework (Core and v6) – revisited.

UPDATE 2017: Book Entity Framework Core in Action

I have been commissioned by Manning Publications to write the book Entity Framework Core in Action, in which chapter 4 is all about Business Logic in an Entity Framework Core environment (but the ideas are applicable to EF 6 too).

This original article is old, but I have kept it because of all the comments.

What is the Business Layer?

[Diagram: GenericServices four-layer design]

The diagram on the left shows the type of layered application I normally build. This article is talking about the Business Layer (orange in the diagram), which is called the “Domain Model” in Martin Fowler’s Service Layer diagram.

The Business Layer is the place where all the business/domain logic, i.e. rules that are particular to the problem that the application has been built to handle, lives. This might be salary calculations, data analysis modelling, or workflow such as passing an order through different stages.

You can get a more in-depth coverage of the overall architecture in this Simple-Talk article.

Aside: While having all the business rules in the Business Layer is something we should always strive for, I find in practice that some issues go outside the Business Layer for good reasons. For instance, validation of data often flows up to the Presentation Layer so that the user gets early feedback. It may also flow down to the database, which checks data written to the database, to ensure database integrity. Other business logic appears in the Presentation Layer, like not allowing users to buy unless they provide a credit card, but the status should be controlled by the Business Layer. Part of the skill is to decide whether what you are doing is justifiable or will come back and bite you later! I often get that wrong, but hopefully I learn from my mistakes.

My philosophy on the Business Layer

I have written numerous applications for predictive modelling of various healthcare issues, all of which have some quite complex business logic. Over the years I have tried different ways of organising these applications and I have come up with one key philosophy: that “The Business Layer is King“, i.e. its design and needs drive everything else.

I am a big fan of Domain-Driven Design (DDD) which has the same philosophy and provides some approaches to help keep the business logic at the forefront of any design. Here are some of the approaches from DDD that drive my designs.

1. The Business Layer defines the data structures

The problem we are trying to solve, often called the “Domain Model”, is the heart of the application. Domain models can be complex, so the core data structures should be defined by, and solely focused on, the business problem. How the data is stored and how it is viewed are secondary issues.

2. The Business Layer should find persistence of data simple

While DDD accepts that the way data is stored/persisted will have an effect on the design (see Eric Evans’ book, page 159), the aim is to allow Business Layer code to treat the database as almost an in-memory collection. Entity Framework really helps with this.

The architecture of my Business Layer

Based on the philosophy listed above the implementation I use has the following characteristics.

1. I use Adapters widely

As I said above the Business Layer is in control of the data structures – what makes the most sense for the Business Layer is what it does. This means it normally deals in data classes and/or entity keys (DDD term for the primary keys of data/entity classes).

This means the data needed/produced by the business logic is often not in the right format for communicating to the user and/or other external APIs. I therefore use Adapters, i.e. something that transforms the data to/from what the business logic needs. The Service Layer does this for the Business-to-Presentation layer communication, while external services are adapted inside the Data Layer.

Further reading:

  • See Alistair Cockburn’s Hexagonal/Adapter-Port architecture. My applications are mainly in this style. I use Dependency Injection to link layers.
  • DDD calls these anti-corruption layers; see Eric Evans’ talk on DDD in which he talks about different ways to adapt across Bounded Contexts (the whole talk is great; skip to 20 mins in for the part on linking bounded contexts).

2. The Business Layer uses EF directly

My early applications used the Repository pattern and Unit of Work pattern. However, as EF has improved and I have learnt more, I have moved over to using EF directly in the Business Layer.

I found that the repository pattern in the end got in the way and made life much harder. Going for direct EF access allows the developer to use all the features of EF, and stops some of the rather tortuous approaches I used to employ with repositories.

You can read more about this in my two blog posts:

  1. Is the Repository pattern useful with Entity Framework?
  2. Is the Repository pattern useful with Entity Framework? – part 2

3. The Business Layer does not do the final data save.

This is subtle, but I have found it very helpful. I don’t want the Business Layer to really know about saving data. I can’t totally ignore the data access code, in my case Entity Framework (EF), in the Business Layer, but I do minimise it.

Therefore Business Layer methods add/insert new data into the in-memory data classes, or simply change any data loaded from the database. However the Business Layer never calls EF’s SaveChanges. That is done in the Service Layer that called it.

Why do I find this useful? There are a number of reasons:

  • The Business Layer does not have to handle database validation errors, which can occur when ‘SaveChanges’ is called. The Service Layer does that, which has more information available to it on what is going on.
  • I know that all the data will be saved in one transaction. It is either all there or it’s not.
  • If I need to chain multiple business methods I can use a transaction scope to rollback changes if one of the later methods fails.
  • The Service Layer can check other things to decide whether the data should be saved. I use this to allow users to save/not save on warnings.
  • It makes the Unit Testing of the Business Logic much easier as the database has not been changed. Most of my Business Logic has a property holding the data structures about to be written, so I can just check that.

Note: The discarding of data by not calling ‘SaveChanges’ only works in situations where each call has its own DbContext. This is the state in a web application, as each HTTP request gets a new DbContext.
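A minimal sketch of the pattern, with delegates standing in for the injected business class and the DbContext (all names are invented for illustration): the business method mutates the tracked classes but never saves, and the Service Layer owns the single SaveChanges call.

```csharp
using System;

// Hypothetical status class; the real libraries have richer error info
public class BizStatus { public bool IsValid = true; }

public class RunBizActionService<TIn>
{
    private readonly Func<TIn, BizStatus> _bizAction; // injected business method
    private readonly Func<int> _saveChanges;          // e.g. db.SaveChanges

    public RunBizActionService(Func<TIn, BizStatus> bizAction,
        Func<int> saveChanges)
    {
        _bizAction = bizAction;
        _saveChanges = saveChanges;
    }

    public BizStatus Run(TIn input)
    {
        var status = _bizAction(input);
        if (status.IsValid)
            _saveChanges();   // commit everything in one transaction
        // on errors nothing is saved: the per-request DbContext discards it
        return status;
    }
}
```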

My implementation: GenericActions library

I have a library called GenericActions which has a very similar interface to the GenericServices library. Note: This isn’t an open-source library, but is part of my proprietary Rapid-Build™ library set that I use when building applications for clients.

Like GenericServices, GenericActions can either work directly with the business class or, more usually, via a Data Transfer Object (DTO); see Why DTO? in the GenericServices documentation for more on DTOs. It uses DI to pull in a Business Layer class, hence making sure the Business Class can pick up any services it needs through constructor injection.

A typical call of a business method at the MVC controller level looks very much like a GenericServices call.

public ActionResult SetupGrader(GraderDto dto,
            IActionService<IBizGraderSetup> service)
{
    if (!ModelState.IsValid)
        //model errors so return immediately
        return View(dto);

    var response = service.RunAction(dto);
    if (response.IsValid)
    {
        TempData["message"] = response.SuccessMessage;
        return RedirectToAction("Index");
    }

    //else errors, so copy the errors over to the ModelState and return to view
    response.CopyErrorsToModelState(ModelState, dto);
    return View(dto);
}

The main difference is the definition of the service (the second parameter of the action above). The service contains the interface of the Business Layer class that contains the method it needs to call. DI is used to inject the right class into the service.

In the case above the action only returned a status to say if it was successful or not. However other Business Layer methods may need to return data, so there is another type of call that returns a class. This class can either be the class result produced by the business class or another DTO to do the conversion.

You can read about the way I implemented the GenericActions (and GenericServices) library in my new post, Using .NET Generics with a type derived at runtime.


Because GenericServices is focused on the database to/from Presentation Layer communication I have not previously described my view of the Business Layer. Hopefully this article fills in that gap by describing the whys and wherefores of the Business Layer in my architectural view of applications.

Happy coding!

Announcing NuGet release of GenericServices

I am pleased to say that I have finally released a version of the GenericServices on NuGet!

For those who are not aware of GenericServices, it is an open-source .NET library designed to help developers build a service layer, i.e. a layer that acts as a facade/adapter between your database in the data layer and your User Interface or HTTP service.

GenericServices makes heavy use of Entity Framework 6 (EF6) and .NET 4.5’s async/await commands. Its aim is to make the creation of the service layer simple while providing robust implementations of standard database access commands. You can get a summary of its features from the Why GenericServices? page on the GenericServices documentation site or the README page on the GenericServices’ GitHub home page.

Link to NuGet Version

The library is known as GenericServices on NuGet, so simply search with that name in the ‘Manage NuGet Packages’ in Visual Studio. You can also find it on NuGet at .

Why have you released on NuGet now?

I did not feel confident to release GenericServices onto NuGet, where it is likely to get much more use, until I had good documentation and examples. In the past two weeks I have:

  1. Written quite a bit of documentation and added a useful index in the sidebar. Do have a look at the documentation in the GenericServices’ Wiki.
  2. Made available a much more advanced example called SampleMvcWebAppComplex. SampleMvcWebAppComplex is an ASP.NET MVC5 web site which was built as a ‘stress test’ of how well the GenericServices library and Microsoft’s Entity Framework V6 could cope with an existing SQL database, AdventureWorksLT2012.
    1. The example web site is at
    2. The source code is available as a reference source. Note: this project is not fully open-source because it uses a paid-for library, Kendo UI MVC.
  3. Written two articles on the Simple-Talk web site that describe in detail the process I went through to build SampleMvcWebAppComplex. They are:

I still have the original, more basic example web site called SampleMvcWebApp. You can access this via:

  1. The live example web site is at
  2. The source code, which is fully open-source, is available too.

Happy coding!

Is the Repository pattern useful with Entity Framework? – part 2

Quick Summary of Part 2
Having decided that the Repository/Unit of Work pattern does not add anything to Entity Framework (EF), the question was how to tame EF to work with Domain-Driven Design principles. This post details my new approach to using EF and why it is better.

This is a series:

  1. Part 1: Analysing whether the Repository pattern is useful with Entity Framework.
  2. Part 2: Four months on – my solution to replacing the Repository pattern (this article).

Four months ago I was reviewing the design of Spatial Modeller™ version one and I came across people who said that the Repository/Unit of Work pattern was not useful with Microsoft’s main data access technology, Entity Framework (shortened to EF from now on).

See items marked with ‘UPDATED 2015‘ for new comments added eight months on from the first article.

And ‘UPDATED 2017′ for new comments added from my book Entity Framework Core in Action, which I am writing for Manning Publications.

At that time I wrote a blog post called ‘Is the Repository pattern useful with Entity Framework? – part 1‘ where I looked critically at what people said. At the time I was fairly convinced, but I needed to write some real code to see how I would do things without a repository.

Well, four months on the answer is: I’m convinced. The Repository/Unit of Work pattern does NOT add anything to EF. In fact it hides some useful features that a repository has trouble using. (UPDATE 2015: eight months on I am even more convinced, as I have now built some more representative applications.)

However the issue is: what replaces the good bits of the repository, like supporting Domain-Driven Design (DDD), which tries to treat the data persistence library as a simple in-memory collection? This means we want to hide some of EF’s code, but still be able to use all of its features. It has taken me a little while to refine my approach, but here are my thoughts so far.

One size does not fit all

After some trials it became clear that three parts of the application needed to interact with the database:
  1. The User Interface (UI) which needs to display data in lots of different ways and often needs help when editing data, i.e. it might have to provide the user with a list of options using a dropdown list.
  2. The WebApi or Web Service, which is similar to the User Interface but has less need for user selection from a list.
  3. The Business Layer which needs to hide some of the EF code and make the database feel like a searchable collection.
Realising that there were different parts was a breakthrough, as I could now design different ways to interface to EF to meet the different requirements. I combined the UI and the WebApi, as they are very similar, and then provided a different approach for the business layer.

Design 1: CRUD data access for User Interface layer and WebApi

The front-end of any application needs to do CRUD operations: that is, Create, Read, Update and Delete of entries. Create and Update can be particularly difficult, as they might need other data to guide the user. For instance on my example site the create/update of a Post needs the user to select the right Tags and, in my example, the Author (try and then edit an entry). That needs extra data, which is where Data Transfer Objects (DTOs) come in.

I am a strong proponent of DTOs; I have used them heavily and I find they really help decouple the data and business layers from the UI or WebApi. DTOs are also good at dealing with the issue of Aggregates, that is, a class that has subclasses under it. For instance you might have a class called Order, which has a collection of LineItems under it, and most likely a link to a Customer. DTOs can extract all the various parts and either ‘flatten’ them into one class or, for collections, extract only the parts of the data that the user wants/is allowed to see.

The important point is to design the reads from the database so that they turn into efficient SQL requests. This needs a bit of thought, but isn’t that hard to do. Using LINQ and IQueryable<T> requests allows paging and filtering to be added at the last minute in the UI/WebApi end. Also, adding the .Select function allows only the columns/properties you need to be read in. The benefit of all this is that the SQL request only asks for those rows and properties that the request needs, which makes the request as efficient as possible.
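As a sketch of that kind of read-side code (the entity and dto names are invented; against EF the Select projection means the generated SQL only asks for the needed columns, and Skip/Take provide the paging):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity: Content is deliberately NOT in the list dto,
// so a projected query never reads that column
public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
}

public class PostListDto
{
    public int PostId { get; set; }
    public string Title { get; set; }
}

public static class PostQueries
{
    // pageNum is zero-based; against EF 'posts' would be db.Posts
    public static List<PostListDto> GetPage(IQueryable<Post> posts,
        int pageNum, int pageSize)
    {
        return posts
            .OrderBy(p => p.PostId)
            .Select(p => new PostListDto { PostId = p.PostId, Title = p.Title })
            .Skip(pageNum * pageSize)
            .Take(pageSize)
            .ToList();
    }
}
```

Because everything up to ToList stays as an IQueryable, EF can compose the projection, ordering and paging into a single SQL request.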

So an approach that uses DTOs is good. The only downside is that writing the data access code for DTOs is repetitive and boring. I therefore looked at how to automate it. As I said in part 1 of this post I thought I might use T4 code generation, but in the end I pursued a route using .NET Generics. The result is an open-source library called GenericServices which provides List, Detail, Create, Update and Delete functions on either the EF data classes themselves or, more interestingly, via DTOs linked to EF data classes.

Design 1: Implementation (UPDATED 2015)

Rather than describe how this all works here, I have built two example web sites.

1. SampleMvcWebApp, which is a simple application to show GenericServices in action.

2. (UPDATE 2015) Complex.SampleMvcWebApp, which is a much more realistic example of a real application. The interesting thing for me is that it only took me two weeks to write the whole application (well, plus a few days for adding supporting comments and a few tweaks to improve it). That is way faster than my previous applications and proved to me the power of GenericServices.

I have written two articles that describe the design of the Complex.SampleMvcWebApp site. They are:

(UPDATE 2015) The GenericServices Wiki now contains lots of information, and GenericServices is now available on NuGet (see GenericServices on NuGet). This means you can try GenericServices yourself.

Design 2: The Business Layer (UPDATED 2015)

UPDATE 2017: I now have an improved approach to handling business logic, which you can find in the article Architecture of Business Layer working with Entity Framework (Core and v6) – revisited and in Chapter 4 of the book I am writing for Manning Publications, Entity Framework Core in Action.

For the business layer I have taken a different approach. From a design/architecture point of view I do three things.

  1. I allow the business layer to directly access the data via Entity Framework (EF). No repository, no unit of work, just good old EF. This means that I have access to all of EF’s commands and nothing is hidden from me.
  2. The problem is that EF on its own can be a bit of a handful, so I tame it with ‘Query Objects’. Mainly I want to make repetitive or complicated EF code appear as simple methods with intention-revealing names. See the implementation section for an example of this.
  3. In the business layer I try to treat the database as an in-memory collection with some search procedures, i.e. Query Objects. I try not to use the data access methods other than for loading or inserting new entries. Any other commands are hidden in Query Objects or other methods.

Let me explain how this works by talking about the implementation.

Design 2: Implementation

So the implementation of using EF in the business layer consists of four parts:

1. Building ‘Query Objects’ to hide some of the complex filtering/loading

UPDATE 2017: In chapter 2 of the book Entity Framework Core in Action I describe building a book listing with sorting, filtering and paging using a series query objects. You can see a live example site showing you those query objects in action.

In the first post I mentioned that people were talking about Query Objects, which are an encapsulation of particular database queries or updates. In C#, extension methods are a great tool for building Query Objects. If you have seen LINQ fluent commands then you have seen how extension methods work. Below is an example from Spatial Modeller™ that loads all the schemes whose key is in the array parameter, together with each scheme’s layers, members, attributes and services.

public static IQueryable<SmScheme> GetSchemesByIdIncludingAllChildClasses
    (this SpatialModellerDb db, ICollection<int> schemeIdsToLoad)
{
    return db.SmSchemes
        .Include(x => x.Grades)
        .Include(
            x => x.Layers.Select(y => y.Members
                              .Select(z => z.Attributes)))
        .Include(
            x =>
                x.Layers.Select(y => y.Members
                               .Select(z => z.ServicesHandled)))
        .Where(x => schemeIdsToLoad.Contains(x.SmSchemeId));
}

This code hides the EF code needed to perform the database action inside a nicely named method. It lives in the data/infrastructure layer, which means the EF code is centralised in one place. I would also recommend the article ‘Giving Clarity to LINQ Queries by Extending Expressions‘, which has some excellent ideas and source code for building ‘query objects’.
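To make the benefit concrete, here is a small sketch of how a business-layer method might call the query object above. The ids passed in are purely illustrative, and `db` is assumed to be an open SpatialModellerDb context.

```csharp
// Hypothetical caller: load three schemes with all their child data in one query.
// The ids and the 'db' variable are illustrative, not from the real project.
var schemes = db.GetSchemesByIdIncludingAllChildClasses(new[] { 1, 2, 3 })
               .ToList();
```

Because the extension method returns IQueryable, a caller can still compose further filtering or sorting onto it before the query is executed against the database.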

2. Using Domain-Driven Design (DDD) principles

I really like the thinking behind DDD. It has made me a better programmer. While DDD accepts that the way data is stored/persisted will have an effect on the design (see Eric Evans’ book, page 159), the aim is to allow Business Layer code to treat the database as an in-memory collection with some search procedures, with as little interaction as possible with the data access code.

This isn’t done just to hide the database, but mainly to ensure that the developers working in the business layer are thinking about the business problem first, not how the database works. DDD recognises that there might need to be some implementation compromises to work with the database, but the overall aim is to keep the focus on the business problems.

The other issue that affects the approach used is that the actual data saved in the database is designed for the business layer. Therefore the business layer uses the data classes directly, as they are a good fit for the business problem.

Update after a lot more business layer work

I still use a DDD approach, but I have found that EF doesn’t really support a pure DDD design, for a number of reasons. Firstly, the EF database classes need to live at the data layer level, so these classes aren’t really the right place for business logic (I have a separate layer/assembly for that). Also, EF sets certain restrictions on the way collections are handled, which means you can’t use an IReadOnlyCollection. That makes it hard to enforce a DDD root/aggregate approach, where the sub-classes of a DDD root can only be changed via the root. Because of this I still follow a DDD design approach, but some of the more stringent access rules have to be implemented by setting standards, not by programmatically enforced rules.

Even so my business layer is able to use the DDD principle of treating the database as just classes, which makes the business logic much simpler. To do this a typical business pattern has three phases:

  1. Read in any data it needs for the business logic.
  2. Some pure business logic working on the data classes – the bulk of the code.
  3. An optional add/remove phase if any new data needs to be added to the database (note: updates happen automatically if you just change the database classes read in during phase 1).
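As a sketch of the three phases above – not code from the real project, with all class and property names invented for illustration – a business action might look like this:

```csharp
// Illustrative only: AllocateMemberAction and its members are hypothetical.
public class AllocateMemberAction
{
    private readonly SpatialModellerDb _db;

    public AllocateMemberAction(SpatialModellerDb db) { _db = db; }

    public SmMember DoAction(int schemeId, string memberName)
    {
        // Phase 1: read in the data the business logic needs
        var scheme = _db.SmSchemes.Find(schemeId);

        // Phase 2: pure business logic working on the data classes
        var member = new SmMember { Name = memberName };
        scheme.Layers.First().Members.Add(member);

        // Phase 3: nothing extra to add here - the new member was attached via
        // the loaded scheme, so EF's change tracker will pick it up.
        // Note: no call to SaveChanges() anywhere in the business layer.
        return member;
    }
}
```

Because the method only reads entities and mutates tracked classes, everything it did remains uncommitted until some outer layer decides to persist it, which is exactly the point of the next section.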

3. The Business Layer never calls EF’s .SaveChanges() to persist data

The business layer should not know about persisting/updating data. I therefore design the business layer to call .Add() or .Remove() on the EF collections, but the final update is triggered by the Service Layer after the business process returns.
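A minimal sketch of that division of responsibility, with hypothetical interface and class names (this is not the GenericActions library mentioned later), might be:

```csharp
// Hypothetical business action contract: runs logic, never calls SaveChanges().
public interface IBusinessAction<TIn, TOut>
{
    TOut DoAction(TIn input);
}

// Hypothetical Service Layer runner: the one place that commits the changes.
public class ActionRunnerService
{
    private readonly SpatialModellerDb _db;

    public ActionRunnerService(SpatialModellerDb db) { _db = db; }

    public TOut RunAction<TIn, TOut>(IBusinessAction<TIn, TOut> action, TIn input)
    {
        // The business action only reads entities and calls .Add()/.Remove().
        var result = action.DoAction(input);

        // Single commit, owned by the Service Layer, after the action returns.
        _db.SaveChanges();
        return result;
    }
}
```

In a fuller version the runner would inspect the action's status and skip the commit on failure, which is where the "all or nothing" benefit below comes from.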

This sounds subtle, but it has proved to be really useful. I go into this in more detail in my new article Architecture of Business Layer working with Entity Framework, but here are three good reasons why it helps.

  • The single big commit means the modelled data is either all there or not there at all – no cleaning up required if an early part of the business logic had been committed but a later part failed.
  • The Business Layer does not have to handle database validation errors, which can occur when SaveChanges is called. The Service Layer handles those, and it is better placed to deal with any errors.
  • The business layer is easier to test, as it adds data to the EF collections but doesn’t change the database.

Conclusion (UPDATED 2015)

Doing all this is hard work, especially building the GenericServices library, but I really do think this will speed up future development. I built the SampleMvcWebApp fairly quickly, but that was because I was developing GenericServices.

However when I built Complex.SampleMvcWebApp, which is a much more representative application, it only took two weeks, which I thought was fantastic. Have a look at the site – it’s fairly complex in terms of data manipulation, even if I didn’t spend time on the visual design. NOTE: I have not implemented any business logic in the sample. If I had, it would have taken a lot longer, because business logic is by its very nature specific to a project.

The things I immediately noticed when swapping to the new approach are:

  • The separation of concerns is much better, with the EF code more focused to where it is needed.
  • Creating a good repository pattern is hard and I used to find myself fiddling with the repository implementation during the project as I hit new issues. That is all gone.
  • Using EF directly allows me to use the full range of commands, like Load to load relationships that are not currently loaded. I now realise that when I used a repository pattern I was having to tweak the code to get round things that EF already had commands to solve.

The primary helper is GenericServices, which as of 2015 is released on NuGet – see GenericServices on NuGet. Without GenericServices it would be a hard slog to build the Service Layer. What I noticed GenericServices did was:

  • Writing the DTOs is all about getting the right names/properties. Everything else is handled by GenericServices. So much quicker to write and more robust.
  • If there was a special case then it was easy to override one of the GenericServices DTOs’ virtual methods. As part of the exercise of building Complex.SampleMvcWebApp I made the method names more obvious, to stop me having to look them up every time.
  • The MVC Controller code is now the same for all but the most complex situations, with just the data classes changing. This makes writing the MVC side much easier, and less error prone. I plan to create some scaffolding, but I am currently waiting for MVC 6 (vNext), which has new scaffolding options.

I have also written a private library called GenericActions to help with calling business methods, but that is a subject of another article.

I hope that is helpful to people. Happy coding!