The Reformed Programmer

Bits and pieces about programming

Architecture of Business Layer working with Entity Framework (Core and v6) – revisited

08/22/2016

I wrote an article a while ago called Architecture of Business Layer working with Entity Framework, which has been popular. I am now playing with Entity Framework (EF) Core and some sample code made me think – could I better isolate the EF part of my Business Logic? The answer is yes, and this article describes the new and improved approach, with a lot more examples.

Note: All the code in this article is using .NET Core and Entity Framework Core (EF Core). However, the code is easy to understand for anyone who has played with either flavour of EF, and the concepts work equally well with EF 6.x or EF Core.

What is the Business Layer?

If you are new to the idea of Business Logic, then I suggest you read the section near the top called ‘What is the Business Layer’ in my original article, as it gives a good description. However, if you are in a hurry, here is the short version.

Business Logic is what we call pieces of code that carry out operations that are specific to the job that the application is carrying out. For instance, on an e-commerce site like Amazon you would have Business Logic that handles a customer’s purchase: checking on availability and delivery, taking payments, sending an email confirmation etc. Business logic is definitely a step up on complexity over CRUD (Create, Read, Update and Delete) operations.

While Business Logic can be spread throughout an application and the database, it is accepted best practice to try and isolate the Business Logic into one place, which I am calling the Business Layer.

Aside: While having all the business rules in the Business Layer is something we should always strive for, I find that in practice some issues go outside the Business Layer for good reasons. For instance, validation of data often flows up to the Presentation Layer so that the user gets early feedback. It may also flow down to the database, which checks data written to the database, to ensure database integrity.

Other business logic appears in the Presentation Layer, like not allowing users to buy unless they provide a credit card, but the status should be controlled by the Business Layer. Part of the skill is to decide whether what you are doing is justifiable or will come back and bite you later! I often get that wrong, but hopefully I learn from my mistakes.

The (improved) architecture of my Business Layer

Let me start with a diagram of the new, improved structure of the Business Logic within an application – in this case an ASP.NET web application but the same approach would work in many other application types.

BizLogic and BizDbAccess layer diagram - simplified

Next I will explain each of the main layers, with example code.

1. The DataLayer

The Data Layer is where the EF POCO (entity) classes are defined, along with the EF setup and DbContext. I am not going to describe how to create the EF entity classes or the setup of the DbContext, as I will assume you already know about EF. For information on how to set up the database and EF entities, see the official EF documentation.

Note: I have shown the Data Layer as one assembly, but in bigger applications the EF entity classes are often in a different assembly to the EF setup code. In that case the BizLogic would only link to the EF entity classes.
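
To make the later code easier to follow, here is a rough sketch of the sort of entity classes and application DbContext the examples below assume. It is only a sketch – the classes in the real example application have more properties and configuration, and the Title, Price and NumBooks properties are purely illustrative:

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }      // illustrative property
    public decimal Price { get; set; }     // illustrative property
}

public class LineItem
{
    public int LineItemId { get; set; }
    public int BookId { get; set; }        // Foreign Key to Book
    public short NumBooks { get; set; }    // illustrative property
}

public class Order
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public ICollection<LineItem> LineItems { get; set; }
}

public class EfCoreContext : DbContext
{
    public EfCoreContext(DbContextOptions<EfCoreContext> options)
        : base(options) { }

    public DbSet<Book> Books { get; set; }
    public DbSet<Order> Orders { get; set; }
}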

2. The BizLogic Layer

I have written numerous applications, all of which have some quite complex Business Logic. Over the years I have tried different ways of organising these applications and I have come up with one key philosophy – that “the Business Layer is King”, i.e. its design and needs drive everything else.

I am a big fan of Domain-Driven Design (DDD) and Eric Evans’ seminal book on DDD, Domain-Driven Design. Eric Evans’ book talks about how the business problem we are trying to solve, called the “Domain Model”, should be “the heart of the Software” (see Eric Evans’ book, page 4). Eric goes on to say “When the domain is complex, this is a difficult task, calling for the concentrated effort of talented and skilled people”. Therefore, I try to make sure that the Business Logic’s data structures are defined by, and solely focused on, the business problem. How that data is stored and how that data is viewed are secondary issues.

Because of the way EF works it is not sensible to combine the Business Logic with the actual EF POCO classes. Eric Evans recognises and discusses this problem in his book (see page 159). However, lots of people have tried to make EF more DDD-like.

NOTE: There is plenty of debate on how to apply DDD to EF. Personally I did try making the EF entity classes behave in a DDD way, but for me the amount of work for the gain in separation became too much (it is easier in EF Core though). In trying to go fully DDD with EF I felt I was losing the core tenet of Eric Evans’ book, which is that the business problem should be the focus, not the technology. I recommend Jimmy Bogard’s comments near the end of the EF issue ‘The impedance mismatch between EF and domain models: what is Entity Framework’s answer?’, which I think are very helpful.

My rules for the Business Logic are:

a. The Business Logic data classes should not be affected by other higher layers

Business Logic can be hard to write. Therefore, I want to stay focused on the business issue and not worry about how other layers above might need to reformat/adapt it. So, all data in or out of the Business Logic is either defined inside the BizLogic layer, or it is a Data Layer class. It is the Service Layer’s job to act as an Adapter, i.e. the Service Layer converts any data to/from what the Business Logic needs.

In new/modern databases the EF POCO (entity) classes should be a very good fit to the Business Logic needs, with a few extra properties needed to set up the database relationships, e.g. Foreign Keys. Therefore, the Business Logic can use the EF POCO (entity) classes directly without any adaption.

However, one of my readers pointed out that for old/legacy databases the fit between what the database holds and what the Business Logic needs may not be so good.

b. The Business Logic works on a set of in-memory data

I don’t want the Business Logic worrying about how to load and save data in the database. It is much simpler to design the Business Logic code to work on a simple, in-memory set of data. The changes described in this article improve this significantly over the previous article.

In the previous approach I used to have a specific part of the Business Logic, normally at the start, which loaded all of the data. However, the new approach has what I call a DbAccess class which handles all the reading and writing (see later for why this is better). The DbAccess class is an implementation of the Facade pattern.

Example Code

Here is an example of Business Logic which is creating the first part of the order, which consists of an Order class instance with a number of LineItems. It is handed the customer’s Id and a list of LineItems, each of which contains a BookId and how many copies of that book the customer wants.

namespace BizLogic.Orders.Concrete
{
    public interface IPlaceOrderAction : IBizActionAsync<PlaceOrderInDto, Order> { }

    public class PlaceOrderAction : IPlaceOrderAction
    {
        private readonly IPlaceOrderDbAccess _dbAccess;
        private readonly List<string> _errors = new List<string>();

        public IImmutableList<string> Errors => _errors.ToImmutableList();

        public bool HasErrors => _errors.Any();

        public PlaceOrderAction(IPlaceOrderDbAccess dbAccess)
        {
            _dbAccess = dbAccess;
        }

        /// <summary>
        /// This validates the input and if OK creates an order 
        /// and calls the _dbAccess to add to orders
        /// </summary>
        /// <returns>returns an Order. Can be null if there are errors</returns>
        public async Task<Order> ActionAsync(PlaceOrderInDto dto)
        {
            if (!dto.TsAndCsAccepted)
            {
                _errors.Add("You must accept the T and Cs to place an order.");
                return null;
            }
            if (!dto.LineItems.Any())
            {
                _errors.Add("No items in your basket.");
                return null;
            }

            var booksDict = await 
                _dbAccess.FindBooksByIdsAsync(
                    dto.LineItems.Select(x => x.BookId))
                .ConfigureAwait(false);
            var order = new Order
            {
                CustomerName = dto.UserId,
                LineItems = FormLineItemsWithErrorChecking(
                     dto.LineItems, booksDict)
            };    

            _dbAccess.Add(order);

            return order;
        }

        private List<LineItem> FormLineItemsWithErrorChecking(
             IEnumerable<OrderLineItem> lineItems, 
             IDictionary<int,Book> booksDict)
        {
           // rest of the code left out…

A few things to point out about the above.

  1. We have created a generic IBizActionAsync<TIn, TOut> interface that the PlaceOrderAction class has to implement. This is important as we use a generic BizRunner to run all the actions (described later in the Service Layer section). The IBizActionAsync interface ensures we have the Errors and HasErrors properties as well as the ActionAsync method (a sketch of this interface is shown after the note below).
  2. It uses the IPlaceOrderDbAccess provided via the constructor to a) find the books that were referred to in the order and b) to add the order to the EF db.Orders set.

NOTE: I also setup an interface, IPlaceOrderAction. I need this to get Dependency Injection (DI) to work. But I won’t be describing DI in this article. However, I really recommend the use of DI on any non-trivial application.
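
The article never shows the generic interface itself, so here is a minimal sketch of what IBizActionAsync<TIn, TOut> could look like, inferred from how PlaceOrderAction and the BizRunner (shown later) use it – the interface in my actual code may differ slightly:

public interface IBizActionAsync<TIn, TOut>
{
    // Errors found while running the business logic
    IImmutableList<string> Errors { get; }
    bool HasErrors { get; }

    // Runs the business logic on the in-memory data handed to it
    Task<TOut> ActionAsync(TIn dto);
}

With that interface and DI in place, the registrations might be as simple as services.AddTransient<IPlaceOrderAction, PlaceOrderAction>() and services.AddTransient<IPlaceOrderDbAccess, PlaceOrderDbAccess>() in ASP.NET Core’s built-in container.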

3. The BizDbAccess Layer (new)

The BizDbAccess layer contains a corresponding class for each BizLogic class that accesses the database. It is a very thin Facade over the EF calls. I should stress, I am not trying to hide the EF calls, or make a repository. I am just trying to isolate the calls. Let me explain.

The problem with my previous approach, where the Business Logic called EF directly, wasn’t that it didn’t work, but that it was hard to remember where all the EF calls were. When it came to refactoring, especially when doing performance improvements, then I had to hunt around to check I had got every EF call. By simply moving all the EF calls into another class I have all of the EF commands in one place.

The main approach I will describe assumes that the database is of a new/modern design and there is a good fit between the data held in the database and the Business Logic requirements. The code below is a PlaceOrderDbAccess class, which goes with the PlaceOrderAction Business Logic described above:

public class PlaceOrderDbAccess : IPlaceOrderDbAccess
{
    private readonly EfCoreContext _db;

    public PlaceOrderDbAccess(EfCoreContext db)
    {
        _db = db;
    }

    /// <summary>
    /// This finds any books that fits the BookIds given to it
    /// </summary>
    /// <returns>A dictionary with the BookId as the key, and the Book as the value</returns>
    public async Task<IDictionary<int, Book>> FindBooksByIdsAsync(IEnumerable<int> bookIds)
    {
        return await _db.Books.Where(x => bookIds.Contains(x.BookId))
            .ToDictionaryAsync(key => key.BookId)
            .ConfigureAwait(false);
    }

    public void Add(Order newOrder)
    {
        _db.Orders.Add(newOrder);
    }
}

Please note that I use a method called Add, to add the order to the EF Orders DbSet. As I said, I am not trying to hide what is going on, but just isolate the commands, so using the same terminology is helpful.

[Image: solution view showing the BizLogic and BizDbAccess directories]
NOTE: To help with finding the relevant DbAccess class I call the Business Logic class <name>Action and the DbAccess class <name>DbAccess. They also share the same top-level directory name. See the picture of the solution directories above.

A secondary, but very powerful, improvement is that I can test my Business Logic without using an EF database. Because I provide an interface, I can replace the DbAccess class with a Mock. Here is my Mock for PlaceOrderDbAccess.

public class MockPlaceOrderDbAccess : IPlaceOrderDbAccess
{
    public ImmutableList<Book> Books { get; private set; }

    public Order AddedOrder { get; private set; }
    public MockPlaceOrderDbAccess()
    {
        Books = CreateTestData.CreateBooks().ToImmutableList();
    }

    /// <summary>
    /// This finds any books that fits the BookIds given to it
    /// </summary>
    /// <param name="bookIds"></param>
    /// <returns>A dictionary with the BookId as the key, and the Book as the value</returns>
    public async Task<IDictionary<int, Book>> FindBooksByIdsAsync(
        IEnumerable<int> bookIds)
    {
        return Books.AsQueryable()
            .Where(x => bookIds.Contains(x.BookId))
            .ToDictionary(key => key.BookId);
    }

    public void Add(Order newOrder)
    {
        AddedOrder = newOrder;
    }
}

Being able to test all your Business Logic that way is very useful.

Handling old/legacy databases

One of my readers has been using a similar approach to the one described in this article for some time. He pointed out in one of his comments that he finds this approach useful when the database tables are not a good fit for the Business Logic.

At that point, in addition to isolating the EF commands, the BizDbAccess class can carry out any adapting/reformatting of the data between the database and the Business Logic.

For example, I have come across a database which did not define its relationships using Foreign Keys. This stops EF from doing relational fixup, i.e. linking the various entities it has loaded. In that case you would add code to the DbAccess class to link the loaded entities so that the Business Logic can work on a linked, in-memory set of classes.
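
As a rough illustration (the entity and property names below are hypothetical, not from the example application), a DbAccess method for such a legacy database might look something like this:

public async Task<LegacyOrder> FindOrderWithLinkedDataAsync(int orderId)
{
    // Load the order and its line rows with separate queries, because the
    // legacy database defines no Foreign Key relationships for EF to follow
    var order = await _db.LegacyOrders
        .SingleAsync(o => o.OrderId == orderId);
    order.Lines = await _db.LegacyOrderLines
        .Where(l => l.OrderId == orderId)
        .ToListAsync();

    // Do the 'relational fixup' by hand, linking each line to its product,
    // so the Business Logic receives a fully linked, in-memory set of classes
    var productIds = order.Lines.Select(l => l.ProductId).ToList();
    var products = await _db.LegacyProducts
        .Where(p => productIds.Contains(p.ProductId))
        .ToDictionaryAsync(p => p.ProductId);
    foreach (var line in order.Lines)
    {
        line.Product = products[line.ProductId];
    }
    return order;
}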

The Service Layer

As I said earlier, in my applications the Service Layer is a very important layer that links everything together. The Service Layer uses both the Command pattern and the Adapter pattern between the presentation layer and all the layers below.

Let me start by showing you one of what I call the BizRunner classes, which is the Command part of the code. This class contains the code to run the PlaceOrderAction method above, which has a signature of taking data in and producing data out. It also needs data to be written to the database, hence the format/content of this BizRunner:

public class RunnerWriteDbAsync<TIn, TOut>
{
    private readonly IBizActionAsync<TIn, TOut> _actionClass;
    private readonly DbContext _db;

    public RunnerWriteDbAsync(IBizActionAsync<TIn, TOut> actionClass, DbContext db)
    {
        _db = db;
        _actionClass = actionClass;
    }

    public async Task<TOut> RunActionAsync(TIn dataIn)
    {
        var result = await _actionClass.ActionAsync(dataIn).ConfigureAwait(false);
        if (!_actionClass.HasErrors)
            await _db.SaveChangesAsync();

        return result;
    }
}

The RunnerWriteDbAsync BizRunner is a simple, but powerful, class which, through the use of interfaces and generics, can run different Business Logic.

One thing to note in this arrangement is that only the BizRunner’s RunActionAsync method should call the EF command SaveChangesAsync. That is important for a number of reasons:

  1. If any of the BizLogic/BizDbAccess methods called SaveChanges arbitrarily then you could get problems, with part of the data being saved while another part wasn’t. By having only the BizRunner call SaveChanges you know that all the Business Logic has finished before the data is saved.
  2. A subtler (and more advanced) reason is it allows us to chain together multiple Business Logic calls within a transaction (I describe this in the article ‘Architecture of Business Layer – Calling multiple business methods in one HTTP request’).
  3. The handling of errors is important, and you can see that the BizRunner will only call EF’s SaveChanges if there are no errors.
  4. EF6.x and EF Core differ on what happens when you call SaveChanges.
    1. In EF6.x it validates the data against any validation attributes, such as [MaxLength(20)], and against IValidatableObject if present. This means you will get a DbEntityValidationException if any validation error is found. The exception contains validation errors for every incorrect property or class that was added or updated.
    2. EF Core uses a lighter touch and does not validate the data, because in many cases the data has already been validated on input. Clearly that isn’t the case with business logic, so you may want to add this feature back to EF Core by overriding SaveChanges and adding validation – see this article for how to do that (a rough sketch follows this list).
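
For reference, here is a minimal sketch, written by me rather than taken from the referenced article, of overriding SaveChanges in EF Core so that DataAnnotations validation runs on any added or modified entities:

using System.ComponentModel.DataAnnotations;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class ValidatingDbContext : DbContext
{
    //... DbSets and configuration left out

    public override int SaveChanges()
    {
        var entriesToCheck = ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Added ||
                        e.State == EntityState.Modified);
        foreach (var entry in entriesToCheck)
        {
            // Throws a ValidationException if any attribute, e.g. [MaxLength(20)],
            // or an IValidatableObject check is not satisfied
            var validationContext = new ValidationContext(entry.Entity);
            Validator.ValidateObject(entry.Entity, validationContext,
                validateAllProperties: true);
        }
        return base.SaveChanges();
    }
}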

Note: The discarding of data by not calling ‘SaveChanges’ only works in situations where each call has its own DbContext. This is the case in a web application, as each HTTP request gets a new DbContext. However, in Windows Applications etc., where the DbContext can be kept alive for longer, you need to be careful about the lifetime/disposal of the DbContext.

Here is the code inside the Service Layer that uses the BizRunner to run PlaceOrderAction. You can see it acts as an Adapter for the input and output of the Business Logic. The method:

  1. Gets current basket from a cookie and converts it into the format that the Business Logic needs.
  2. Calls the BizRunner’s RunActionAsync method, which runs the code in the PlaceOrderAction class.
  3. If the RunActionAsync method was successful it clears the basket, as the order now holds those items.
  4. It extracts and returns the OrderId from the Order instance, as that is what the Presentation Layer needs.

The code is:

public class PlaceOrderService
{
    private readonly CheckoutCookie _checkoutCookie;
    private readonly EfCoreContext _db;

    public IImmutableList<string> Errors { get; private set; }

    public PlaceOrderService(IRequestCookieCollection cookiesIn, IResponseCookies cookiesOut, EfCoreContext db)
    {
        _db = db;
        _checkoutCookie = new CheckoutCookie(cookiesIn, cookiesOut);
    }

    /// <summary>
    /// This creates the order and, if successful clears the cookie
    /// </summary>
    /// <returns>Returns the OrderId, or zero if errors</returns>
    public async Task<int> PlaceOrderAsync(bool tsAndCsAccepted)
    {
        var checkoutService = new CheckoutCookieService(_checkoutCookie.GetValue());

        var action = new PlaceOrderAction(new PlaceOrderDbAccess(_db));
        var runner = new RunnerWriteDbAsync<PlaceOrderInDto, Order>(action, _db);

        var order = await runner.RunActionAsync(
            new PlaceOrderInDto(tsAndCsAccepted, 
            checkoutService.UserId, checkoutService.LineItems));
        Errors = action.Errors;

        if (action.HasErrors) return 0;

        //successful so clear the cookie line items
        checkoutService.ClearAllLineItems();
        _checkoutCookie.AddOrUpdateCookie(
             checkoutService.EncodeForCookie());

        return order.OrderId;
    }
}

You can see the PlaceOrderAction instance being created, along with its linked PlaceOrderDbAccess class. This would normally be done by Dependency Injection, but to make the example simpler to understand I have created it by hand.

You can see the Adapter pattern in action. On input the information is contained in a cookie, but the Business Layer shouldn’t have to deal with that. Therefore, the Service Layer method PlaceOrderAsync converts the cookie content into the form that the PlaceOrderAction wants.

Similarly, the PlaceOrderAction Business Logic returns an Order instance, but the Presentation Layer actually wants the OrderId. Again, the PlaceOrderAsync method extracts the OrderId from the Order and returns that to the MVC Action.

Interestingly, the pattern of returning a database class instance is quite typical, as the Business Logic won’t know the entity’s key value, because it isn’t generated until the BizRunner in the Service Layer calls EF’s SaveChanges. It then becomes the job of the Service Layer to extract/adapt the parts that are needed by the Presentation Layer.

NOTE: The BizRunner and the code above work fine, but are simpler than what I use in practice. I have a private library called GenericActions, which has the same sort of features as my GenericServices library. It can identify the right signature of BizRunner to call by unpacking the interface parts. GenericActions also uses the same ISuccessOrErrors<T> class as GenericServices, which returns both the status/errors and the class being returned. It also has the AutoMapper mapping built in to do any needed adapting.

Presentation layer

For completeness I have included the presentation layer, which in this case is an ASP.NET Core MVC controller action. If the order is successful it redirects to the confirmation page, otherwise it copies the errors into the ModelState and shows the basket again, with the error message(s).

public async Task<IActionResult> PlaceOrder(bool iAcceptTAndCs)
{
    var service = new PlaceOrderService(
        HttpContext.Request.Cookies, 
        HttpContext.Response.Cookies, _db);
    var orderId = await service.PlaceOrderAsync(iAcceptTAndCs);

    if (!service.Errors.Any())
        return RedirectToAction("ConfirmOrder", "Orders", 
             new { orderId});

    //Otherwise errors, so copy over and redisplay
    foreach (var error in service.Errors)
    {
        ModelState.AddModelError("", error);
    }
    var listService = new CheckoutListService(_db, 
        HttpContext.Request.Cookies);
    return View(listService.GetCheckoutList());
}

I try to keep my Controller Actions small and simple by putting as much logic as possible into the Service Layer or lower layers. I do this because it is hard to test Controller actions, but much easier to Unit Test the Service Layer or lower layers.

Quick aside – handling errors needs thought.

This article is already quite long, so I won’t dwell on this subject, but providing good error feedback to the user is not trivial. Many libraries use Exceptions to report errors – some of these errors are user friendly, like validation errors. However, many Exceptions, such as EF’s DbUpdateException, are very unfriendly and sometimes produce messages that relate to the inner workings of the application, and hence should not be shown for security reasons.

In this article I have implemented a very simple error feedback mechanism. In practice you would need to implement a much more comprehensive approach. I suggest you look at my open-source library, GenericServices, which uses ISuccessOrErrors<T> to return a result with possible error messages.

Conclusion

Writing CRUD (Create, Read, Update and Delete) methods used to take a lot of (boring) development time writing boiler-plate code. I solved that by writing a library called GenericServices, which has radically improved the speed of development (see this CRUD-only site, which I wrote in about 10 days).

I now find I spend most of my development effort on a) the Business Logic and b) building responsive user-interfaces. Therefore, continuing to improve my approach to writing Business Logic is well worth the effort, which is what this article is all about. (See a different article on how I am also improving the way I build responsive user-interfaces.)

I recently reviewed an e-commerce application I developed which used the older approach to writing Business Logic, i.e. EF called directly in the Business Logic. The Business Logic was quite complex, especially around pricing and delivery. My conclusion was that the Business Logic worked fine, but it was sometimes difficult to find and understand the database accesses when refactoring and performance tuning the Business Logic.

I tried out the new approach described in this article by refactoring some of the Business Logic in this e-commerce application over to the new approach. Overall it did make a difference, bringing all the EF access code into one clear group per Business Logic case.

In addition I wrote an example application using ASP.NET Core and EF Core with this new approach, again with good results (this is where the example code in this article comes from). Couple this with the other positive feature of being able to test the Business Logic with a Mock DbAccess class, rather than accessing EF directly, and it is clearly a good step forward.

The only downside to this new approach is that it needs one more class than the previous approach. However, that class does provide a much better separation of concerns. Therefore, I think this new approach is a positive improvement on my previous approach to building Business Logic and is worth the extra effort of another class.

I hope this article gives you some ideas on how to better design and build complex Business Logic in your applications.

Unit Testing React components that use Redux

07/05/2016

React.js is built to make it easy to Unit Test, and there is plenty of information on how to do this. However, as you build more complex applications and employ a store pattern like Redux, things get a little more complicated. This article goes through the issues you need to consider and is supported by a full set of open-source examples on GitHub.

This article is the fourth in the series I am writing about creating React.js samples that are a good starting point for real-world, production-ready applications with full React.js build, test, and deploy capabilities. The other articles in the series are:

  1. Templates for building React.js front-ends in ASP.NET Core and MVC5
  2. Using a Redux store in your React.js application
  3. Adding mocking to React.js Unit Tests
  4. This article: Unit Testing React components that use Redux

The aims of this article

  • Choosing the right React Testing Library to handle Redux.
  • How to test React components that use Redux.
  • What to do about nested components that use Redux.
  • How to test events when Redux is involved.
  • Quick aside on other decorators like DragAndDrop.

I’m going to assume you at least know what React is and what Redux does. If you don’t know React try these article links [1, 2, 3]. I would also really recommend the book “Pro React”, which helped me a lot. If you don’t know what Redux is and why it is useful I cover that in my second article, Using a Redux store in your React.js application.

Note: One word of warning:  React.js is still changing and articles and books get out of date. This article and open-source AspNetReactSamples on GitHub uses React version 15.0.2  and Redux 3.5.2 (see package.json for full list).

Quick introduction

I had time between projects and I wanted to create a solid foundation for the development of React.js applications. In my first article I describe the tool chain that is needed to build and test React.js applications. As I am a full stack developer focused on Microsoft’s ASP.NET for the server side, I built around that (Note: you don’t need any knowledge of ASP.NET to understand this article – it’s all about React.js).

In my first article I describe the Unit Test setup. So you don’t have to wade through the article here is a summary:

  • I chose Mocha as my test framework. Facebook recommend Jest for Unit Testing React, but a number of posts [1, 2, 3] said Jest was slow and Mocha was better. I have also used Mocha successfully in the past, which gave me confidence in this approach.
  • I used Karma as my test runner. Most people do and I have used it before.
  • I run my Unit Tests inside a real browser, Chrome, rather than a phantom browser so I can debug.
  • I used AirBnB’s Enzyme React testing utility. More on this later.
  • I added NPM scripts to the ReactWebPack.Core project’s package.json file so that the tests can be run in the same way as the build.

Choosing the React Testing Library

There are a number of unit testing libraries for React. The obvious one is React’s own ReactTestUtils. However, I found ReactTestUtils didn’t have great documentation and it has some problems with Redux, e.g. ReactTestUtils doesn’t always seem to like Redux’s Provider class, which is used to pass the Redux store to the React components. Maybe there is a way to fix it, but I didn’t find one.

Thankfully I found the excellent Enzyme React testing utility which is very well documented and also works with Redux. For the rest of this article I will use Enzyme.

Note: I do go through this in more detail later, but if you are an expert and want to just ‘look at the code’ to see how ReactTestUtils and Enzyme handle Redux connected components then look at ConnectedClasses.test.js. I have commented on the ReactTestUtils tests that fail.

Testing React components that use Redux

a: single, non-nested components

I created some very simple classes to check out the best way to test React components that use Redux. Let’s start with the simplest – InnerConnect. The content of this module is:

 
import React, {Component} from 'react';
import { connect } from 'react-redux';

export class InnerConnect extends Component {
    render () {
        return (
            <h2>
                Inner.dispatch {
                    this.props.dispatch === undefined
                        ? 'undefined' : 'defined' }
            </h2>
        );
    }
}
export default connect()(InnerConnect)

Note: For those not familiar with Redux the code above has the minimum it needs to get access to Redux. The connect()(InnerConnect) on the last line adds the property dispatch to the props. Normal use of Redux by a component would include more Redux code.

Redux’s documentation on Writing Tests suggests you export the undecorated React component class as well as the default export, which is the class decorated by the connect() method.

Now let me give you a segment of the file ConnectedClasses.test.js which does a simple test with and without linking to Redux (or in this case redux-mock-store).

//... various other imports – the ones the tests below rely on include:
import expect from 'expect';
import React from 'react';
import { mount } from 'enzyme';
import { Provider } from 'react-redux';
import configureStore from 'redux-mock-store';
// ... but especially the import below
import InnerConnectConnected, {InnerConnect} 
    from '../../localSrc/InnerConnect';
describe('mount', () => {
    it('InnerConnect, no connect', () => {
        const wrapper = mount(<InnerConnect />);
        expect(wrapper.find('h2').length).toBe(1);
        expect(wrapper.find('h2').text())
           .toBe('Inner.dispatch undefined');
    });
    it('InnerConnectConnected, with connect', () => {
        const mockStore = configureStore([]);
        const store = mockStore({});
        const wrapper = mount(<Provider store={store}>
            <InnerConnectConnected />
        </Provider>);
        expect(wrapper.find('h2').length).toBe(1);
        expect(wrapper.find('h2').text())
            .toBe('Inner.dispatch defined');
    });
    //... more tests left out

The first test tests the InnerConnect class without the connect wrapper. The second test tests the default export, which contains the class decorated by the Redux connect method. In the tests I use enzyme’s mount, which does a full DOM rendering. You can see the first test shows this.props.dispatch is undefined, while the second test shows that this.props.dispatch is defined.

The important point is that if I had used the default export of InnerConnect (referred to as InnerConnectConnected in the code above) and had NOT used <Provider> to supply a store, then the test would have failed with the error:

Invariant Violation: Could not find “store” in either the context or props of “Connect(InnerConnect)”. Either wrap the root component in a <Provider>, or explicitly pass “store” as a prop to “Connect(InnerConnect)”.

Note: I could have used enzyme’s shallow render (there is an example of that inside ConnectedClasses.test.js), and that would have worked. However, enzyme’s mount or render methods give a more useful output in this case. I tend to use mount, but that is just a minor preference on my part.

b: Nested components

[Image: ReactKanban nested components diagram]
OK, that was pretty straightforward, but life in a real application is never as simple as that. React components are designed to be nested. The Kanban application, taken from the excellent book “Pro React”, uses a set of nested components to show multiple lists of multiple cards, where each card may have multiple tasks (called CheckList in the diagram).

This is pretty normal for a React.js application. Also, if you use Redux then many of the React components will need to access Redux. Once this happens then things get more complicated.

Let us look at a second example React component called OuterConnect. The module is shown in full below:

 
import React, {Component} from 'react';
import { connect } from 'react-redux';
import InnerConnect from './InnerConnect';

export class OuterConnect extends Component {
    render () {
        return (
            <div>
                <h1>
                    Outer.dispatch {
                        this.props.dispatch === undefined
                            ? 'undefined' : 'defined' }
                </h1>
                <InnerConnect />
            </div>
        );
    }
}
export default connect()(OuterConnect)

You can see this is very similar to the InnerConnect, but it has one big difference – it has the InnerConnect component inside its render method – this is known as nesting. At this point the following happens:

  • Shallow render still works without the Redux <Provider>, as long as you render the undecorated class, in this case OuterConnect.
  • Full DOM render will NOT work without the Redux <Provider>. This is because the nested component, i.e. InnerConnect, will fail because it uses the connect method, which expects to find a Redux store.

As Full DOM rendering, i.e. enzyme’s mount method, often provides a more useful output, supplying the store via the Redux <Provider> class is the most sensible way to go.
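
To make that concrete, here is a hedged sketch of a shallow-render test – it is not copied from ConnectedClasses.test.js, so the real tests (and the import paths) may differ, but it shows that the undecorated OuterConnect renders without a store because the nested InnerConnect is never actually rendered:

import React from 'react';
import expect from 'expect';
import { shallow } from 'enzyme';
import { OuterConnect } from '../../localSrc/OuterConnect';

it('OuterConnect, shallow render, no store needed', () => {
    const wrapper = shallow(<OuterConnect />);
    // Only the top level is rendered, so the connect() inside InnerConnect
    // never runs and no Redux store is required
    expect(wrapper.find('h1').text()).toBe('Outer.dispatch undefined');
});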

Note: If you want a full list of all the test options then look at the test ConnectedClasses.test.js. The tests that fail are set to be skipped and have comments to say how they fail.

I have done a few tests on parts of the Kanban components. You can find them all in the directory Tests/components. I use a mixture of shallow and mount rendering.

Testing events when Redux is involved

React components need to respond to events, like clicking a button. If Redux isn’t involved then these events are normally handled by changing this.state inside the component. However, once a Redux store, or any other sort of store pattern, is used, the state is mainly (always?) handled by the store. This means that simulating an event is likely to trigger a Redux action.

You could provide a full implementation of the store and (I assume, because I haven’t tried it) the action would ripple through and cause whatever change was expected in the html. However, I personally use redux-mock-store so that I can decouple the actions. Here is a simple example taken from Card.test.js.

it('call close card event', () => {
    const identity = function (el) { return el; };
    // Wrap in DnD test context
    const CardConnectedWithDnD = 
       wrapInTestContext(CardConnected);
    const mockStore = configureStore([]);
    const store = mockStore({});
    const card = createCard();
    const wrapper = mount(
        <Provider store={store}>
           <CardConnectedWithDnD 
               {...card} 
               connectDropTarget={identity}/>
        </Provider>);
    expect(store.getActions().length).toBe(0);
    wrapper.find('div .card__title').simulate('click')
    expect(store.getActions().length).toBe(1);
    expect(store.getActions()[0]).toEqual({ 
           payload: { cardId: 0 }, 
           type: 'toggle card details' });
});

As you can see, simulating the click on the card’s title causes an action to be dispatched, which the redux-mock-store records. The test then checks that the action has happened and has the right content.

Quick aside – other decorated components

While this article is mainly about components using Redux, there are other types of components that use data passed down to them. The example I had to fix was the use of DragAndDrop, but I expect there are others. These need a similar approach to the way we handle nested Redux usage. However, the actual way you inject the data varies with the library you are using.

Do have a look at Card.test.js and List.test.js with the helper class WrapTestDnD.js .

Conclusion

I did have to think about the best approaches to testing nested React components that use Redux. Having built myself a useful set of example tests in the shape of the ConnectedClasses.test.js tests, I thought it was worth publishing a detailed run-through of how things work.

I hope this helps others with their React.js developments. Please feel free to leave a comment if you have more information to add, or especially if you spot an error on my part.

Happy coding.

Adding mocking to React.js Unit Tests

06/24/2016

I am continuing to build up my set of templates to improve the tools used to develop, test and build React.js front-ends (see original article introducing my ASP.NET React.js templates). In my first iteration I created a project to contain my Unit Tests, but it only had five very basic tests. In this article I describe how I beefed up the Unit Tests, especially around mocking out ES6 modules.

The aims of this article

  • Why I chose the JavaScript mocking approach called inject-loader.
  • A detailed summary of how to use inject-loader, as it wasn’t totally obvious to me.
  • A quick aside on mocking JavaScript Promises.
  • Links to an open-source project which contains an example of Unit Testing with Mocks.

I’m going to assume you at least know what React is and what Unit Testing is and why Unit Testing is useful. If you don’t know React try these article links [1, 2, 3]. I would also really recommend the book “Pro React”, which helped me a lot. On Unit Testing  and mocking try this overview article.

Note: One word of warning:  React.js is still changing and articles and books get out of date. This article and open-source AspNetReactSamples on GitHub uses React version 15.0.2 (see package.json for full list).

Setting the scene

I described my Unit Test environment in the first article, but in summary I am using the Mocha test framework run by the Karma test runner (read section 3 of the original article for why I, and others, don’t use the suggested Jest test framework). The whole React.js build environment uses Babel and WebPack extensively.

I had recently converted one of my React web apps over to use Redux and I wanted to create some proper Unit Tests. The React.js application, called Kanban (which comes originally from the book “Pro React”), uses async accesses to the server to fetch and save the data. This was pretty fundamental to how Kanban works and I needed to intercept this if I was going to test certain parts of the code.

I could have used a specific library to intercept the “fetch”, but I also needed to mock some other parts so I wanted a generic way of replacing ES6 modules with some stub or test code – this is called mocking.

Down the rabbit hole (again!)

In the first article I said that when I researched setting up the build process it “felt like I had fallen down Alice in Wonderland’s rabbit hole!”. Well, it felt the same when I tried to find a method to mock ES6 modules (but thankfully not quite as deep as with Webpack!).

I found this stackoverflow question and answers, which mentioned a number of ways, like rewire/babel-plugin-rewire library,  proxyquire and inject-loader. I spent a frustrating day trying to get babel-plugin-rewire to work to no avail before I swapped to try inject-loader.

Thankfully I found an excellent article called ‘Mocking ES6 import for Tests‘ by James Tease. This looked like just the thing, especially as he gives an example which uses Redux.

Truth be told, I was still a bit confused and couldn’t get inject-loader to work. I didn’t find the inject-loader Readme file totally enlightening, as I thought I needed to link inject-loader into my karma/webpack files (you don’t – all you need to do is load inject-loader, which you do by adding “inject-loader”: “2.0.1” to your package.json and updating your packages).

Mocking example

In the end I created a really simple test of mocking to check I understood inject-loader. First I will list three small modules that I created for the test. You will see that the final module, called ‘OuterFunction’, imports and uses the other two (you can find the actual files in this directory):

//file InnerFunction.js
export default function inner() {
    return 'Inner Function';
}
//file InnerValue.js
export default 42;
//file OuterFunction.js
import InnerFunction from './InnerFunction';
import InnerValue from './InnerValue';

export default function outer() {
    return {
        innerFuncValue: InnerFunction(),
        innerValue: InnerValue
    }
}

This is a very contrived example, but it really helped me understand how to use inject-loader. Here is a section of my mocha test called ‘TestMocking.test.js‘:

import expect from 'expect';

import OuterFunction from '../../localSrc/OuterFunction';
import innerMock from '../../mocksEtc/MockInnerFunction'

describe('Test Mocking example', function () {

    it('check normal operation works',
    () => {
        var result = OuterFunction();
        expect(result.innerFuncValue).toEqual('Inner Function');
        expect(result.innerValue).toEqual(42);
    });

    it('mock InnerFunction with local function',
    () => {
        const localFunc = () => { return 'local mock'};
        const inject = require('inject?./InnerFunction!../../localSrc/OuterFunction');
        const outerFunctionWithMock = inject({
                './InnerFunction': localFunc
            }).default;
        var result = outerFunctionWithMock();
        expect(result.innerFuncValue).toEqual('local mock');
        expect(result.innerValue).toEqual(42);
    });
    //... lots more tests

The key part is the require statement and the inject call that follows it, so let me take you through each piece, as this is where I got confused.

The require statement

In one of my emails with James Tease he said using inject-loader “looks weird” if you are used to ES6 imports. You can see what he means when you look at the require statement above. It is odd to see a require statement in ES6 code, but that is how inject-loader works, i.e. it alters the module as you load it. The format of the require parameter string is important, so let me go through it one part at a time:

  1. ‘inject’
    The start of the string makes the inject-loader take over the load of the module.
  2. ‘?./InnerFunction’
    This says which module inside ‘OuterFunction’ we want to replace. It starts with a question mark followed by the EXACT same string used in the import statement in ‘OuterFunction’. Note: this part is optional – you can leave it out if you plan to replace ALL of the imports (example later).
  3. ‘!../../localSrc/OuterFunction’
    This says which module you want to load, in this case the ‘OuterFunction’ module. The part starts with an exclamation mark and then has the correct file reference to access the module from where the test is being run.

Be warned: if you get part three wrong then you will get an error, but if you get part two wrong then it just doesn’t work.

The inject call – mock out the module(s)

This is the part that replaces the imported module inside the module you are loading. In this example we are replacing the ‘InnerFunction’ module inside the ‘OuterFunction’ module. Again, there are a few things to watch out for:

  • Replacing a module.
    The content of the string ‘./InnerFunction’ is again crucial. It must be the EXACT same string used in the import statement inside the module you are loading, in this case the ‘OuterFunction’ module. The part after the colon is the new content. You can give it a whole module, or in this case just a function, as that is all the ‘InnerFunction’ module contains.
    Note: Do have a look at James Tease’s example ‘Mocking ES6 import for Tests‘ as he sends over an object. My TestMocking example Unit Test shows different ways to replace a module.
  • Getting the right exports from the loaded module.
    You have loaded a module using require. Because of this you need to know how to access the parts that this module exports. If it’s an ES6 module then there is a convention for that. In the case of the ‘OuterFunction’ module it exports just one function as its default: therefore I can access it by adding .default to the end of the statement. James Tease gives an example of accessing a specific class in his article, Mocking ES6 import for Tests.

More examples

There are lots of different ways you can use inject-loader. I have copied the following section from the inject-loader Readme file, so you can see things all in one place.

// If no flags are provided when using the loader then
// all require statements will be wrapped in an injector
MyModuleInjector = require('inject!MyStore')
MyModule = MyModuleInjector({
  'lib/dispatcher': DispatcherMock,
  'events': EventsMock,
  'lib/handle_action': HandleActionMock
})

// It is also possible to only mock only explicit require
// statements via passing in their path as a flag
MyModuleInjector = require('inject?lib/dispatcher!MyStore')
// only 'lib/dispatcher' is wrapped in an injector
MyModule = MyModuleInjector({'lib/dispatcher': DispatcherMock})

// this also works for multiple flags and requires
MyModuleInjector = require('inject?lib/dispatcher&events!MyStore')
// only 'lib/dispatcher' and 'events' are wrapped in injectors
MyModule = MyModuleInjector({
  'lib/dispatcher': DispatcherMock,
  'events': EventsMock
})

// you can also explicitly exclude dependencies from being injected
MyModuleInjector = require('inject?-lib/dispatcher!MyStore')
// everything except 'lib/dispatcher' is wrapped in an injector
MyModule = MyModuleInjector({
  'events': EventsMock,
  'lib/handle_action': HandleActionMock
})

To reiterate, it is important to get the strings in the right format:

  • When referring to the module you want to replace then you must use the EXACT file reference used in the import inside the module you are loading. If you get this wrong it doesn’t error, but it just doesn’t work.
  • The file reference for the module you are loading should be the correct reference to reach the module from where your test is located in the file system. If you get this wrong then you get the normal “file not found” type error.

Also, the format of the inject has to match what the calling module expects. For mocking out simple modules that use a default return, you can just inject something that matches that default, e.g. a function in the case of my ‘InnerFunction’ example. For mocking out more complex modules with multiple exports, it’s often easier to create a mock of the whole module.

Note: Have a look at my file  ‘TestMocking.test.js‘ which has some really simple examples. If you want to see an example of the mocking used for real Unit Testing then have a look at CardActionCreators.tests.js, which tests the ActionCreators for a card in the Kanban app.

Quick Aside – mocking Promises

My main aim was to test some of the code that accessed data on the server via the “fetch” library. The “fetch” command is async and uses JavaScript Promises. I looked at how to mock a promise and found these articles [1, 2]. I tried a few approaches that these two articles suggested, but it definitely wasn’t simple. As Jani Hartikainen says in his article,  “dealing with promises in unit tests is a hassle“.

In the end I looked at how Promises were used in the Kanban application and found it only used the then(success, failure) function of the Promise. I therefore created a really simple MockPromise that has just the then function. You can see me using the MockPromise in ReduxDispatcher.test.js and CardActionCreators.tests.js.
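
To give you an idea of what that looks like, here is a hedged sketch of such a MockPromise – the real one in the sample code may differ, but it shows how little is needed when the code under test only calls then(success, failure):

//file MockPromise.js (sketch)
export default class MockPromise {
    constructor(succeed, result) {
        this._succeed = succeed;   // true => call the success callback
        this._result = result;     // the value handed to the callback
    }

    then(success, failure) {
        if (this._succeed) {
            success(this._result);
        } else if (failure) {
            failure(this._result);
        }
        return this;               // allows simple chaining of then calls
    }
}

A test can then, for instance, make a mocked KanbanAPI.fetchCards return new MockPromise(true, someTestCards) and the code under test behaves as if the server call had succeeded.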

Conclusion

I am slowly improving the dev, test and build tools in my React samples. In this article I have improved my ability to Unit Test a larger amount of my code base by adding mocking. Finding the right way to do that was a bit of a struggle, as the tools, e.g. Babel, Webpack etc., are changing, so some articles you read are out of date.

I do like the inject-loader, but its syntax is not obvious and there isn’t a lot of documentation on how to use it. Hopefully this article and the associated example code will help you to understand and use the inject-loader to make your Unit Tests more effective. Please feel free to point out other mocking methods that have worked for you, preferably with some example code that I and others can look at.

Happy coding.

Using a Redux store in your React.js application

06/08/2016

This article is about the benefits of using the Redux library in your React application, followed by a detailed description of how to use Redux. These descriptions use code snippets from an open-source sample application which contains a React application called Kanban inside an ASP.NET Core web application.

This article follows on from my initial article Templates for building React.js front-ends in ASP.NET Core and MVC5 where I describe a number of example web applications which show how to create and build React.js applications inside various ASP.NET web sites.

When I created the samples I used the Kanban React app from the excellent book “Pro React” in two of the samples, which used a library called Flux to simplify the handling of state. However, in my review of the Pro React book on Amazon I said that the only downside of the Pro React book was that it used Flux instead of the newer Redux library to implement stores. The author saw my review and left a comment to say an alternative version of Chapter 6 using Redux was now available.

This made me think – how about I change the Kanban app in one of the samples to use Redux, using Cássio’s new chapter as a guide? Well, I have done just that and this article is all about how to use Redux in your React application.

UPDATE: It turns out Cássio has also upgraded the Kanban app to Redux – see the new branch chapter6-redux on the Pro React github site. While seeing that would have saved me doing my own refactor I would not have learnt as much as I did by doing the work myself. However it does give me another version I can look at to see if I could have done things better. I comment on this later.

The aims of this article

  • To explain why using Redux is useful in a React.js application.
  • Describe the architecture of a React.js application that uses Redux.
  • Provide a point-by-point description of the different parts of a Redux/React application. That makes the article very long, but hopefully more useful.
  • Provide links to an example Redux/React application on GitHub.

Supporting material

This article is backed up by a number of ASP.NET projects which I have made available as an open-source Visual Studio solution, AspNetReactSamples, on GitHub. This currently contains four projects, two of which are of particular interest.

While these examples are aimed at ASP.NET MVC all the React code will work in any system supporting Node.js. You can try all the React code using Visual Studio Code, which is a free tool that runs on multiple platforms.

In addition, React/JavaScript samples are available from the code associated with the “Pro React” book, including a version of the Kanban application that does not use a store but instead uses the more basic hierarchical approach (see below for an explanation of the difference).

UPDATE: As I said earlier Cássio has added a chapter6-redux version. It is very interesting to compare our two implementations. I found one area where I think Cássio’s implementation is better (I point it out later), but on the whole I am happy that my solution is pretty similar, especially as Cássio is the expert in this area.

NOTE: All the code in this article and supporting material is using the React JSX coding style and the new ECMAScript 6 (ES6) JavaScript format. If you aren’t familiar with these styles (I wasn’t until a few months ago) then it will look strange to you at the start. I haven’t found any problems embracing either JSX or ES6 format, in fact I like it a lot, but if you haven’t seen it before it looks wrong.

Introduction to why Redux is useful

(Skip this section if you already know React)
I’m going to assume you at least know what React is. If you don’t then I recommend reading the article Thinking in React and maybe some of these articles [1, 2, 3]. I would also really recommend the book “Pro React”, which I think is excellent.

[Image: ReactKanban hierarchical view]
React is very hierarchical, with various components put together in a parent-child relationship to create the final UI. In small applications you can pass data downwards using React’s this.props and return data changes back up the hierarchy via callback functions. See the diagram above of the Kanban app using this ‘chaining’ approach.

In small React applications using a ‘chaining’ approach is fine. However, when you are building something bigger, this approach starts to cause problems. These problems have resulted in the development of what is generically known as a ‘store’ to handle the changing data. The benefits of using a store over a ‘chaining’ approach are:

  1. In the ‘chaining’ approach the data access and business logic is spread throughout the various React components. As you will see later a store takes data access and business logic code out of the components and into separate modules. This leaves the React parts handling just the UI and lets another part of the application handle the changing data. This makes for a much better separation of concerns and hence the application is easier to understand and refactor.
  2. When using a ‘chaining’ approach then each level in the hierarchy passes the callbacks down to the next lower levels. This makes refactoring React components difficult because they are inextricably linked through these chains. See this version of the Kanban application that does not use a store and note the passing of callbacks in most of the components.

In a store based implementation the ‘store’ takes over the handling of data that can change. All the React components then link to the store to a) trigger an action on an event and b) get updated if something changes.

There are many versions of these stores: Flux was one of the first, but the Redux library has proved to be a good alternative. In my opinion Redux is exceptionally good at separation of concerns, with a clear break between the Store that holds the data, the code that changes the store (Reducers) and the Actions that the components call to ask for a change.

The flow of control in the Kanban app using Redux is shown below, with the various steps numbered 1, 2, 3:

[Image: ReactKanban Redux view]

The text below describes each step using two words that Redux uses, Action and Reducer, which I explain in more detail later.

  1. Action: When an event happens inside a React component it now calls an Action, via a Redux function called dispatch. An action returns an object which always contains an action type.
  2. Reducer: Redux calls all the reducers, which then use the action type to decide which reducer method should be run. The reducer’s job is to update the Redux store for this action type, using any extra data that the action may have included (a small sketch of a reducer is shown after this list).
  3. Update: Because the store has changed Redux calls a specific function in any component that registered with it. This function allows part/all of the new store values to be copied into the component’s props. Changing the props automatically causes the component to call its render function.
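
As a hedged illustration (this is not the actual Kanban reducer, and the card’s id property is just assumed for the example), a reducer handling the toggleCardDetails action shown later in this article might look like this:

export default function cardsReducer(state = [], action) {
    switch (action.type) {
        case 'toggle card details':
            // Return NEW objects - reducers never mutate the existing state
            return state.map(card =>
                card.id === action.payload.cardId
                    ? Object.assign({}, card, { showDetails: !card.showDetails })
                    : card);
        default:
            // Unknown action types must return the existing state unchanged
            return state;
    }
}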

Details on the parts of a Redux solution

In this long section I will go through each of the parts you need to add/change in your React application to use Redux. I will link to the actual code in the example application I refactored, which means you can try out the final application.

1. The Redux Store

The simplest part about Redux is the store itself. Its job is to hold the data and it has a small list of API commands that make things happen. In this case we will use two commands to set up and create the store. Below is the total content of the file store/reduxStore.js which creates the single store:

import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import throttle from 'redux-throttle';
import reducers from '../reducers/index';

const defaultThrottleOption = { // https://lodash.com/docs#throttle
  leading: true,
  trailing: false
}
const throttleMiddleWare = throttle(500, defaultThrottleOption);
const reduxStore = createStore(
   reducers,
   applyMiddleware(thunk, throttleMiddleWare) 
);

export default reduxStore;

The Redux store is designed so that you create just ONE store, and the code above is my code to do that. Because Kanban, like most sensible apps, uses async API accesses we need the slightly more complex setup using the Redux applyMiddleware feature. This allows us to add special handling for certain features such as async functions. The two features I add are:

  • The ‘redux-thunk’ library, which provides async handling (described later)
  • The ‘redux-throttle’ library, which provides a debounce or throttle feature which I need for some of the functions that get fired while a drag-and-drop move is under way.

2. Actions

Actions are what your various React components call, via Redux’s dispatch function, to get anything done. Some of your actions may request data, e.g. fetchCards(), and some may do some more business-like logic, e.g. updateCardStatus(cardId, listId). The point is that all data access or business logic is placed in the action rather than in the React component where the event happened. That way the React components can focus on the display of the data and the actions can focus on the logic to change the data.

Note: The removal of this.state from a React component allows you to use a simpler form of React component called a “stateless function”. These are well worth looking at for simple React components, like buttons, or setting styles etc. This article has some more information on stateless functional components.
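
To give a feel for them, here is a minimal sketch of a stateless functional component (the component name and props are illustrative, not taken from the Kanban app):

// A stateless functional component is just a function of its props –
// no this.state and no lifecycle methods (names here are illustrative)
const CardTitle = ({ title, onClick }) => (
  <div className="card__title" onClick={onClick}>
    {title}
  </div>
);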

In any reasonable sized applications you will have multiple action files. In Kanban there are two: actions/CardActionCreators.js and actions/TaskActionCreators.js. In the Kanban app’s case the split is fairly arbitrary, and I think the CardActionCreators.js file could be split further.

Now down to the detail. In all cases an action will, eventually, return an object with at least a type variable, which contains a unique type. The simplest form is just that – a function returning a JavaScript object. Here is an example:

  toggleCardDetails(cardId) {
    return {
      type: constants.TOGGLE_CARD_DETAILS,
      payload: {cardId}
    };
  }

Note: As you can see in the example above, the type is taken from the constants object. The norm is to create a file which holds all the action types and always refer to a type via this file. This makes sure you don’t make a mistake when typing an action type in different places in the application.
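
For example, a constants module can be as simple as the sketch below (the action types shown are the ones used in this article; the real file in the Kanban app may hold them differently):

// constants.js – one place that defines every action type used in the app
export default {
  TOGGLE_CARD_DETAILS: 'TOGGLE_CARD_DETAILS',
  FETCH_CARDS: 'FETCH_CARDS',
  FETCH_CARDS_SUCCESS: 'FETCH_CARDS_SUCCESS',
  FETCH_CARDS_ERROR: 'FETCH_CARDS_ERROR'
  //... and so on for the other action types
};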

The second form is a function that returns another function with a specific signature. To use this form you need the store to include the redux-thunk middleware you saw in section 1. This function form can be used in two cases:

a. Handling actions that use async functions

An async function is a function that kicks off a task and returns immediately before the task has finished, e.g. a request for data from an external source. Once the task finishes it will return a new result that Redux needs to pick up.

To handle this async case your initial action needs to return a function, not an object. Here is an example which uses Kanban’s dispatchAsync function to call dispatch with the correct success/failure when the async task ends:

  fetchCards() {
    return (dispatch) => {
      dispatchAsync(KanbanAPI.fetchCards(), dispatch, {
        request: constants.FETCH_CARDS,
        success: constants.FETCH_CARDS_SUCCESS,
        failure: constants.FETCH_CARDS_ERROR
      });
    }
  },
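
I won’t reproduce the Kanban app’s dispatchAsync helper here, but to make the pattern clearer this is a rough sketch of the shape such a helper could take (this is my assumption, not the app’s actual code):

// Sketch of a dispatchAsync-style helper (illustrative only).
// It fires the 'request' action immediately, then the 'success' or 'failure'
// action when the promise settles, merging in any extra payload supplied.
function dispatchAsync(promise, dispatch, actionTypes, extraPayload) {
  dispatch({ type: actionTypes.request, payload: extraPayload });
  return promise.then(
    response => dispatch({
      type: actionTypes.success,
      payload: Object.assign({ response: response }, extraPayload)
    }),
    error => dispatch({
      type: actionTypes.failure,
      payload: Object.assign({ error: error }, extraPayload)
    })
  );
}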

b. When the returned object needs to be calculated

If you need to execute some JavaScript code before you return a result then you need to use the function return format. The action below needs to extract the card id and status before it can persist the card’s change in status. Notice the second parameter, getState, which gives you access to the store’s content.

  //Warning: This type of usage does not support server-side rendering.
  //see this SO answer for more on this http://stackoverflow.com/a/35674575/1434764 
  persistCardDrag(cardProps) {
    return (dispatch, getState) => {
      let card = getCard(getState().cards, cardProps.id)
      let cardIndex = getCardIndex(getState().cards, cardProps.id)
      dispatchAsync(KanbanAPI.persistCardDrag(card.id, card.status, cardIndex), dispatch, {
        request: constants.PERSIST_CARD_DRAG,
        success: constants.PERSIST_CARD_DRAG_SUCCESS,
        failure: constants.PERSIST_CARD_DRAG_ERROR
      }, {cardProps});
    }
  },

NOTE: see the comment at the top of this code, which points out that this technique for accessing the store does not work when doing server-side rendering. Because this application is not using server-side rendering this approach is OK here. However please read this SO answer for more on this subject.

3. Reducers

Redux defines a very specific approach to how the store is changed. It does it through what it calls ‘reducers’. A reducer is a function that takes the current state of the store and a specific action object and produces a new, immutable (unique) store content. I think of reducers as using a repository pattern, i.e. some code that turns a client-centric command, e.g. updateCardStatus(cardId, listId), into some sort of change to the data in the store.

The reducers are only called by the Redux store, i.e. you don’t call them yourself. It is the action type provided by the Action that defines which reducer method is called to alter the store. There are often multiple reducers, with different reducers looking after different bits of the stored data. In my case I have a reducers/cardReducer.js and a reducers/draftCardReducer.js. Redux calls all the reducers and lets them decide what should be run.

The normal pattern for a reducer is to use a switch statement on the type variable in the action object (see code below) to select the right method to run. While you could use other approaches, using a switch does make it obvious what is happening when looking at the code. Here is the start of my cardReducer.js:

import constants from '../constants';
import { getCardIndex } from '../cardUtils'
import update from 'react-addons-update';

const initialState = [];

const cards = (state = initialState, action) => {
    switch (action.type) {
      case constants.FETCH_CARDS_SUCCESS:
        return action.payload.response;
      /*
       * Card Creation
       */
      case constants.CREATE_CARD:
        return update(state, {$push: [action.payload.card] })

      case constants.CREATE_CARD_SUCCESS:
        const cardIndex = getCardIndex(state, action.payload.card.id);
        return update(state, {
          [cardIndex]: {
            id: { $set: action.payload.response.id }
          }
        });
           //... lots more case statements
      default:
        return state;
    }
}

export default cards;

A few things to note:

  • The function is called when the store is first created with an undefined state. This results in the state being set to the initialState, in this case an empty array, and because the action type will not match any of the cases the default path will return that state.
  • In each place where we change the store we use the update command from the ‘react-addons-update’ library. This does an immutable update, i.e. it creates a brand new object while leaving the old state as it was. That is very important to Redux, but I don’t have the space to go into why – read this for some reasons why Redux does this.
  • The name of the function, in this case cards, is important as it is the name that is given to the part of the store that this reducer looks after. That means that the command getState().cards will return the data that this cardReducer deals with.
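
Behind the scenes the individual reducers are combined into the single root reducer that the store was created with, and the keys given to Redux’s combineReducers function become the top-level names in the store. A minimal sketch of the reducers/index.js file imported in section 1 might look like this (I am assuming the draftCard name; the real file may differ):

// reducers/index.js – combine the individual reducers into one root reducer.
// The keys used here set the names of the parts of the store,
// e.g. getState().cards is handled by the cards reducer.
import { combineReducers } from 'redux';
import cards from './cardReducer';
import draftCard from './draftCardReducer';

export default combineReducers({
  cards,
  draftCard
});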

4. Making the store available

Because Redux has only one store we need to make it available to any component that needs to call an Action or be updated if the store changes. The first step is to make the store known to React at the top level. For that we use the ‘react-redux’ class Provider, which we apply as an outer class on our app’s main entry point. Here is a link to my App.js, which also uses a router etc., but below is a simple example to make it more obvious what is going on:

import React from 'react'
import { render } from 'react-dom'
import { createStore } from 'redux'
import { Provider } from 'react-redux'
// myReducers and YourApp are placeholders for your own root reducer and top-level component
import myReducers from './reducers'
import YourApp from './YourApp'

let store = createStore(myReducers)

render(
  <Provider store={store}>
    <YourApp />
  </Provider>,
  document.getElementById('root')
)

5. Accessing the store in React components

5.a The standard approach

Having used the Provider class at the top level we can access the store using Redux’s connect function. I will give an example of how I set up the use of Redux in my KanbanBoardContainer.js file with some code snippets. The most-used approach is to supply two functions to the connect function:

  1. mapStoreToProps: this is called when the store changes and copies the part of the store that it needs into a local prop, or multiple props. In this case we read all the cards into this.props.cards.
  2. mapDispatchToProps: this allows you to set up functions that will call the actions you need, linking them into Redux via Redux’s dispatch function. In this case the class only needs the fetchCards function, and that gets mapped to this.props.fetchCards inside your React component.

Note: Redux calls the first function mapStateToProps, rather than my mapStoreToProps. I changed the name as ‘state’ has a specific meaning in React and the name ‘mapStateToProps’ really confused me at the start, as I thought it was referring to React’s this.state.

Below is the end of the components/NewCard.js file from the Kanban app. You can see the two functions and how they are used to a) subscribe to Redux’s updates if the store changes and b) make some actions available inside the component.

//... the start of the class has been left out
function mapStoreToProps(storeState) {
  return {
    draft: storeState.draftCard
  }
}

function mapDispatchToProps(dispatch) {
  return {
    updateDraft: (field, value) => 
        dispatch(CardActionCreators.updateDraft(field, value)),
    addCard: (card) => 
        dispatch(CardActionCreators.addCard(card)),
    createDraft: (card) => 
        dispatch(CardActionCreators.createDraft(card)),
  }
}

export default connect(mapStoreToProps, mapDispatchToProps)(NewCard)

Once you have done this an action can be called via the props, e.g. this.props.addCard(this.props.draft). Also, if the Redux store is changed then mapStoreToProps is called and, in this case, updates the this.props.draft variable, which in turn causes React to run the render method inside this component.

Tip: Quicker linking to actions

A variation of the mapDispatchToProps is to use Redux’s bindActionCreators function. This is a quick shorthand that binds all the actions in an action creator to this.props.actions.<youraction>. You can see an example in the components/CheckList.js.

I used the bindActionCreators function in CheckList because that React component called all of the actions defined in TaskActionCreators, so it was easier to use the bindActionCreators function. I tend to hand-code when I use a subgroup of the actions so that I’m not linking to an action I don’t use. Maybe I’m being too picky – just using Redux’s bindActionCreators function is certainly easier. You can decide what you want to do.
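
As a rough sketch (assuming the TaskActionCreators module from earlier), the bindActionCreators version of mapDispatchToProps looks something like this:

import { bindActionCreators } from 'redux';
import TaskActionCreators from '../actions/TaskActionCreators';

// Binds every action in TaskActionCreators, so inside the component
// they can be called as this.props.actions.<actionName>(...)
function mapDispatchToProps(dispatch) {
  return {
    actions: bindActionCreators(TaskActionCreators, dispatch)
  };
}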

5.b The minimal approach to calling an action

Sometimes you only want to access, say, one action, and going through all the mapping seems overkill. In this case you can use react-redux’s connect function, but without any parameters. In that case the Redux function dispatch is added as this.props.dispatch and you can call an action using it, e.g.

this.props.dispatch(
   CardActionCreators.updateCardStatus(dragged.id, props.id))

I do this in the components/List.js component.

5.c The minimal approach to accessing the store

It is possible (but highly unlikely – it is not good practice) that you want to access the store directly. You can use something called the React context which, if you click the React context link, you will see it has a ‘use with care’ notice on it, so be warned.

However the react-redux Provider class we used to make the store available throughout the application places a link to the store in this.context.store. You can access it in any React component by adding the following code to YourClass (very similar to propTypes, but for the React context):

<YourClass>.contextTypes = {
  store: React.PropTypes.object
}
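
Once that is declared you can use the store directly inside the component’s methods, for example (illustrative only – CardActionCreators would need to be imported as usual):

// Read from, or dispatch to, the store via the React context
const cards = this.context.store.getState().cards;
this.context.store.dispatch(
    CardActionCreators.toggleCardDetails(cardId));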

I thought I needed to do this in component/EditCard.js and implemented the use of context. Now I know more I might have used the normal mapStoreToProps approach, but I left the EditCard code as it is, as it is always good to have an example of using the context to refer back to.

Update: Cássio’s EditCard.js implementation is, in my opinion, much better. He uses the mapStateToProps approach with access to ownProps. Here is his code in the EditCard.js file:

const mapStateToProps = (state, ownProps) => (
  {
    draft: state.cardDraft,
    card: getCard(state, ownProps.params.card_id)
  }
);

Unit Testing

As I said at the start, I think Redux does a great job of separating each of the parts of the problem into separate areas. In addition the Actions and Reducers are completely self-contained and know nothing about the Redux store. As the Actions and Reducers contain some of the key code, this makes Unit Testing much easier.

Rather than make this article even longer I refer you to Redux’s useful Writing Tests section.

UPDATE: I have started to extend the Unit Tests in the sample application, especially around testing the Redux version. You can read about how I mock out modules in the new article Adding mocking to React.js Unit Tests and see my current React Unit Tests in the ReactWebPack.Core project under the JsUnitTests directory.

Conclusion

Firstly, well done for getting this far, as it is a long article.

Like every new technology I approach, Redux was a struggle to understand at the start, but once I began writing some code, finding it didn’t work and fixing it, it began to come together. GitHub tells me it was only four days, but it seemed longer as there were a number of new approaches I hadn’t seen before. Hopefully this article, which was partly written as I was doing the upgrade, will help you on your own journey with Redux.

From my experience of using Redux I think it is a very nice tool. I especially like its very clear separation of each different aspect: store, actions and reducers. That means I only have to think about one problem space when I am working in each part – which frees me to be much more focused and productive.

I hope this has been helpful. If you spot anything I misunderstood (I am still learning!) then do let me know.

UPDATE: I have written another new article called Adding mocking to React.js Unit Tests which adds a new capability to my Unit Testing tools. This is especially relevant to the Redux version of the Kanban application as some of the tests I have created are aimed at testing the action parts of the Redux system.

Happy coding!

 

Templates for building React.js front-ends in ASP.NET Core and MVC5

05/19/2016

I have been interested in React, Facebook’s library for building front-end User Interfaces, for some time. Having worked on a Single Page Application (SPA) some years ago that used Backbone I was very aware of the pain point React is aimed at, i.e. the problem of building and debugging complex views with lots of interactions (see Why React?).

Recently I got some spare time and I decided to create some sample ASP.NET MVC applications that used React so that I was clear on how I could use it in the future. Well, it turned out WAY more complex than I thought, but I learnt a lot!

This article is the first in the series I am writing about creating React.js samples that are a good starting point for real-world, production-ready applications with full React.js build, test, and deploy capabilities. The other articles in the series are:

  1. This article: Templates for building React.js front-ends in ASP.NET Core and MVC5
  2. Using a Redux store in your React.js application
  3. Adding mocking to React.js Unit Tests
  4. Unit Testing React components that use Redux

Note: While my solution is aimed at ASP.NET MVC all React code and build/test files would work in any system supporting Node.js. You can try all the code using Visual Studio Code, which is a free tool that runs on multiple platforms.

What this article is all about

My aim was to produce sample applications that I could use as a basis for building real-world ASP.NET MVC applications that used React to help build the UI. As such they needed to have:

  1. Good development environment.
  2. Good Unit Test setup.
  3. Good production building setup.
  4. Examples in both flavours of ASP.NET, e.g. the existing MVC5 and the new ASP.NET Core MVC.

I ended up making a number of projects which I have made available as an open-source Visual Studio solution, AspNetReactSamples, on GitHub. This currently contains four projects:

  • ReactJsNet.MVC5, which uses the ReactJS.Net NuGet package.
  • ReactWebPack.Core, which uses WebPack and the new ASP.NET Core (version 1.0.0) MVC app. This project also contains a set of JavaScript Unit Tests. (NOTE: This project started out at Core version RC2 but I have updated the project to Core 1.0.0 now. The URLs still say CoreRC2 as it was difficult to change them, but it is really Core 1.0.0.)
  • ReactWebPack.MVC5, which uses WebPack and the existing ASP.NET MVC5 app.

The bulk of this article goes through the decisions I made when putting these examples together. Having ‘cut-and-pasted’ other people’s examples and got lost, I think knowing why I built something that way is as important as knowing how.

I’m going to assume you at least know what React is and that it has lots of associated libraries. If you don’t then there are plenty of articles out there on React [1, 2, 3] but I would also really recommend the book “Pro React”, which helped me a lot.

NOTE: In my review of the book on Amazon I said that the only down side of the Pro React book was it used Flux instead of the newer Redux library to implement stores. Well the author, Cássio de Sousa Antonio, saw my review and has made available an alternative version of Chapter 6 using Redux. How about that for good service!

UPDATE: I have recently taken Cássio’s new chapter on using Redux to heart and converted the ReactWebPack.Core project over to using Redux. You can read about why Redux is really useful and how to use it in my article Using a Redux store in your React.js application.

OK, let’s start with the first example of how to use React in an ASP.NET application. This is the simplest to use, so makes a good starting point for anyone looking at React for the first time.

1. For simple applications try ReactJS.Net

If you want a simple place to start with React on ASP.NET then try the ReactJS.Net NuGet package from Facebook. This allows you to write JSX code which it turns into JavaScript on the fly. I have an example in my sample application called ReactJsNet.MVC5. (React.Net is also available for ASP.NET core RC1, but with a few limitations. See Nuget package ReactJS.NET MVC6).

Note: JSX is a source format that combines html and JavaScript. Sounds horrible, but it works, although you can write regular JavaScript if you prefer. Read this for an opinion on why it’s better to use JSX. More on the impact of using JSX later, but here is an example of some JSX.

var Comment = React.createClass({
    rawMarkup: function() {
        var rawMarkup = marked(
            this.props.children.toString(), {sanitize: true});
        return { __html: rawMarkup };
    },
    render: function() {
        return (
          <div className="comment">
            <h2 className="commentAuthor">
              {this.props.author}
            </h2>
            <span dangerouslySetInnerHTML={this.rawMarkup()} />
          </div>
      );
    }
});

If this React class Comment was called with the following JSX code:

<Comment author={'Jon Smith'}>
   {'This is a **bold** comment'}
</Comment> 

then the rendered HTML would be as shown below. Note that the method ‘marked’ is a library call that turns markdown into HTML, so “**bold**” gets turned into “<strong>bold</strong>”.

<div class="comment">
  <h2 class="commentAuthor">Jon Smith</h2><span>
      This is a <strong>bold</strong> comment
   </span>
</div>

My verdict on ReactJS.Net

ReactJS.Net makes starting to use React really easy, but by default it uses the simple module linking of each library, i.e. using a global variable that other packages use, for example the way we often use jQuery in ASP.NET. In any substantial front-end system this style of module linking has its limits, and that is especially true with React as there are lots of libraries to include.

Therefore ReactJS.Net is a great solution if you want to add some simple React views to an existing ASP.NET project, or you just want to play with React. In my samples the ReactJsNet.MVC5 project implements the React Quick Start Tutorial of a simple chat.

However for bigger applications, or applications that need more React features such as stores and routing, we need to handle the finding and linking of JavaScript modules and libraries in a better way.

Note: You can use a system of linking modules on top of using ReactJS.Net, but then you need to use the more complex build process I describe in the next section.

However before we leave ReactJS.Net there is one feature that is very important – that of server-side rendering of React. This is really important as you can pre-render some React on the server and then tell the client-side React that the html has been pre-initialised and it will work with it. These sorts of application are called “Isomorphic Apps” (not a very helpful name in my opinion) but I don’t have the space to cover this. Have a look at these articles [1, 2, 3] and chapter 8 of the “Pro React” book I recommended to find out more about “Isomorphic Apps”.

2. Building React applications with JavaScript modules

For larger, more complex React applications you are going to have lots of separate files (React components are normally small) and lots of libraries that you want to link to. Linking by loading them in sequence is not a viable option. So you need two things: a) a package manager to get the libraries and b) a way to link your code and the libraries together.

2.a. Library package manager

In the JavaScript world there are a number of package managers, the biggest two being NPM and Bower. While I am a fan of Bower it soon became very clear that React uses NPM extensively. In the end it was obvious I needed to use ASP.NET’s package.json file and NPM, as using Bower would not have given me access to all the libraries I needed.

2.b. Linking code and libraries.

This one was more complicated and it felt like I had fallen down Alice in Wonderland’s rabbit hole! The build of a normal React application goes like this:

  1. Transpile the JSX: Because I have chosen to use the JSX format I need to translate that format into ordinary JavaScript (ES5). This needs a Transpiler. I used Babel, which is well respected, but there are others, like Traceur from Google.
  2. Extract ES6 Modules: Because I am using the Babel Transpiler it understands the new ES6 module format which everyone in the React camp seems to use. Babel currently converts ES6 modules to a number of formats such as Common.js, AMD/Require etc. (Note: the ES6 module feature is at the time of writing not supported by any browsers natively, but is coming soon to Microsoft Edge).
  3. Load and link the references: React libraries have lots of links to other libraries and it would be a real pain to have to hand-locate each library. There are a few ways to do this, but WebPack seems to be the way that most React developers use (a minimal example configuration is sketched after this list). There are a number of alternatives to WebPack.
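
To make that concrete, here is a minimal sketch of a WebPack 1.x configuration wired up to Babel. The entry and output paths are illustrative – the sample projects’ real webpack.config.js files contain more (source maps, separate development/production settings etc.):

// webpack.config.js – minimal sketch (paths and preset list are illustrative)
module.exports = {
  entry: './JsSource/index.js',          // your top-level React file
  output: {
    path: __dirname + '/wwwroot/js',     // where the bundle is written
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.js$/,                   // I use .js for my JSX files (see ‘Small decisions’ later)
        exclude: /node_modules/,
        loader: 'babel-loader',          // Babel transpiles JSX + ES6 down to ES5
        query: { presets: ['es2015', 'react'] }
      }
    ]
  }
};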

Other Benefits of this process – ES6 JavaScript

If you are using a Transpiler then it makes sense to make the most of it. What most people do is swap over to using the new ES6 JavaScript standard, which has a number of very useful enhancements. You have already heard about modules, but there are things like let and const and classes. Also many of the functions I used from the JavaScript underscore/lodash libraries, like _.map(), are built into ES6. The benefits are cleaner, more rigorous code, and you are somewhat future-proofing your code. Have a look at “Top 10 ES6 Features every busy JavaScript Developer must know”.
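
Here is a tiny, illustrative example of the sort of ES6 code you end up writing (nothing here is from the Kanban app):

// const/let, arrow functions and template strings – all handled by Babel,
// and the built-in array methods replace many underscore/lodash helpers
const names = ['card', 'list', 'task'];
let labels = names.map(n => `my-${n}`);   // ['my-card', 'my-list', 'my-task']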

Babel converts all the ES6 commands to regular JavaScript (ES5), plus has a babel-polyfill library so that the code will run on all the modern browsers. Babel can output code for older (ES3) browsers but you need to add some more settings and libraries (see this useful note about that).

The down side of this process – JavaScript ‘build’

The down side is that you have to ‘build’ the JavaScript, even in development mode. Anyone used to using JavaScript with ASP.NET MVC knows that it’s great to be able to debug JavaScript by simply editing the JavaScript, hitting F5 in the browser and you are away. Now you need to run the build process on every change.

However there is one saving grace. WebPack has a “watch” mode and if you start this it will rebuild when you save a JavaScript/JSX file. A normal build takes some time, 6 seconds for my simple sample, but because WebPack caches the build then changing a single file normally takes ½ second.

2a. Setting up a build process

I would say that setting up the build process was the hardest and most frustrating part of the project. There are lots of different ways to set it up, each with its own pros/cons. The main problem, especially with WebPack is if it didn’t work it was really hard to find out why. Let me tell you some of the decisions I made so you know where I am coming from:

Big decisions

  • I used Babel for my Transpiler. That is what everybody is using and it worked well.
  • I used WebPack for my module bundler because everybody else seems to use it. I found WebPack really hard work to configure but it works now. I have since heard about systemJs and jspm, which follow the ES6 module approach. I am definitely going to look into those!
  • I chose WebPack as the command to call to build as it has a ‘watch’ facility to recompile on changes to a file. I think that is best, but you can call Babel, which then calls WebPack.
  • I chose to write NPM “scripts” to call WebPack. These scripts live in the package.json file (see the sketch after this list). I quite like this, and it works for Unit Testing too (see later).
  • I chose to use the NPM Task Runner extension to call these scripts in Visual Studio, and the VSCode npm Scripts extension to run them in Visual Studio Code (more on that later). I tried other approaches such as the WebPack Task Runner, but NPM definitely worked the best.
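
As a rough illustration of that (the script names and exact commands in the sample projects may differ), the scripts section of package.json ends up looking something like this, so the same commands can be run from Visual Studio, VSCode or the command line:

"scripts": {
  "build": "webpack --config webpack.config.js",
  "build-watch": "webpack --config webpack.config.js --watch",
  "test-run": "karma start --single-run",
  "test-watch": "karma start"
}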

Small decisions

  • I chose to use the extension ‘.js’ for my JSX code rather than ‘.jsx’. A lot of people do that and the tools handle either. There is not much in it, but if you use .jsx then when you use a module import you have to add ‘.jsx’ on the end of the filename. (Note: the build files in the sample applications are set to ONLY look at .js files.)
  • This is a bit deep so ignore it until you understand WebPack, but I didn’t use WebPack’s dev server to deliver the bundled files in the development build. This was partly because I had problems with setting up sourcemaps, but I also liked having the bundled JavaScript file around to look at. However I did use WebPack’s dev server in the Unit Testing, for no better reason than it worked that way by default.

To see other people’s approaches to the React build process, some of which I learnt from, especially the first one, do look at:

2b. How to run the build process

The best way to study my build process is to download the AspNetReactSamples and run a build. There are two versions which build and run the same React application, but for the two current version of ASP.NET MVC:

  1. ReactWebPack.Core, which is built on the new ASP.NET Core MVC app.
  2. ReactWebPack.MVC5, which is built on the existing ASP.NET MVC5 app.

The full information on how to do this is in the Readme.md file in the AspNetReactSamples solution so I won’t repeat it here.

Note: One nice trick is to use Visual Studio Code (VSCode) alongside Visual Studio (VS2015). If you aren’t aware of VSCode, it’s a free, lightweight IDE which is especially good with JavaScript etc. In fact I found that VSCode understands the JSX format better than Visual Studio 2015, and I find myself developing with both VS2015 and VSCode open: VS2015 for the C#/ASP.NET parts and VSCode for the React parts. There are details in the Readme file on how to set up the build environment in both VS2015 and VSCode.

3. Unit Testing

One of the plus points of React is that it is easy to Unit Test, for two reasons. Firstly, React uses what is called a ‘Virtual DOM’, which is an in-memory DOM. The Virtual DOM means that the Unit Test tools can check the Virtual DOM rather than the actual DOM, which can be slow and troublesome. The second reason is that Facebook, and other React library suppliers like Redux, have thought about Unit Testing from the start and made sure their components/libraries are easy to test.

Even so, setting up a sensible Unit Testing environment still took quite a bit of research and trials. Let me share with you the choices and decisions I made so you can understand why I went this way:

Big decisions

  • I chose Mocha as my test framework. Facebook recommend Jest for Unit Testing React, but a number of posts [1, 2, 3] said Jest was slow and Mocha was better. I have also used Mocha successfully in the past, which gave me confidence in this approach.
  • I used Karma as my test runner. Most people do and I have used it before. Specifically it has a ‘watch’ facility and can interface with WebPack, which means it does a recompile and test when a test file is changed. That feature makes a big difference to the test/fix cycle.
  • I run my Unit Tests inside a real browser, Chrome, rather than a phantom browser. This adds a bit of start-up time but it does mean I can debug my Unit Tests (I talk about that later).
  • I used AirBnB’s Enzyme React testing utility. I had lots of trouble with my early tests, mainly because the “Pro React” book I mentioned used a very old version of React, and things had changed. However even the correct code was still quite complex, with extra libraries and different approaches for handling rendering to the DOM. The React Test Utilities page suggested Enzyme, which I found much simpler to understand and use than the standard React Test Utilities. Enzyme also handles the render/DOM issue that was bugging me when using the standard React testing approach.
  • I used mJackson’s expect assertions rather than the older expect.js library I used before. mJackson’s expect has shorter test names and has its own event spy part. The spy part is significant because I saw lots of questions/problems about getting Sinon, the normal package for doing spying on events, working with WebPack.
  • I placed my Unit Tests in another Visual Studio project, mainly because I wanted to keep my test code separate. UPDATE: It turns out that placing the Unit Tests in another project was a bad idea. The problem is you get the ‘multiple copies of React‘ error in some cases. I have therefore moved the JavaScript Unit Tests into the ReactWebPack.Core project (and I have updated all the links).
  • I added NPM scripts in the ReactWebPack.Core’s package.json file so that the tests could be run in same way as the build. That way I can call them for Visual Studio or VSCode. I tried other ways but this was certainly the best.

I have only done a limited amount of Unit Tests (only 5 Unit Tests at the moment). UPDATE: I have done a lot more work on Unit Testing around Mocking – read my new articles Adding mocking to React.js Unit Tests and Unit Testing React components that use Redux. Only a real application will tell me whether I have the right setup (my Backbone SPA had over 700 Unit Tests), but I am becoming more confident in the Unit Test setup.

 

In addition there are stacks of information on Unit Testing React because lots of people, like Facebook and Airbnb, use Unit Tests. Try:

3b. Running the React Unit Tests

Again, if you want to look at the Unit Tests, I recommend you download the AspNetReactSamples solution and look at the ReactWebPack.Core project. There are two files in the top directory, karma.config.js and tests.webpack.js, which control the Unit Tests, with certain libraries added to the package.json file. The actual Unit Test code is in the JsUnitTests directory. The Readme.md file in the AspNetReactSamples solution has all the information for running the Unit Tests, so I won’t repeat that. However here are some thoughts on Unit Testing React.

Debugging with Unit Tests

You can run the Unit Tests once by calling the command test-run. It will tell you what passed or failed. However when you do have a failing test it is very useful to be able to debug it by inspecting the code and live variables, changing the code and retesting. Therefore I have added a test-watch command that a) runs the tests, but leaves the browser open and b) will rebundle and rerun the tests if a .js file is changed.

Because the test-watch command leaves the browser open then you can use Developer Mode (F12) to place breakpoints in the code and rerun the tests by pressing F5 in the browser. Then you can inspect the code and the live variables to see where the problem is.

However because we are working with JSX and a Transpiler there is a wrinkle!

The issue of transpiled code

By default Chrome will find the source maps and show you the original JavaScript/JSX file. This is really helpful, but can be misleading as you aren’t looking at the actual code that is running. Mostly it’s fine, but imported methods/variables will have different names in the Chrome ‘scope’ window from what you see in your original code.

If you want to see the underlying ES5 code then the only way I have found to do that is to turn off ‘Enable JavaScript source maps’ in Chrome’s developer setting and press F5 again. Then you see the real code, which shows the correct variable names.

Once you get used to it it’s fine, but I got very confused at the start.

Note: Mocha has some useful commands when you are working on a large set of tests. The most important is the .only() method which, if added to a test/sub test, will only run that test. Also the .skip() method can be added to a test/sub test and causes it to be skipped.
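
For example (an illustrative test file, not one from the samples):

describe('CardActionCreators', function () {
  // .only() means Mocha runs just this test while you are working on it
  it.only('returns an action with the TOGGLE_CARD_DETAILS type', function () {
    //... test body
  });

  // .skip() leaves this test out of the run altogether
  it.skip('a slow test I am not interested in right now', function () {
    //... test body
  });
});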

Another useful feature is adding the line debugger; to your JavaScript, which will cause a breakpoint when it is executed if the developer tools (F12) are open.

Conclusion

I started this project by cut-and-pasting solutions I found on the web and got terribly lost, especially because some things like WebPack can ‘fail silently’, e.g. does not do what I wanted it to do, but did not produce any error messages. Therefore when I wrote this article I took the time to include the decisions I made and why. I hope that helps you to decide what you want to do, and more importantly help you understand what is wrong when it doesn’t work.

This has been quite a long research project for me, but at the end I have what I consider a really good example setup that I could use in a real project. My final solution ticks almost all the points I needed apart from having multiple React apps (but I know how to do that).

My journey with React has only just begun and there is much more to learn. If I have time I plan to look at systemJs and jspm as an alternative to WebPack (have a look at this video on HTTP/2 and systemJs to see why this looks like a good way to go).

UPDATE: My journey with React continues. I have written a new article called ‘Using a Redux store in your React.js application’ which explains the advantages of using a store like Redux and then detail the steps needed to switch your React application over to using Redux. The application ReactWebPack.Core in the samples has been altered to use Redux so you can see it in action.

UPDATE: Another new article called Adding mocking to React.js Unit Tests which adds a new capability to my Unit Testing tools.

Happy coding!

Flattening Entity Framework relationships with ExpressMapper

04/20/2016

This article looks at how Microsoft’s Entity Framework (EF) handles extracting data from nested relationships, i.e. linked EF classes. This is a process known as flattening. I will show that EF produces efficient SQL commands for this type of access.

I then talk about making the development of such accesses easier by using an Object-to-Object mapper, and introduce the new Flattening feature in ExpressMapper, a relatively new mapper that is quite lean and quick. The last part is a detailed description, with lots of examples, of how to use flattening in ExpressMapper and what it is capable of.

Quick background on flattening

If you understand the issues around relational databases, foreign keys and flattening then skip this.

Well constructed relational databases try to minimise the duplication of data, e.g. only have one copy of the customer details. They do this by creating extra tables to hold each of these unique pieces of data and using ‘foreign keys’ if another piece of data needs to refer to it. Let me give you an example.

In the diagram below, which shows the EF classes, a product has a ‘Variant’, which is linked by the foreign key ‘VariantId’ and that Variant can have a number of size information, given by the ‘Size’, which has a foreign key ‘SizeId’.

Product, ProductVariant, Size

The benefit of splitting the data like this is that we can change, say, the Color information on a specific ProductVariant and it is automatically picked up by all the products that point to it. In software terms we would call this an application of the DRY (Don’t Repeat Yourself) principle.

The down side is that if we want to show a list of products on screen it is quite likely that we want to include the Color and Size information. This means we need to ‘pick out’ the Color column from the linked ‘ProductVariant’ table, and the Name column in the Variant’s linked ‘Size’ table. This is called flattening and happens a lot in real-life applications.

How Entity Framework does flattening

I’m going to use a very simple example to show how EF does flattening. The diagram below shows a simple Father->Son->Grandson nested relationship.

ExpressMapper-father-son-grandson

The EF command to get all of the properties from all of the linked records would be:

var flattened = dbContext.Fathers.Select(f => new { 
    f.MyInt,
    f.MyString, 
    SonMyInt = f.Son.MyInt,
    SonMyString = f.Son.MyString, 
    GrandsonMyInt = f.Son.Grandson.MyInt,
    GrandsonMyString = f.Son.Grandson.MyString});

The result is an anonymous class containing all the data from all the relationships.

The T-SQL EF produces for this access

It turns out that EF produces a very efficient T-SQL command for this sort of flattening. The T-SQL command that EF puts out is as follows:

SELECT 
    [Extent1].[MyInt] AS [MyInt], 
    [Extent2].[MyInt] AS [MyInt1], 
    [Extent2].[MyString] AS [MyString], 
    [Extent3].[MyInt] AS [MyInt2], 
    [Extent3].[MyString] AS [MyString1], 
    [Extent1].[MyString] AS [MyString2]
    FROM   [dbo].[Father] AS [Extent1]
    INNER JOIN [dbo].[Son] AS [Extent2] 
        ON [Extent1].[Son_Id] = [Extent2].[Id]
    LEFT OUTER JOIN [dbo].[Grandson] AS [Extent3] 
        ON [Extent2].[Grandson_Id] = [Extent3].[Id]

This is an optimal T-SQL command and here is the execution plan that proves it.

ExpressMapper-father-son-grandson-sql

NOTE: Flattening is only useful for reading data, as any update to a column in a related table can only be done in EF by reading in the inner class, updating its data and calling EF’s .SaveChanges(). However reading data for display is very common and therefore we want it to be really efficient.

Using an Object-to-Object Mapper with EF

My experience has been that you spend a LOT of time writing EF .Select() statements like the one above in even a small application. They are repetitive and boring, which often leads to making mistakes. This caused me to build the open-source package GenericServices, which uses an object-to-object mapper (currently AutoMapper, but I am planning to change to ExpressMapper).

The idea behind using an object-to-object mapper (referred to as mapper from now on) is that you create a class, or classes, which represent what you need from the database. These classes are often called DTOs (Data Transfer Objects), or in ASP.NET MVC style ViewModels – I will refer to them as DTOs in this article.

The role of the mapper is to work out how to map the EF classes to the DTO(s). The mapper does this based on a set of rules. Let’s consider one implementation of an object-to-object mapper that will work with EF – ExpressMapper.

Introducing ExpressMapper

As I said earlier I have used AutoMapper, but I have been tracking the development of a new mapper called ExpressMapper, which has some advantages. ExpressMapper is newer than AutoMapper and relies on building LINQ commands for all its mappings. The main advantage is that its setup is much faster than AutoMapper, and one or two other small things make it fit with EF more easily.

However, while I wanted to start using ExpressMapper, it didn’t have the flattening feature. So, with the help of ExpressMapper’s author, Yuriy Anisimov, I added the flattening feature. This is now released in version 1.8.1, which you can find as a NuGet package here.

I will give you three examples of how ExpressMapper’s flattening works and then describe the differences between ExpressMapper and AutoMapper, which is used more widely because it has been around a long time.

1. Flattening nested references

1.a. Non-null nested references

In the case of flattening, ExpressMapper and AutoMapper use a concatenated name rule, e.g. SonMyInt would match Son.MyInt and SonGrandsonMyString would match Son.Grandson.MyString. So, looking at our previous example of Father->Son->Grandson, if we created a class as shown below and then ran the code at the end, ExpressMapper would produce the same sort of result, and the same SQL, as my hand-coded select would have done.

public class FlattenDto
{
   public int MyInt { get; set; }
   public string MyString { get; set; }

   public int SonMyInt { get; set; }
   public string SonMyString { get; set; }

   public int SonGrandsonMyInt { get; set; }
   public string SonGrandsonMyString { get; set; }
}

...
//code to set up the mapping 
ExpressMapper.Mapper.Register<Father, FlattenDto>().Flatten();
ExpressMapper.Mapper.Compile(CompilationTypes.Source);
//code to execute the EF command and get the data
var flattened = dbContext.Fathers.Project<Father, FlattenDto>().ToList(); 

The SQL code is the same as I showed for the hand-written code.

1.b. Nullable nested references

If either the Son or the Grandson can be null then these relationships are called ‘optional relationships’ in EF, or One-to-ZeroOrOne references in SQL. ExpressMapper handles null references, so mapping them is fairly straightforward. However you do need to remember that some of the DTO properties may be null if a nested relationship is null, otherwise EF will complain that it cannot put a null in them.

For example, if we take our previous example and make the Grandson an optional relationship with the following EF configuration (see the HasOptional call below) then the Grandson property may be null.

public class SonConfiguration 
   : EntityTypeConfiguration<Son>
{
   public SonConfiguration()
   {
      ToTable("Son");
      HasKey(t => t.Id);
      HasOptional(t => t.Grandson);
   }
}

Having done that, ExpressMapper will handle not having a grandson (or a son, if you set that as optional too). However we do need to change the DTO, as the SonGrandsonMyInt property can now be null (see the int? in the DTO below, based on the FlattenDtoWithNullable class in the Unit Tests of ExpressMapper).
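
Here is a sketch of that DTO (my rendering of it – the real class lives in the ExpressMapper Unit Tests):

public class FlattenDtoWithNullable
{
   public int MyInt { get; set; }
   public string MyString { get; set; }

   public int SonMyInt { get; set; }
   public string SonMyString { get; set; }

   //nullable, because the Grandson relationship is optional
   public int? SonGrandsonMyInt { get; set; }
   public string SonGrandsonMyString { get; set; }
}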

//code to set up the mapping 
ExpressMapper.Mapper
    .Register<Father, FlattenDtoWithNullable>().Flatten();
ExpressMapper.Mapper.Compile(CompilationTypes.Source);
//code to execute the EF command and get the data
var flattened = dbContext.Fathers
    .Project<Father,FlattenDtoWithNullable>().ToList(); 

The result of this will be that SonGrandsonMyInt and SonGrandsonMyString will both be null if no Grandson relationship exists, but will contain the Grandson’s MyInt and MyString if the relationship does exist.

NOTE: AutoMapper does not have this feature by default – see later section of differences between ExpressMapper and AutoMapper.

2. Running Linq commands on collections

As well as accessing columns in one-to-one relationships there is another feature that EF and ExpressMapper’s flattening can provide for one-to-many relationships. That is, it can convert a small but useful number of LINQ collection commands into SQL commands. The commands that EF supports are:

  • Any: This returns true if there is one or more rows in the collection.
  • Count: This returns an int of the number of rows in the collection.
  • LongCount: Same as Count, but returns a long (64 bit) type.

To use these you simply add the method name on the end of an EF POCO class property that implements IEnumerable (but not a string). Let me give you a very simple example. The code below shows an EF POCO class called FatherSons which contains a collection called Sons. The DTO below it has a property called SonsCount, which ExpressMapper turns into the LINQ command Sons.Count().

public class FatherSons
{
   public int Id { get; set; }

   public int MyInt { get; set; }
   public string MyString { get; set; }

   public ICollection<Son> Sons { get; set; }
}
public class FlattenFatherSonsCountDto
{
   public int MyInt { get; set; }
   public string MyString { get; set; }

   public int SonsCount { get; set; }
}

...
//code to set up the mapping 
ExpressMapper.Mapper
    .Register<FatherSons, FlattenFatherSonsCountDto>().Flatten();
ExpressMapper.Mapper.Compile(CompilationTypes.Source);
//code to execute the EF command and get the data
var flattened = dbContext.Fathers
    .Project<FatherSons, FlattenFatherSonsCountDto>().ToList(); 

The resulting SQL is again very good (see below):

 
SELECT 
    [Extent1].[Id] AS [Id], 
    (SELECT 
        COUNT(1) AS [A1]
        FROM [dbo].[Son] AS [Extent2]
        WHERE [Extent1].[Id] = [Extent2].[FatherSons_Id]) AS [C1], 
    [Extent1].[MyInt] AS [MyInt], 
    [Extent1].[MyString] AS [MyString]
    FROM [dbo].[FatherSons] AS [Extent1]

Full list of ExpressMapper flattening features

  1. Flattening looks at properties inside classes, either EF relationships or EF Complex Types. For the DTO to match it must match on two counts:
    1. The Name must match: The DTO name should be a concatenation of the names without the dots, e.g. Variant.Size.Name is accessed by having a property called VariantSizeName in the DTO.
    2. The Type must match: The type of the DTO property must be either:
      1. The same type as the inner property, e.g. SonMyInt can be of type int.
      2. A nullable version of the type of the inner property, e.g. SonGrandsonMyInt is of type Nullable<int>. This is useful when you have optional relationships (see 3 below).
      3. A class that has been registered with ExpressMapper as mapping between the inner property and the DTO class (see 2 below for a better explanation!)
  2. You can have nested DTOs within DTOs. For instance in our Father->Son->Grandson examples you can have a property called SonGrandson, which is a DTO class. The only rule is you must also register the SonGrandson DTO class as well as the outer DTO (see the sketch after this list). There is an example of this in the ExpressMapper Unit Tests – see the test on line 95 of FlattenEntityTests.cs and note the registering of the extra DTO class on line 27 of the same file.
  3. Flattening can handle optional relationships, i.e. null references to a class. In that case it will return null for all the properties you access in the nested class(es). See example 1.b above.
  4. Flattening can handle circular references, e.g. if class person has a property called Boss of type person then you can have a DTO property such as BossBossBoss to get the third boss up (or null if the hierarchy doesn’t go that high).
  5. Flattening also looks at collection properties, i.e. one-to-many relationships.  If the DTO contains a property with the name of the collection, plus an ending of Any, Count or LongCount then it adds the LINQ method of that name. The property must be of the right type, i.e. bool, int or long respectively otherwise you get an ExpressMapperException.
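
To illustrate point 2 above, a nested DTO setup might look something like this (the class names are mine, not the ones used in the ExpressMapper Unit Tests):

public class GrandsonDto
{
   public int MyInt { get; set; }
   public string MyString { get; set; }
}

public class FlattenWithNestedDto
{
   public int MyInt { get; set; }
   public int SonMyInt { get; set; }              //flattened scalar
   public GrandsonDto SonGrandson { get; set; }   //nested DTO
}

...
//both the nested DTO and the outer DTO must be registered
ExpressMapper.Mapper.Register<Grandson, GrandsonDto>();
ExpressMapper.Mapper.Register<Father, FlattenWithNestedDto>().Flatten();
ExpressMapper.Mapper.Compile(CompilationTypes.Source);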

Configuring flattening in ExpressMapper

  • Flattening only happens if you add the method .Flatten() to the registration of the mapping.
  • If you use ExpressMapper’s .Ignore() or .Member() methods in registration with Flattening then your .Ignore() or .Member() calls will take over from flattening, i.e. Flattening will not touch those source properties. The order in which you apply .Flatten, .Ignore and .Member does not matter.
  • The comparison of names follows the ExpressMapper configuration, which defaults to case insensitive. For instance, in the Variant.Size.Name case the match would work with variantsizename or any other upper/lower case sequence of the same letters.
    NOTE: this is useful when you are mapping to json in an API and you often want the first letter of the property to be lower case.
  • Flattening only works on the Source side, i.e. it does not ‘unflatten’ if you do a DTO->class mapping. I did not implement this because it just doesn’t work with EF, which needs you to link to tracked classes. It is also very difficult to get right in all cases, e.g. what do you do with a Count??
    NOTE: That is why you see the command ExpressMapper.Mapper.Compile(CompilationTypes.Source) in the examples. Because I only use the class->DTO direction then I don’t bother to compile the reverse direction.

Comparison with AutoMapper

Many people, including me, are already using AutoMapper, so it is useful to point out the differences.

  1. AutoMapper is a bit more specific on matching the concatenated names, and has some extra features for changing this. My implementation in ExpressMapper simply matches the next part of the string as it goes, plus the type. I think AutoMapper would not accept vAriantsiZenAme but ExpressMapper’s Flatten will.
  2. AutoMapper has other flattening features, like GetMethod. ExpressMapper’s Flattening does not support that as they don’t work with EF.
    NOTE:  The way to overcome this is either by specific .Member() set ups or more elegantly by using DelegateDecompiler. DelegateDecompiler allows you to add ‘computed’ properties which DelegateDecompiler can turn into LINQ code that EF can execute. I use DelegateDecompiler in my GenericServices package and it is very helpful. You should look it up!
  3. AutoMapper does not, by default, handle nullable nested references (see example 1.b above). This is a plus for ExpressMapper.

Conclusion

I wanted to get three main things across in this article:

  1. Entity Framework creates very efficient SQL code when accessing nested One-to-One or One-to-ZeroOrOne relationships.
  2. Object-to-Object mappers can make your life a lot easier when writing the EF POCO class to DTO code.
  3. ExpressMapper now has a good Flattening feature.

When I have the time (!) I plan to update my GenericServices project to use ExpressMapper. It has some nice features and my very simple testing says it is about eight times faster on setup, i.e. the registering of the mappings, than AutoMapper. However ExpressMapper is about the same speed as AutoMapper on the actual mapping of the data.

I should say that EF doesn’t do so well at producing efficient T-SQL commands if you use EF’s .Include() method to eager load properties that are collections. I would like to write about that later if I have time, as there are definitely some things to watch out for in cases like that.

Well done for getting to here. The article is a bit long but hopefully useful.

Happy coding!

Handling Entity Framework database migrations in production – part 4, release of EfSchemaCompare

04/07/2016

Some months ago I wrote an article on Simple-Talk ‘Deploying an Entity Framework Database into Production‘, with a more detailed series on this blog site (see list below). I have finally found time to release the package I mentioned in the articles as an open-source (MIT) project and also on NuGet as EfSchemaCompare.EF6.

In this article I explain where EfSchemaCompare is useful and give three real-world uses of the package in my own e-commerce application development. The whole series deals with the issues of creating/updating a database schema in a robust and testable way. The list of articles is:

Where would you find EfSchemaCompare useful

The main reason to use EfSchemaCompare is if you use EF with a database but also want to:

  1. Take over the creation, definition or migration of the database rather than having EF handle it.
  2. You want to build an EF model that works with an existing database.

If you work this way then you have a problem: you need to make sure any changes to the database are matched by changes to the EF classes/configuration, or vice-versa. For instance, if you added a foreign key relationship in your database, or a new column to a table, then you need to make the corresponding changes to the EF classes/configuration.

The important thing to see is that, other than when using EF reverse engineering (see next paragraph), EF never looks at the schema of the database, even when it is doing migrations. It just assumes the database is in the state that EF’s metadata model says it is. (For more on how this works under the hood see Max Vasilyev’s interesting article about how EF migrations are calculated and stored.)

So one way to overcome any mismatch between EF and SQL is to re-import the database via EF’s reverse engineer the database option. However I have found that to be a rather blunt instrument as it adds all the possible relationships and also has fixed names for properties.

However now EfSchemaCompare gives you a second option. EfSchemaCompare is designed to check that EF’s view of the database is in line with what SQL says the database schema is. You can use it in your Unit Tests, or in your deployment scripts. It returns a go/no-go response with useful error/warning messages to spot what is wrong. It can also help you work out how to set up SQL tables to handle some of the more complex EF table relationships, like Many-to-Many tables (see example 3 near the end).

Let me give you three examples of how I use EfSchemaCompare.

Examples of how I use EfSchemaCompare

I was building an e-commerce web site on Azure and I wasn’t comfortable that EF’s Code First Migrations approach was robust enough in my situation (read the first article for more details on why). I therefore swapped over to using DbUp, which is a script-based database deployment/update tool.

However it turns out that matching a database change to the form that EF needs is quite complex! I therefore wrote EfSchemaCompare. I will detail three different ways I use EfSchemaCompare in my Unit Tests and deployment tests.

1. General checking in Unit Tests

The NUnit Test below is part of my standard tests and it simply checks that EF’s current model of the database matches the database my Unit Test has.

[Test]
public void CompareEfSqlTestLabelAppDbOk()
{
    using (var db = new MyDbContext())
    {
        var comparer = new CompareEfSql();
        var status = comparer.CompareEfWithDb(db);

        Assert.True(status.IsValid, status.GetErrors());
        Assert.False(status.HasWarnings, string.Join("\n", status.Warnings));
    }
}

The method it uses, CompareEfSql.CompareEfWithDb, is fairly quick, assuming the MyDbContext has been used in the Unit Tests, and finds 90% of the issues that can occur. It also gives error messages that talk about the EF classes rather than in SQL terms, so it is easier for software developers to understand.

Note: The above version assumes the EF POCO classes are in the same assembly as the EF DbContext. There is another version if you have your EF POCO classes in a different assembly to the EF DbContext (I do!). There is also another version if you have multiple DbContexts covering one database.

2. Detailed checking before I release to production

The NUnit Test below checks my production SQL database against the development database. The SQL to SQL checking is more thorough, but only checks things that EF cares about (see list of limitations).

[Test]
public void CompareLabelAppDbWithAzureTestLabelAppDbOk()
{
    using (var db = new MyDbContext())
    {
        var comparer = new CompareSqlSql();
        var status = comparer.CompareSqlToSql( "LabelAppDb", "AzureLabelAppDb");

        Assert.True(status.IsValid, status.GetErrors());
        Assert.False(status.HasWarnings, string.Join("\n", status.Warnings));
    }
}

The method it uses, CompareSqlSql.CompareSqlToSql, is very quick. It takes either a connection string name or a full connection string. The error messages are all in SQL terms, so they can be harder to interpret if you aren't used to SQL, but the check is comprehensive.

Note that I use the stricter version where my Unit Test also fails on warnings (the Assert.False(status.HasWarnings, ...) line in the test above). Warnings are things that should not affect EF, but are worth knowing about. You may not want to be so strict, but in my application I am. See the documentation on the difference between errors and warnings.
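
If you would rather not fail the build on warnings, a less strict variant might only assert on errors and just print the warnings. Here is a sketch using exactly the same calls as the test above:

[Test]
public void CompareSqlDatabasesReportWarningsOk()
{
    var comparer = new CompareSqlSql();
    var status = comparer.CompareSqlToSql("LabelAppDb", "AzureLabelAppDb");

    //Only fail on errors; warnings are just written to the test output
    Assert.True(status.IsValid, status.GetErrors());
    if (status.HasWarnings)
        Console.WriteLine(string.Join("\n", status.Warnings));
}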

3. When I am making complex changes to the database

Some changes are really easy, like adding a new column to a table. However when you change relationships it can be a challenge to get the SQL and EF classes in step. There is another command, CompareEfGeneratedSqlToSql, which is very useful.

This generates a brand new database using EF, with the name of the DbContext but with ‘.EfGenerated’ appended on the end. It then compares this with your current database using SQL-to-SQL testing.

[Test]
public void CompareEfGeneratedSqlWithLabelAppDbOk()
{
    using (var db = new MyDbContext())
    {
        var comparer = new CompareSqlSql();
        var status = comparer.CompareEfGeneratedSqlToSql(db, "LabelAppDb");

        Assert.True(status.IsValid, status.GetErrors());
        Assert.False(status.HasWarnings, string.Join("\n", status.Warnings));
    }
}

The method it uses, CompareSqlSql.CompareEfGeneratedSqlToSql, is quite slow because it gets EF to create a new database. However it does two useful things: a) it compares exactly what EF would have created against what your SQL database actually looks like, and b) it leaves you with an EF-generated database, built from your current EF classes/configuration, that you can inspect yourself.

Some things, like EF’s Many-to-Many relationships, require the SQL database to have a new table with a specific name and setup. Failing to follow the style that EF needs will mean EF won’t fill in the many-to-many links. Adding the right tables is often a case of seeing what EF does by inspecting the ‘.EfGenerated’ database and copying the SQL EF generated into your new database (or implementing your own many-to-many handling of course, which I cover in Part 2 of this article).

The other thing I noticed is that EF is somewhat forgiving if you make mistakes when you use EF’s Data Annotations or EF’s Fluent API to alter the way the database works. For instance if you configure a required to optional relationship but give it a non-nullable foreign key it will add its own hidden nullable foreign key. In fact I found about three errors like this in my 22-table application, all around relationships.

EfSchemaCompare will point out these discrepancies so you can fix them. These minor configuration mistakes mostly show up through the column names, which typically contain an underscore in the middle. I have also found places where the cascade delete option was different, again because I had misconfigured a relationship. A sketch of this kind of misconfiguration is shown below.
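
To make that concrete, here is a minimal EF6 sketch of the kind of misconfiguration I mean, using hypothetical Order/Customer classes rather than my real ones. The relationship is configured as optional but without a HasForeignKey call, so EF ignores the non-nullable CustomerId property and quietly adds its own nullable Customer_CustomerId shadow column – exactly the sort of underscore column name that EfSchemaCompare flags.

using System.Collections.Generic;
using System.Data.Entity;

public class Order
{
    public int OrderId { get; set; }

    public int CustomerId { get; set; }     //intended foreign key (non-nullable)
    public Customer Customer { get; set; }  //navigation property
}

public class Customer
{
    public int CustomerId { get; set; }
    public ICollection<Order> Orders { get; set; }
}

public class ExampleDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        //Optional relationship, but no .HasForeignKey(o => o.CustomerId),
        //so EF creates its own hidden, nullable FK column instead of using CustomerId
        modelBuilder.Entity<Order>()
            .HasOptional(o => o.Customer)
            .WithMany(c => c.Orders);
    }
}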

Conclusion

Hopefully this article will give you a feel for why you might find the NuGet package EfSchemaCompare.EF6 useful, plus some tips on how to use it in your application. It's certainly a niche tool, but if you are using EF on a database that is not controlled by EF then it's a life-saver. Do have a look at the project on GitHub for documentation and Unit Tests that might give you more ideas on how to use it.

Finally I wanted to thank those who field tested the first version, including Hakon Thomas. Their feedback has shaped the tool and hopefully made the documentation a bit clearer too.

Happy coding.


Want some help with EF?

Are you looking for someone to help you use Entity Framework more effectively in your business? I am a contract software developer / architect with many years of experience and I am happy to work remotely.

Have a look at my Hire Me page and drop me an email via my contact me form to see if I can help.


 

Converting your ASP.NET MVC5 application to use Bower, Grunt and Gulp

02/09/2016

I changed over to Visual Studio 2015 (VS2015) a while ago, and one of the main reasons for moving was to use some of the new features for handling web packages. I also wanted to look at the preview of ASP.NET Core 1 to see how they used these features.

VS2015 comes with Bower as standard; Bower is a web package manager used across the whole of the web development community. In addition VS2015 supports a web build automation system using Grunt and/or Gulp. These are great tools for a number of reasons that I won't cover now, but at the end of this article I list some of the advantages I found when I swapped over – and I think they are really worth having!

Note: While this article talks about VS2015 it is also applicable to VS2013 as the same Bower, Grunt and Gulp features are available for VS2013 via add-on extensions. If you have VS2013 then read this article by Scott Hanselman on what packages you need.

I personally needed the power of these tools on an ASP.NET MVC5 e-commerce project which is nearing completion and we are doing fairly drastic things to the design and JavaScript front-end. I didn’t want to upgrade to ASP.NET Core 1 this late in the project, especially as ASP.NET Core 1 is very different to MVC5 and not released yet. I therefore set about harnessing these tools in an existing project, with the aim of following the ASP.NET CORE 1 style to make upgrading later as simple as possible.

NOTE: There is a sample application on GitHub to go with this article. In this application I have converted an existing ASP.NET MVC5 project away from using the normal NuGet/BundleConfig to using Bower and Grunt for handling web packages. It is open-source so please have a look and see if it is useful.

This sample application also contains a library called BundlerForBower, B4B for short, which takes over the role of MVC’s BundleConfig. I mention the use of B4B towards the end of this article, but I have also written a more detailed article about B4B called Introduction to BundlerForBower for ASP.NET MVC5.

UPDATE: BundlerForBower is now available on NuGet

BundlerForBower is available on NuGet – see https://www.nuget.org/packages/Bundler4Bower/ 

Swapping from NuGet to Bower

If you create a brand new ASP.NET CORE 1/MVC solution in VS2015 then the MVC project looks very different to the current ASP.NET MVC5 structure. This is because the whole structure has been reconfigured to be more applicable to modern web development (see the ASP.NET CORE 1 overview article for more information).

We are interested in how it handles the web packages because if we are going to adopt Bower etc. we want to use the same approach as the new ASP.NET structure so that any future upgrades will be simpler. There are many changes in ASP.NET CORE 1 but the key web package changes are:

  1. It automatically uses Bower, not NuGet, to load web packages like JQuery, Bootstrap etc.
  2. It includes a ‘Dependencies’ virtual folder that contains:
    1. A Bower folder containing our web libraries, e.g. JQuery
    2. An npm folder that contains our build automation tools, e.g. Gulp and plugins.
  3. It doesn’t have an App_Start/BundleConfig.cs as Gulp does the bundling/minification.

So let us now change our existing MVC5 project to match this approach.

1. Use Bower, not NuGet for web package management

If you have an existing MVC5 application you won't see the Manage Bower Packages option in any of your menus. It needs a file to unlock the feature. Therefore the very first step is to create a bower.json file in your MVC5 project, by right-clicking the MVC project folder in the Solution Explorer, selecting Add > New Item and then finding the "Bower Configuration file" in the list of templates, e.g.

CreateBowerFile

This will create the file bower.json and activate Bower. The bower.json is the file that Bower writes to when you load a package. This is useful for package restore, i.e. restoring the bower files if you haven't already got them loaded. As you don't normally include the web packages in your source control, you need this when loading the application into a different development environment.

You don't have to, but I also set up a .bowerrc file. This is the Bower configuration file and contains settings such as the directory where the packages will be written. I added the following to the .bowerrc file:

{
  "directory": "lib"
}

which sets the top-level directory 'lib' as the place where Bower will put the libraries it loads. If you don't do this then I think (I haven't checked and the docs don't say) the default is the top-level directory 'components', based on this stackoverflow question.

Once you have these files, when you right-click the MVC project folder in the Solution Explorer you should see a new option: Manage Bower Packages. (Note: I have found that this command doesn't always appear immediately after you have added the files. Restarting Visual Studio and reloading the solution fixes this.)

Selecting the Manage Bower Packages function brings up a screen (shown below) which is very like the new V3 NuGet screen. However in this case it is accessing the bower.io API to allow you to browse the packages available via Bower.

ManageBowerPackages

BE WARNED: I have found a few issues in the current Bower interface:

  1. I used the interface to try and find a package called 'fontspy' but it could not find it, although Bower's online search said it was there. However if I went into the bower.json file and typed "fontspy" IntelliSense knew about it and gave me the version. I then right-clicked the bower.json file and selected Restore Packages and fontspy loaded OK. The interface never showed that fontspy was loaded.
  2. I found Underscore using the interface, but it only gave version 1.5.2, which is an older version. If I went into bower.json and typed “underscore” it found version 1.8.3. I used that version by using Restore Packages to load it (see point 1).
  3. The interface says 'No dependencies' for packages that do actually have dependencies, which is a bit confusing. Clicking the 'project URL' and looking at a file called 'bower.json' will show you the actual dependencies. However, even if the screen doesn't show the dependencies Bower will load any dependent packages if required.

If you select and install a package two things happen:

  1. The bower.json file is updated with the loaded package(s).
  2. The package and any of its dependent packages are loaded into the directory pointed to by .bowerrc.

Here is the content of bower.json after JQuery, BootStrap and Microsoft.jQuery.Unobtrusive.Validation have been loaded.

{
  "name": "ASP.NET",
  "private": true,
  "dependencies": {
    "jquery": "2.1.4",
    "bootstrap": "3.3.5",
    "Microsoft.jQuery.Unobtrusive.Validation": "~3.2.3"
  }
}

By default the loaded packages are not included in the project, which is good as we don't want them to be sent to the web site, or stored in source control. However this means that to see them we need to turn on the 'Show all Files' option in VS's Solution Explorer, and after installing a new package you will need to 'Refresh' the Solution Explorer view. Here is a view of the loaded package directories with bootstrap opened up.

Note: You won't see the lib directory in the sample application as the packages are not included in source control. If you download the application you need to right-click on the 'bower.json' file and click Restore Packages. That will download the packages and populate the lib directory.

BowerLibShowingBootStrap

Note: you will see that the package jquery.validation is loaded even though it does not appear in the bower.json file. This is because Microsoft.jQuery.Unobtrusive.Validation's own bower.json file says it depends on jquery.validation, so Bower automatically loads it if it is not already there.

2. Remove web packages loaded by NuGet

Tip: It took me a bit of searching to find the equivalent web packages in Bower. JQuery was easy but some of the packages have different versions with very similar names. It's worth cross-referencing the files that NuGet loaded with the Bower equivalent before you uninstall the NuGet versions.

Previously you used NuGet to load your web packages, but when changing to Bower I really recommend you remove those packages from NuGet. This ensures that you won't get confused between the two package managers, and more importantly, if Bower updates its packages there isn't a chance that you are still using an older package loaded by NuGet. I also find it nice not to have a long list of JavaScript files under the 'Scripts' directory – now I can put my own JavaScript files in there and it's not confusing.

To uninstall the NuGet-loaded packages, simply open NuGet, select the 'Installed' tab and then 'Uninstall' the packages that Bower is now handling. Note that your current views and layouts will stop working, which leads me on to the next section.

Swapping from BundleConfig to Grunt/Gulp

In ASP.NET MVC5 CSS and JavaScript files were normally delivered by App_Start/BundleConfig.cs which also looked after bundling and minification in a released application. Also, if you required compiling of Less, Sass, TypeScript, CoffeeScript, templates, image spriting etc. you may have used Web Essentials.

The new way is to use build automation tools like Grunt or Gulp to achieve the same thing. These tools have a large base of what are called 'plugins', which include all the features of Web Essentials and much more.

You have a choice of using either Grunt or Gulp, or both. They both do the same thing but in slightly different ways. I ended up using Grunt for no more reason than I found some useful commands in Grunt format. Gulp is the standard automation tool if you create a new ASP.NET CORE 1 project, but as I said either will do the job.

Note: If you want to use Gulp instead of Grunt then I recommend this ASP.NET CORE 1 documentation on Gulp. If you want an example Gulp file then create an ASP.NET CORE 1/MVC project and look at the gulpfile.js that it produces.

1. Installing Grunt

The build automation tools need to be loaded, and for that we use NPM. NPM is a package manager that makes it easy to share code. Grunt (and Gulp), plus their extensions, called plugins, can be loaded by NPM into your application.

First you need to add an NPM configuration file to the project, in a similar way to the Bower configuration file, i.e. right-click the MVC project folder in the Solution Explorer, select Add > New Item and then find the "NPM Configuration file" in the list of templates.

This adds a file called package.json and opens that file for editing. Now you manually add the plugins you need. Thankfully you get great IntelliSense help with package names and their versions, so it's not too hard. What you put in depends on what you want to do, but here is my file as a start:

{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": true,
  "devDependencies": {
    "grunt": "0.4.5",
    "grunt-contrib-clean": "0.6.0",
    "grunt-contrib-cssmin": "0.14.0",
    "grunt-contrib-jshint": "0.11.0",
    "grunt-contrib-concat": "0.5.1",
    "grunt-contrib-uglify": "0.8.0",
    "grunt-contrib-watch": "0.6.1",
    "grunt-contrib-copy": "0.8.2"
  }
}

Here is a link to the reference article that I got this from. The only extra plugin I added was grunt-contrib-copy, which I needed for copying the bootstrap font files.

Once you have filled in or changed the package.json file you should right-click the file and click Restore Packages. This will cause npm to look for those packages and load them. If you do a Refresh in the VS Solution Explorer you should find the packages/plugins in the 'node_modules' directory.

NOTE: Be warned. If you have made a mistake in the file you won't get any kind of error feedback when you run Restore Packages – it still says 'Installing packages complete'. However if the packages don't seem to update, look at the Output Window > Bower/NPM.

2. Define your Grunt tasks

You define the tasks you want to call in the Grunt configuration file, added via the usual right-click on the project and Add > New Item. Select the Grunt Configuration file option, leave the default name, gruntfile.js, and click the Add button.

There are lots of ways of doing this and not really the space to explain all the possibilities, but I do give some links later. To give you a general introduction I have listed a very cut-down gruntfile.js which only concatenates and minifies the CSS files.

module.exports = function (grunt) {

    // Project configuration.
    grunt.initConfig({
        concat: {
            css: {
                src: ['lib/bootstrap/dist/css/bootstrap.css', 'Content/Site.css'],
                dest: 'css/styles.css'
            },
        },

        cssmin: {
            css: {
                src: 'css/styles.css',
                dest: 'css/styles.min.css'
            }
        }
    });

    // Load the plugin that provides the tasks we need
    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.loadNpmTasks('grunt-contrib-cssmin');

    // Default task(s).
    grunt.registerTask('default', []);

    // Build task(s).
    grunt.registerTask('build:css', ['concat:css', 'cssmin:css']);
};

In the example above I listed the files I wanted to concatenate from the packages that Bower loaded earlier. The simplest way to do this is to compare the names of the files you loaded via your existing BundleConfig class and look for the same names in the lib directory. The files you want are often in a directory called 'dist', but some packages like Microsoft.jQuery.Unobtrusive.Validation don't follow that rule.

With gruntfile.js set to the code shown above, if you open the Task Runner Explorer window (try View > Other Windows > Task Runner Explorer) you should see something like this:

TaskRunnerWindow

You can see the concat:css and cssmin:css tasks, plus the build:css task which executes concat:css followed by cssmin:css. The result of running concat:css is that you will find a file called 'styles.css' which is a combination (bundle) of the bootstrap.css file followed by the Site.css file. If you run cssmin:css then a second file called 'styles.min.css' will appear, which is the minified version of 'styles.css'.

You run commands by right-clicking on a command and selecting Run. Feedback is pretty good, with the output of the task shown to the right of the command part of the window.

There is plenty more you can do but the example above gives you the basic idea. I would refer you to these useful articles for more information.

Note: different gruntfile in the sample application

You will find that in the sample application I use a different approach to specifying the files that are in each bundle, because I want another feature, BundlerForBower (explained later), to also have access to the list of bundles and their files. Therefore I place the array of files in a json file which I store in the App_Data directory. You can see the complete version of my gruntfile.js here and the json file it reads here.

3. Binding actions to Visual Studio Events

TaskRunnerWindow

You can bind any of the actions in the GruntFile to Visual Studio events: Before Build, After Build, Clean and Project Open. You do this by right-clicking on the command you want to link and selecting Bindings. Below is an example taken from the ASP.NET grunt documentation, where you bind the watch feature to Project Open.

These bindings are stored as a specially formatted comment at the top of the gruntfile.js.

4. Delivering CSS/JavaScript to the Browser

The new ASP.NET CORE 1/MVC6 project does not use BundleConfig.cs to combine and minify CSS and JavaScript. It uses the new razor <environment names="Development"> tag to choose between the individual files or the concatenated & minified files produced by the Grunt/Gulp task. It seems that the approach is to include all the individual files in Development mode. In Production/Release it includes the concatenated and minified file, with a cache buster suffix added using the new asp-append-version attribute inside an HTML <script> tag. The asp-append-version attribute generates a unique hash based on the file content (see this helpful stackoverflow answer for an explanation).

In an existing MVC5 project we don’t have the new <environment> or the asp-append-version tag so we need to find an alternative. There are two ways to do this:

a. Use BundleConfig.cs to deliver the concatenated files

One of the reasons I chose the Grunt approach is it created both a concatenated, non-minified file and the minified file for both CSS and JavaScript. This means I could create a bundle inside BundleConfig.cs that looked like this:

bundles.Add(new StyleBundle("~/Content/css").Include("~/css/mainCss.css"));

In development mode it would deliver one large CSS file, styles.css, which isn’t minified. In release mode it would use the styles.min.css file with a cache buster suffix, i.e. a string that changes when the file changes which makes sure an older, browser-cached version isn’t used.
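
For completeness, here is a hedged sketch of what that BundleConfig could look like once Grunt is producing the files. The output names used here (styles.css plus a hypothetical appScripts.js) are just placeholders for whatever your own Grunt tasks write; the point is that MVC's optimizer picks up the .min version and adds the cache buster in release mode.

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        //Deliver the Grunt-built CSS. In debug mode MVC serves the concatenated
        //styles.css; in release mode it substitutes styles.min.css (if present)
        //and adds a cache buster suffix.
        bundles.Add(new StyleBundle("~/Content/css")
            .Include("~/css/styles.css"));

        //The same idea works for the JavaScript that Grunt concatenates
        //(the 'appScripts' name is just a placeholder)
        bundles.Add(new ScriptBundle("~/bundles/appScripts")
            .Include("~/js/appScripts.js"));
    }
}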

b. Using the library BundlerForBower (B4B)

The above solution works, but it throws up issues when debugging JavaScript. As the whole point of using Bower etc. is that you have a lot of front-end code to debug, the first solution is less than ideal.

I therefore built a specialised version of the MVC5 BundleConfig, called BundlerForBower or B4B for short, which is specifically written to work closely with Bower, Grunt and Gulp. It also follows some of the design styles used in the current ASP.NET Core 1 applications.

B4B consists of some code that performs a similar role to MVC's BundleConfig class, plus two Html helper extension methods that are very similar to MVC5's `Styles` and `Scripts` classes for delivering bundles in a view. It is therefore quite easy to convert existing MVC5 views over to B4B by making the following replacements:

  1. @Styles.Render("~/Content/mainCss") is replaced by @Html.HtmlCssCached("mainCss")
  2. @Scripts.Render("~/bundles/javaScriptBundle") is replaced by @Html.HtmlScriptsCached("javaScriptBundle")

As well as bundles that deliver groups of files from within your application ASP.NET Core 1 has a set of tags for delivering files from a Content Delivery Network (CDN), with fallback capabilities. I have reproduced this feature in B4B as it is so useful. Public CDNs are available across the world for many of the standard libraries like JQuery and Bootstrap, and can be faster than local delivery especially if the user has already loaded the standard library when using another web site.

NOTE: I have written an article called ‘Introduction to BundlerForBower for ASP.NET MVC5‘ which describes B4B in detail. B4B is an open-source (MIT) project and is included in the sample application.

My reflections on using Bower and Grunt

I think that Bower and Grunt/Gulp are a really good step up from the old NuGet, BundleConfig and Web Essentials. I spotted a few issues that I have noted in this article, but they were easy to get round and are likely to be fixed soon. The difference was immediately apparent as I was now much more in control of my web packages. It solved three problems that I had with the NuGet, BundleConfig and Web Essentials approach, which are worth listing as they point out some of the gains this new approach brings.

1. Better coverage of packages

I use a JavaScript package called JQuery-FontSpy, which isn’t in NuGet. Previously I had to copy the files I needed by hand from GitHub. However JQuery-FontSpy is, of course, available in Bower. This is typical of more specialised packages and one of the reasons why switching to Bower is so useful.

2. More up to date packages

I am using handlebars.js templates in my web application to dynamically create complex HTML at runtime via JavaScript. While a handlebars package is available via Nuget it is at version 3.0.0 which isn’t the latest and misses a feature I would have liked to use. However the Bower version is at version 4.0.3, which is the latest, and because Bower is the main way of releasing packages like this then it will always have the latest.

This reminds us that web packages only get into NuGet because someone takes the trouble to transfer them to NuGet, and they may not feel the need to keep them updated. Bower, on the other hand, is the primary tool for deploying web packages, so the latest and greatest version of a package will be available.

3. Bower libraries can contain more information

The NuGet versions of libraries like BootStrap and JQuery contain just the final .js and .min.js versions of the files. However if you load these libraries via Bower you find they have directories that contain the various parts of the library.

This is very helpful for me as I change my e-commerce site from using BootStrap to the final CSS design scheme. While I don't want much of BootStrap's CSS I would like to use some of its modules, like Modal, to save me reinventing that code. If I was using NuGet I would have had to go to the bootstrap site and make a custom download, which I would have to redo if any new version came out. However with Bower I simply install the bootstrap package and add the individual modal JavaScript and CSS (via less) files I need from the bootstrap\js and bootstrap\less directories to my gruntfile.js build script. If I want to add another module, like tooltip, I just include those files in my build script too. This is a much nicer approach than a custom download.

Even more web tools…

I should mention an alternative/adjunct to Bower etc. in the form of the 'Node.js Tools for Visual Studio' extension for VS. This has some extra features over plain Bower and Grunt/Gulp, such as JavaScript debugging and profiling. It is definitely worth a look.

I used Node.js as a separate application some years ago when developing a Single Page Application and Node.js was great, if a bit daunting to learn coming from a VS background. However it seems like ASP.NET CORE 1 is going to use Bower and Grunt/Gulp as standard, so I decided to start with that for now. You can use both the built-in Bower support and the Node.js extension together, so maybe I will progress to the Node.js extension in the future.

Conclusion

I have really appreciated the new Bower and Grunt/Gulp tools. They have made changing my e-commerce web site over to a proper design much easier. I also find the build tools are more comprehensive, yet allow me to just run one command, 'build', to make sure everything is up to date.

I did have to create BundlerForBower (B4B) for helping with the process (see separate article on this). B4B means I have one file, BowerBundles.json, to update and everything ripples through the system. I also like the Unit Test checking that B4B contains as it stops me forgetting something before I release to production.

Please feel free to take a copy of the sample application and play with it. Hopefully this article and the sample will help you convert to Bower and be ready for ASP.NET Core 1.

Happy coding!

Introduction to BundlerForBower for ASP.NET MVC5

02/09/2016

I have recently changed over my ASP.NET MVC5 application to use Bower and Grunt for handling all my web packages, e.g. JavaScript and CSS libraries. You can read about this in another of my articles called Converting your ASP.NET MVC5 application to use Bower, Grunt and Gulp. In changing over I tried to follow the approach that ASP.NET Core 1 uses with web packages. Most things, like its use of Grunt/Gulp for web build tasks, were fine, but when it came to the inclusion of JavaScript and CSS files in an HTML web page I had to find a new way.

This article is about my solution called BundlerForBower, or B4B for short, which is available in an open-source project. My B4B solution provides similar features to MVC5’s BundleConfig approach, but is designed specifically to work with Bower and the new Grunt/Gulp build process.

UPDATE: BundlerForBower now available on NuGet

BundlerForBower is available on NuGet – see https://www.nuget.org/packages/Bundler4Bower/ 

ASP.NET Core 1 approach

When swapping over to Bower and Grunt/Gulp I wanted to follow as closely as possible the approach that ASP.NET Core 1 uses, as I expect to change over to using ASP.NET Core 1 some time soon. As you will see from my other article it wasn’t hard to follow the same approach for most things, but for delivering JavaScript and CSS files in a HTML web page I had two problems.

The first was a technical problem, in that ASP.NET Core 1 uses new <link> and <script> tags which MVC5 does not have access to. The second was a design problem, in that the ASP.NET Core 1 approach was to have the user list the files to be delivered in TWO places: once in the Gulp file for the build process and again in the HTML views that needed them. That isn't DRY (Don't Repeat Yourself)!

These two things spurred me on to create a better solution. In fact I have made B4B configurable to work in both an ASP.NET MVC5 application AND in the new ASP.NET Core 1 application as I think the design of B4B, which is DRY, is useful in ASP.NET Core 1 too.

The parts of BundlerForBower

I am going to describe the various parts of B4B, but as a start let me introduce the three main parts.

  1. An extension class called BowerBundlerHelper, which needs to be placed in your MVC application so that it has access to various MVC features.
  2. A BowerBundles.json file that contains the list of bundles and their files. This is used both by Grunt/Gulp to prepare the files and by B4B to deliver the correct files at run time. Note: This file should be placed in the App_Data directory of an MVC5 project (not sure where to put it in ASP.NET Core 1).
  3. A class library called B4BCore, which the BowerBundlerHelper class uses to handle bundling. This class library also contains a class called CheckBundles that is useful for checking all your bundles are up to date before you release anything to production.

As you can see from point 2 this approach is DRY, as the same file that is used for building the concatenated and minified production files is also used by B4B to deliver the individual files in Debug mode and the production files in non-Debug mode.

Using BowerBundlerHelper to deliver bundles

The BowerBundlerHelper extension class has two Html helper methods that are very similar to MVC5’s Styles and Scripts classes, but applied as extension methods on the HtmlHelper class. They are:

  1. @Html.HtmlCssCached("bundleName"), which is equivalent to @Styles.Render("~/Content/bundleName")
  2. @Html.HtmlScriptsCached("bundleName"), which is equivalent to @Scripts.Render("~/bundles/bundleName")

Whether B4B delivers the individual files or the single minified file that Grunt produced is determined by whether the code was compiled in DEBUG mode or not. This can be overridden on each method by the optional 'forceState' parameter.

Using BowerBundlerHelper to deliver static files with cachebuster added

When delivering static files, e.g. images, you need to think about what happens if you change the file content. The problem is that if you change the file content but not its name, and caching is turned on, the user's browser will use the old file content, not the new.

The BowerBundlerHelper has a command to turn a normal file reference into one containing a cache busting value. For instance for an image you would use something like this in your razor view:

<img src='@Html.AddCacheBusterCached("~/images/my-cat-image.jpg")' />

There is a more detailed section on this later in this article.

BowerBundles.json file format

The BowerBundles.json file holds the data on what files are in each bundle. This is equivalent to MVC5's BundleConfig.cs in that you define your bundles here. I happen to think it's a bit simpler than BundleConfig.cs, mainly because Grunt/Gulp is doing much of the hard work.

The key thing is that this file is the single source of what files are in what bundles. This file is used both by Grunt/Gulp to concatenate and minify the files and by B4B to deliver the right files at run time. It is also used by B4B’s CheckBundles (see later) class to check that your bundles are all correct.

The file must, by default, be called BowerBundles.json and should be placed in MVC5's App_Data directory. The file format is a json object which can contain a mixture of two formats: one for files delivered from your web application and a second for using a Content Delivery Network (CDN) to deliver standard libraries.

1. Delivery of files from your application

For delivering bundles of files from your application, each bundle is a property that contains an array of strings holding the relative file references of each file you want in that bundle. Here is a simple example:

{
  "mainCss": [
    "lib/bootstrap/dist/css/bootstrap.css",
    "Content/site.css"
  ],
  "standardLibsJs": [
    "lib/jquery/dist/jquery.js",
    "lib/bootstrap/dist/js/bootstrap.js"
  ],
  "appLibsJs": [
    "Scripts/MyScripts/*.js"
  ]
}

The name of the property, e.g. mainCss, is the name of the bundle and the array is the list of files, in order, that should be included. So to include the mainCss bundle you would add the command @Html.HtmlCssCached("mainCss") to your _Layout.cshtml file, or whatever View needs it.

As you can see you can specify an exact file, or add a search string like "Scripts/MyScripts/*.js", but the order is then dependent on the names, and some (many) files need to be loaded in a specific order. Directory searches can include file searches as well, e.g. "Scripts/*/*.js", but at the moment I have not implemented Grunt/Gulp's /**/ 'search all directories and subdirectories' feature.

Please see the section in the ReadMe file which gives you the steps to add a new file bundle.

2. Delivery of files from Content Delivery Network (CDN), with fallback

B4B can also handle the delivery of JavaScript via a Content Delivery Network (CDN). You can define a CDN url, with fallback, in the BowerBundles.json file using the following syntax:

  "standardLibsCndJs": [
    {
      "development": "lib/jquery/dist/jquery.js",
      "production": "jquery.min.js",
      "cdnUrl": "https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.1.4.min.js",
      "cdnSuccessTest": "window.jQuery"
    },
    {
      "development": "lib/bootstrap/dist/js/bootstrap.js",
      "production": "bootstrap.min.js",
      "cdnUrl": "https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.5/bootstrap.min.js",
      "cdnSuccessTest": "window.jQuery && window.jQuery.fn && window.jQuery.fn.modal"
    }
  ]

The individual properties are explained in the B4B ReadMe CDN section but you can see from the data above we have both a ‘development’ file, which is delivered in debug mode, and a ‘production’ file that is delivered if the ‘cdnSuccessTest’ fails.

Using the above CDN bundle, standardLibsCdnJs, in your application will insert two <script> loads, each followed by a JavaScript section which uses the {cdnSuccessTest} code to check that the CDN content loaded properly. If the test returns true then the CDN worked and nothing else happens. However if it fails the JavaScript inserts an extra <script> load to pull in the file given by the {production} property. The code output by the first CDN definition would look like this:

<script src='https://ajax.aspnetcdn.com/ajax/jquery/jquery-2.1.4.min.js'></script>
<script>
   (window.jQuery||document.write(
   "\x3Cscript src='~/js/jquery.min.js?v=SnW8SeyCxQMkwmWggnI6zdSJoIVYPkVYHyM4jpW3jaQ'\x3C/script>"));
</script>

Please see the section in the ReadMe file which gives you the steps to add a new CDN bundle.

NOTE: At the moment I have not implemented CSS CDN support. The testing code is quite complex and I left it out for now. If someone wants to implement that then please let me know.

Adding a cachebuster to other static files

As well as bundles, B4B can help with individual static files, e.g. images. There is a command called @Html.AddCacheBusterCached("~/images/my-cat-image.jpg"). In this case a checksum of the file will be calculated based on its content and added as a cachebuster value.

In the case where you can pre-calculate the cachebuster value there is a second version, which looks like this: @Html.AddCacheBusterCached("~/js/jquery.js", "2.1.4").

The way that the cachebuster is applied is set by the `StaticFileCaching` property in the B4B config. This means you can use different ways of applying cache busting by adding your own `BundlerForBower.json` file with a different cache busting scheme (see next section).

By default B4B uses the standard ASP.NET approach of adding a suffix, e.g. the command @Html.AddCacheBusterCached(“~/images/my-cat-image.jpg”) would produce the following html.

http://localhost:61427/images/my-cat-image.jpg?v=xKyBfWHW-GTt8h8i8iy9p5h4Gx9EszkidtaUrkwVwvY

Note: I use a SHA256 hash, which is derived from the file content. However this does take some time on large files, so I cache the hash to improve later access times.
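
If you are curious what a content-based cache buster involves, here is a minimal sketch (not B4B's actual code) of hashing a file with SHA256 and turning the result into a URL-safe suffix like the one in the url above:

using System;
using System.IO;
using System.Security.Cryptography;

public static class CacheBusterSketch
{
    //A sketch of a content-based cache buster: hash the file content with
    //SHA256 and make the result URL-safe, so the value only changes when
    //the file content changes
    public static string CalculateCacheBuster(string filePath)
    {
        using (var hasher = SHA256.Create())
        using (var stream = File.OpenRead(filePath))
        {
            var hash = hasher.ComputeHash(stream);
            return Convert.ToBase64String(hash)
                .TrimEnd('=')          //drop the base64 padding
                .Replace('+', '-')     //make it safe to use in a url
                .Replace('/', '_');
        }
    }
}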

B4B’s options: BundlerForBower.json

I have tried to make B4B flexible, so I have put some of the key settings in a json file so that you can override them if you want to change things. It is also useful for me, as ASP.NET Core 1 will need different settings. Below are the default settings for B4B, held in the file defaultConfig.json.

{
 "BundlesFileName": "BowerBundles.json",
 "StaticFileCaching": "{fileUrl}?v={cachebuster}",
 "JsDirectory": "js/",
 "JsDebugHtmlFormatString": "<script src='{fileUrl}'></script>",
 "JsNonDebugHtmlFormatString": "<script src='{fileUrl}?v={cachebuster}'></script>",
 //see http://stackoverflow.com/a/236106/1434764 about why we need to escape the document.write()
 "JsCdnHtmlFormatString": "<script src='{cdnUrl}'></script><script>({cdnSuccessTest}||document.write(\"\\x3Cscript src='{fileUrl}?v={cachebuster}'\\x3C/script>\"));</script>",
 "CssDirectory": "css/",
 "CssDebugHtmlFormatString": "<link href='{fileUrl}' rel='stylesheet'/>",
 "CssNonDebugHtmlFormatString": "<link href='{fileUrl}?v={cachebuster}' rel='stylesheet'/>",
 "CssCdnHtmlFormatString": "" //I have not currently implemented CDN for CSS files. Doable, but complicated.
}

I have used meaningful names to make the settings more comprehensible. These cover things like the names/locations of the directories where the minified files are found and the <link> and <script> code that is output.

Note the parameter in the settings called {cachebuster}. B4B adds a suffix to production files, just like MVC's BundleConfig does, so that if the file changes the new file will be used rather than the previous version in the browser's local cache.

I actually use a SHA256 hash as the cachebuster suffix rather than, say, the time the file was last written. This allows me to just rebuild everything and the caching suffix won't change on files where the content is the same.

Changing the B4B options

I have used a flat object structure, as that allows you to override just the item(s) you want while leaving the other properties at their default state. For instance, to override just the directory where B4B looks for the minified JavaScript files, you would place the following json in a file called BundlerForBower.json in the MVC App_Data directory.

{
  "JsDirectory": "differentTopDir/bundles/"
}

See the following examples from the sample application:

  1. Override just the name of the bundle file – see this file
  2. Override all the properties – see ASPNET Core 1 Config/bundlerForBower.json

Note: This last example shows how you would change the setting to match what ASP.NET Core 1 would need.

Unit Testing your bundles

Early on I was using a prototype of B4B in an e-commerce site I am working on. I deployed some code to my test site and it didn’t quite do what I had thought it should. I realised I had changed some JavaScript code and had not rebuilt the minified file.

I am pretty paranoid about problems that could hit a production site, so I built a fairly comprehensive set of tests to check for any problems in the JavaScript and CSS bundles. The class is called CheckBundles.

To Unit Test your bundles you need to create the CheckBundles class in such a way that it knows where your MVC project is. If you are using the standard setup then the ctor can work this out if you give it a type that is in your MVC application, e.g.

var checker = new CheckBundles(typeof(BowerBundlerHelper));

Notes:

  1. I use typeof(BowerBundlerHelper) rather than something like typeof(HomeController) as I wanted a type that did not need me to add the System.Web.Mvc assembly to my Unit Tests.
  2. There are other versions of the CheckBundles ctor if you have an unusual setup. Please consult the CheckBundles code.

You most likely should run two tests:

  1. checker.CheckAllBundlesAreValid(). This checks all the bundles found in the BowerBundles.json file and returns a list of error messages. If there are no errors it returns an empty collection (see the sample project's CheckBundles Unit Test example). The rules it checks against are:
    – Does the bundle contain any file references?
    – Do any of those file references use a search string that B4B does not support, e.g /**/
    – Do all of those files and their directories exist?
    – Does the concat file exist? (can be turned off via ctor param if not using concat files).
    – Was the concat file updated more recently than all the files referenced in the bundle?
    – Does the minified file exist?
    – Was the minified file updated more recently than the concat file (or all the files referenced if no concat)?
    For CDN bundles it checks:
    – Does the configuration support CDN for this file type?
    – Does your CDN bundle contain all the properties that the CDN format string needs?
    – Does any of the file definitions contain a search pattern? That is not allowed.
    – Does the ‘Development’ file and the ‘Production’ file exist?
  2. checker.CheckBundleFileIsNotNewerThanMinifiedFiles(). This does what it says. It checks that you haven’t changed the BundleFile and not run the Grunt/Gulp build process to ensure the minified files are up to date.

There is a really good example of using these methods to check your MVC bundles in the sample application. Have a look at this Unit Test class, which uses CheckBundles in an NUnit based Unit Test. I find this very helpful.
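
If you just want a starting point, here is a hedged sketch of what those two tests might look like in NUnit. I am assuming both methods return a collection of error messages that is empty when everything is fine – check the sample application's test class for the definitive usage.

using NUnit.Framework;

[TestFixture]
public class CheckBundlesTests
{
    [Test]
    public void CheckAllBundlesAreValidOk()
    {
        var checker = new CheckBundles(typeof(BowerBundlerHelper));

        var errors = checker.CheckAllBundlesAreValid();

        Assert.IsEmpty(errors, string.Join("\n", errors));
    }

    [Test]
    public void CheckBundleFileIsNotNewerThanMinifiedFilesOk()
    {
        var checker = new CheckBundles(typeof(BowerBundlerHelper));

        //return type assumed to be a collection of errors - adjust to the actual API
        var errors = checker.CheckBundleFileIsNotNewerThanMinifiedFiles();

        Assert.IsEmpty(errors, string.Join("\n", errors));
    }
}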

Conclusion

Hopefully this article, plus the sample application with its own ReadMe files and Unit Tests, will give you a good idea of whether B4B could help you. I do recommend you look at the other article called "Converting your ASP.NET MVC5 application to use Bower, Grunt and Gulp" for an overview of how to use Bower etc. and how B4B fits into this.

I would appreciate your feedback on B4B. I have used it and been very happy with it, but I haven’t created a NuGet package yet. Anyone got a good link on how to produce multiple versions for each .NET version?

Happy coding!

How to take an ASP.NET MVC web site “Down for maintenance”

12/22/2015

I am working on an ASP.NET MVC5 e-commerce site and my focus is on how to apply database migrations to such a site (see previous articles on this subject). I have decided that for complex database changes I will take the web site “Down for maintenance” while I make the changes. This article details how, and why, I implemented this feature.

Having researched how to take a site offline I found some very good examples (see Khalid Abuhakmeh's good post and this helpful stackoverflow question & answer by FaNIX). They were really helpful (thanks guys) but my needs were slightly different. This article, which forms part of my series on Handling Entity Framework database migrations in production, describes my solution.

What I am trying to achieve

I want to create commands that allow me to stop all users, other than an admin user, accessing the web site. I am doing this to stop the database being accessed so that I can carry out a database migration. A database migration is normally followed by an update of the software, and I want the new software to obey any 'offline' mode that the previous software had set. This allows my admin user, or the automation system, to check the new site before bringing it back online.

From the customer's point of view I want to have friendly messages so that they know what is happening. I especially don't want someone to be half way through placing an order and then lose it because the site suddenly goes down. That is definitely a way to upset customers!

Bringing all these points together, the process for taking the web site down, doing the migration, checking it and bringing it back up has the following requirements:

  1. I want to warn my customers, so I put up a "the site will go down for maintenance in xx minutes" message.
  2. When the site is down it should:
    1. Show a useful message to the customers.
    2. Return a 503 Service Unavailable HTTP status code (so that web crawlers don't index the pages).
    3. Still allow my admin user to access the site so that I can check the upgrade has worked before making the site live again.
  3. If I upload new software then that should also come up as "down for maintenance" until the admin user has checked it out and put it back online.
  4. Finally my site is hosted on Microsoft's Azure Cloud offering, so I have the following technical requirements:
    1. I cannot log in locally, which affects how I know who the admin user is.
    2. I also want a solution that will handle scaling up on Azure, i.e. it must work if I have multiple instances of the web site running. This affects how I store the offline information.

Why didn’t I use app_offline.htm?

While adding a file called app_offline.htm to your root directory is a very quick and simple way of taking an ASP.NET web site offline (see the article by Scott Guthrie), doing some research turned up some issues. Khalid Abuhakmeh's post points out that app_offline.htm could cause problems with search engines (read his post for a fuller list of issues).

However the biggest issue for me was that the app_offline.htm approach does not let anyone access the site. I wanted to be able to log in and test the site after an upgrade, but before making it live again (requirement 2.3). That single requirement meant I had to find another way.

The other plus side to the solution I developed is a bit specific to my application, but it is worth mentioning. My solution uses Azure WebJobs which access the database. I therefore needed to stop these from accessing the database during the upgrade. There were a number of ways I could have done that, but it turns out that WebJobs can see the special offline file (described later), which can be used to put the WebJobs into a ‘stalled’ state during the upgrade. This meant my solution could stop the whole site including associated background tasks.

Using an Action Filter to redirect users

As Khalid Abuhakmeh and FaNIX suggested, the best way to implement this is by adding an MVC Action Filter. This intercepts each action call and allows you to change what happens. Here is my Action Filter code:

public class OfflineActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var ipAddress = HttpContext.Current.Request.UserHostAddress;
        var offlineHelper = new OfflineHelper(ipAddress, 
              filterContext.HttpContext.Server.MapPath);

        if (offlineHelper.ThisUserShouldBeOffline)
        {
            //Yes, we are "down for maintenance" for this user
            if (filterContext.IsChildAction)
            {
                filterContext.Result = 
                     new ContentResult { Content = string.Empty };
                return;
            }
                
            filterContext.Result = new ViewResult
            {
                ViewName = "Offline"
            };
            var response = filterContext.HttpContext.Response;
            response.StatusCode = (int) HttpStatusCode.ServiceUnavailable;
            response.TrySkipIisCustomErrors = true;
            return;
        }

        //otherwise we let this through as normal
        base.OnActionExecuting(filterContext);
    }
}

The decision as to whether we should tell the user that the site is "down for maintenance" is made by the OfflineHelper class, which I will describe later; it sets the ThisUserShouldBeOffline property to true if this user should be blocked. If true (see the if (offlineHelper.ThisUserShouldBeOffline) test in the code above) then we stop the normal page display and return the "Offline.cshtml" view instead, while also setting the StatusCode to ServiceUnavailable (503) so web crawlers won't index the pages while offline.

This action filter needs to be run on all actions. To do this we add it to the GlobalFilters.Filters in the Global.asax.cs file, e.g.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    //we add the filter to handle "down for maintenance"
    GlobalFilters.Filters.Add(new OfflineActionFilter());
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    ... etc.

The OfflineHelper class

I had to decide how the application would know it was in "down for maintenance" mode. In the end I decided to use the absence/presence of a simple text file to control the mode. Using a file seems a bit archaic, but it fits the requirements:

  • I didn’t want to use the database, as I want the database quiescent during migrations.
  • The file would still be there when new software is loaded, so the site continues in "down for maintenance" mode until the admin user uses a command to delete the file.
  • A simple file will work with Azure’s multiple instances when you scale up because all instances share the same local file system (see this helpful stackoverflow answer).
  • It allows an automated deployment script to take the site offline by writing an appropriately formatted file, and take it back online by deleting the file.
  • Plus if anything goes wrong I can use FTP to manually read, write or delete the file.

Here is the code from the OfflineHelper class:

public class OfflineHelper
{
    public static OfflineFileData OfflineData { get; private set; }

    /// <summary>
    /// This is true if we should redirect the user to the Offline View
    /// </summary>
    public bool ThisUserShouldBeOffline { get; private set; }

    public OfflineHelper(string currentIpAddress, 
          Func<string, string> mapPath)
    {

        var offlineFilePath = 
            mapPath(OfflineFileData.OfflineFilePath);
        if (File.Exists(offlineFilePath))
        {
            //The existence of the file says we want to go offline

            if (OfflineData == null)
                //We need to read the data as new file was found
                OfflineData = new OfflineFileData(offlineFilePath);

            ThisUserShouldBeOffline = DateTime.UtcNow.Subtract(
                OfflineData.TimeWhenSiteWillGoOfflineUtc).TotalSeconds > 0 
                && currentIpAddress != OfflineData.IpAddressToLetThrough;
        }
        else
        {
            //No file so not offline
            OfflineData = null;
        }
    }
}

As you can see from the code, the absence of the 'offline file.txt' file is a simple test of whether we should even consider being offline. If there is a file, and it hasn't already been read in, then we read it. This approach means we only take the performance hit of reading the file once, which is done via the OfflineFileData class (explained later).

If the offline file exists then there is a test to see if this user is allowed to access the site. If the time for the site to go offline hasn't arrived yet, or if the user is coming from the stored IP address (which, as we will see later, is taken from the authorised person who set the site to go offline), then the user is let through.

As we will see later the static OfflineData property is useful for showing messages.

The OfflineFileData class

The OfflineFileData class is in charge of the offline file and reading/changing its content. The class is a bit long so I will show it in two parts: a) the reading of the file, which is done when creating the OfflineFileData class, and b) the SetOffline and RemoveOffline commands.

a) Reading the Offline File

The code is as follows:

public class OfflineFileData
{
    internal const string OfflineFilePath = "~/App_Data/offline file.txt";
    //The format of the offline file is three fields separated by the 'TextSeparator' char
    //a) datetimeUtc to go offline
    //b) the ip address to allow through
    //c) A message to show the user

    private const char TextSeparator = '|';

    private const string DefaultOfflineMessage = 
        "The site is down for maintenance. Please check back later";

    /// <summary>
    /// This contains the UTC datetime when the site should go offline
    /// </summary>
    public DateTime TimeWhenSiteWillGoOfflineUtc { get; private set; }

    /// <summary>
    /// This contains the IP address of the authorised person to let through
    /// </summary>
    public string IpAddressToLetThrough { get; private set; }

    /// <summary>
    /// A message to display in the Offline View
    /// </summary>
    public string Message { get; private set; }

    public OfflineFileData(string offlineFilePath)
    {
        var offlineContent = File.ReadAllText(offlineFilePath)
            .Split(TextSeparator);

        DateTime parsedDateTime;
        TimeWhenSiteWillGoOfflineUtc = DateTime.TryParse(offlineContent[0], 
              null, System.Globalization.DateTimeStyles.RoundtripKind,
              out parsedDateTime) ? parsedDateTime : DateTime.UtcNow;
        IpAddressToLetThrough = offlineContent[1];
        Message = offlineContent[2];
    }
    //more below ....
}

The code is fairly obvious. It reads in three fields which it expects in the file and sets the three properties.

  1. TimeWhenSiteWillGoOfflineUtc: The DateTime in UTC format as to when the site should be offline.
  2. IpAddressToLetThrough: The IP of the admin person that put the site into offline mode, so we can let that particular person through.
  3. Message: A message that the admin person can give, like "Expect to be back by 9:30 GMT"

UPDATE: There was a bug in the parsing of the date, which did not take into account the UTC Z on the end. I have updated the DateTime.TryParse call to use the System.Globalization.DateTimeStyles.RoundtripKind style which has fixed it.
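
If you are wondering why RoundtripKind matters, here is a small sketch showing the difference: a plain parse of a round-trip ("O") formatted UTC string converts it to local time, while RoundtripKind keeps the UTC value that OfflineHelper later compares against DateTime.UtcNow.

using System;
using System.Globalization;

public class RoundTripParseDemo
{
    public static void Main()
    {
        //The offline file stores the time using the round-trip "O" format,
        //which appends the UTC marker, e.g. 2015-12-22T10:30:00.0000000Z
        var written = DateTime.UtcNow.ToString("O");

        //a plain parse converts that back to *local* time (Kind = Local)...
        var plain = DateTime.Parse(written);

        //...whereas RoundtripKind keeps it as the original UTC value (Kind = Utc)
        var roundTrip = DateTime.Parse(written, null,
            DateTimeStyles.RoundtripKind);

        Console.WriteLine("{0}: {1:O}", plain.Kind, plain);
        Console.WriteLine("{0}: {1:O}", roundTrip.Kind, roundTrip);
    }
}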

b) The SetOffline and RemoveOffline commands

The last part of the OfflineFileData class contains the commands to put the site into, and take it out of, offline mode. I built these as static methods, which are shown below:

public static void SetOffline(int delayByMinutes, string currentIpAddress, 
      string optionalMessage, Func<string, string> mapPath)
{
    var offlineFilePath = mapPath(OfflineFilePath);

    var fields = string.Format("{0:O}{1}{2}{1}{3}",
        DateTime.UtcNow.AddMinutes(delayByMinutes), TextSeparator, 
        currentIpAddress, optionalMessage ?? DefaultOfflineMessage);

    File.WriteAllText(offlineFilePath, fields);
}

public static void RemoveOffline(Func<string, string> mapPath)
{
    var offlineFilePath = mapPath(OfflineFilePath);
    File.Delete(offlineFilePath);
}

I think the code is fairly self explanatory. Note that the Func<string, string> mapPath parameter is used to pass in the Server.MapPath method from the MVC Action. This also allows the code to be easily Unit Tested.
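
For example, a Unit Test can pass a simple lambda instead of Server.MapPath and point the offline file at a temporary directory. Here is a hedged sketch (NUnit, with a made-up temp folder name):

using System;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class OfflineFileDataTests
{
    [Test]
    public void SetOfflineThenRemoveOfflineOk()
    {
        //map the '~/App_Data/offline file.txt' virtual path to a temp directory
        var tempDir = Path.Combine(Path.GetTempPath(), "OfflineFileTests");
        Directory.CreateDirectory(tempDir);
        Func<string, string> mapPath =
            virtualPath => Path.Combine(tempDir, Path.GetFileName(virtualPath));
        var offlineFilePath = Path.Combine(tempDir, "offline file.txt");

        OfflineFileData.SetOffline(5, "127.0.0.1", "Back by 9:30 GMT", mapPath);
        Assert.IsTrue(File.Exists(offlineFilePath));

        OfflineFileData.RemoveOffline(mapPath);
        Assert.IsFalse(File.Exists(offlineFilePath));
    }
}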

The Offline.cshtml View

The 'Offline.cshtml' View file is placed in the Views/Shared directory and looks like this (note: I am using bootstrap for my CSS):

@{
    ViewBag.Title = "Offline";
}

<div class="container">
    <div class="row">
        <div class="col-md-12 text-center">
            <h2>Our site is not currently available.</h2>
            <h3>@LabelApp.Utils.OfflineHelper.OfflineData.Message</h3>
        </div>
    </div>
</div>

As you can see, this is a fairly simple view that says the site is offline and also shows the message that the admin person entered when GoOffline was called. This could be something like "We expect to be back up at 7am".

The MVC Actions

The last piece of the puzzle is the MVC actions that the admin user calls to go offline or return to normal working. They are pretty simple, but I give them in case you aren't sure how to do this. The actions go inside a Controller, which I haven't shown, and you need some way to make them visible in the menu when a user with the Admin role is logged in.

[Authorize(Roles = "Admin")]
public ActionResult GoOffline()
{
    return View(new OfflineModel());
}

[HttpPost]
[ValidateAntiForgeryToken]
[Authorize(Roles = "Admin")]
public ActionResult GoOffline(OfflineModel dto)
{
    OfflineFileData.SetOffline(
         dto.DelayTillOfflineMinutes, 
         HttpContext.Request.UserHostAddress, 
         dto.Message, 
         HttpContext.Server.MapPath);
    return View("Index");
}

[Authorize(Roles = "Admin")]
public ActionResult GoOnline()
{
    OfflineFileData.RemoveOffline(HttpContext.Server.MapPath);
    return View("Index");
}

Again, fairly straightforward. Note that HttpContext.Request.UserHostAddress returns the IP address of the current user as a string. This is stored in the offline text file so that we can let that user through the offline check. Also, the OfflineModel class contains an int property called DelayTillOfflineMinutes and a string property called Message that the admin person can optionally fill in.
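
For completeness, a minimal version of the OfflineModel could look like the sketch below. The two property names come from the description above; the data annotations are just my suggestion.

// needs: using System.ComponentModel.DataAnnotations;
public class OfflineModel
{
    [Range(0, 24 * 60)]                    // allow up to 24 hours of warning
    public int DelayTillOfflineMinutes { get; set; }

    [StringLength(200)]
    public string Message { get; set; }    // optional message shown to users while offline
}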

What I have not shown

I have not shown the simple banner that appears when the web site is set to go offline. This is added to the default layout file, normally called _Layout.cshtml in the Views/Shared folder. It accesses the static OfflineData property in the OfflineHelper class and, if that is not null, calculates and shows the time until the site goes offline as a warning to the user.
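
If you want something similar, a rough sketch of such a banner is shown below. The OfflineData property and its TimeWhenSiteWillGoOfflineUtc/Message properties are the ones described earlier; the markup and wording are my own.

@if (LabelApp.Utils.OfflineHelper.OfflineData != null)
{
    var minutesLeft = (int)(LabelApp.Utils.OfflineHelper.OfflineData
        .TimeWhenSiteWillGoOfflineUtc - DateTime.UtcNow).TotalMinutes;
    <div class="alert alert-warning text-center">
        This site will go offline for maintenance in about @minutesLeft minutes.
        @LabelApp.Utils.OfflineHelper.OfflineData.Message
    </div>
}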

Also, in my system I give the admin user feedback that the call succeeded after the offline/online actions because, from their point of view, nothing has obviously changed.

Down sides of this approach

I always like to look at the downsides of any approach I use. When architecting any system there is nearly always a trade-off to be had. In this case we are adding an Action Filter that is called on every action call, which has a performance impact. The main performance costs are:

  1. Checking if the file exists.
  2. Reading the file content.

In my first version I read the file on every request; when the file existed this added an overhead of around 3ms, with a +-3ms deviation. In the newer version I only read the file the first time it is found. This improved the performance for the case where the offline file exists.

I have instrumented the creation of the OfflineHelper in the OfflineActionFilter, and 99% of the time is spent in the OfflineHelper, which you would expect. When running on a B1 Basic single-core instance, i.e. not very powerful, the times that the OfflineHelper takes are:

  • When online: average 0.3ms, +-0.1ms deviation
  • When offline or going offline: average 0.4ms, +-0.1ms deviation

Note: there is an additional cost of approximately 6ms the first time the file is read.

Clearly there is a cost to using this approach, as File.Exists() takes some time. It would be possible to add some caching, i.e. only look for the file if more than x seconds have passed since you last checked. At the moment I am happy to live with these figures.
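
As a rough idea of what that caching could look like, here is a sketch (my own suggestion, not code from the application). It only calls File.Exists if a few seconds have passed since the last check; it is not thread safe, but the worst case is an extra File.Exists call, which is harmless.

private static DateTime _lastCheckUtc = DateTime.MinValue;
private static bool _lastCheckFoundFile;

private static bool OfflineFileExists(string offlineFilePath, int cacheSeconds = 5)
{
    if ((DateTime.UtcNow - _lastCheckUtc).TotalSeconds > cacheSeconds)
    {
        // only hit the file system every few seconds; reuse the last answer in between
        _lastCheckFoundFile = File.Exists(offlineFilePath);
        _lastCheckUtc = DateTime.UtcNow;
    }
    return _lastCheckFoundFile;
}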

Other than that I cannot see any other major problems with this approach.

Conclusion

There are many ways to handle taking an ASP.NET MVC web application “down for maintenance”, but for my particular requirements when doing data migrations this seems a good choice. I particularly like that if you upload new software the site restarts still in offline mode, which is what I want, as I can then check that the migration and the new code work before taking it back online.

Please do have a look at the series Handling Entity Framework database migrations in production for more on database migrations. It is a big subject, but important.

I hope this helps you. Please do give feedback if you have any suggestions.

Happy coding!
