I have now moved my blog over to my own url
Microsoft have put together a nice little case study with SEAI on a project that we recently built. It was a full PaaS solution built on Azure Websites, using a cloud-style architecture incorporating queues and WebJobs for processing carbon credits and managing load on the site.
In the previous article I described the high-level requirements of a typical enterprise architecture that needs to expose its services over more touch points. I also provided some background on Microservice architectures, as it is with this type of design that we hope to tackle the problem. In this article we will expand on the discussion, moving to a more concrete example using an ecommerce solution.
One of the greatest challenges in any distributed architecture is defining the boundaries that you will use to partition your application. Domain Driven Design encourages us to separate our solutions into “bounded contexts”, each with its own model.
Using our example it is easy to identify Orders, Products, Inventory and so on, but there are also functional requirements that our solution has to provide which do not really sit within a business domain; image processing and full text search are examples.
The above diagram lists all the functional components of a full multichannel ecommerce solution. You could deliver most of these capabilities within a single .NET solution like NopCommerce, but as retailers become omnichannel the demands on their ecommerce systems have grown to the point that they cannot service all the touch points from a single solution whilst maintaining agility.
Creating a solution map of your system as above provides a good starting point for driving out all the different components of your Architecture.
If we take a look at an Amazon product page we can quickly see that there are at least 18 different domains being queried for data to build out the page. Inventory and price may be mastered in a core back end system but product metadata, reviews and product images are probably provided by a 3rd party service.
As we start to decompose the page into its individual elements we can start to draw boundary lines to separate out the architecture. Business domains are typically a good starting point for boundary identification, but data life-cycle should also be considered: a product packshot is unlikely to change during the lifetime of the product, whereas the price and inventory will change regularly.
A few years ago I worked with a team where we had large product pages, just like the Amazon one, but all product information was stored in a single datastore. This included everything from large slices of product metadata to price. As the website became more successful the business wanted to be more price competitive, which required us to deliver multiple price changes a day across the whole product catalogue. The net result was that every time a price change was delivered we had to evict the product from the various caches and reload it, which caused the product page to reload the entire product graph from the database; when the caches were cold this could almost take the website down. We solved the problem by separating out the data. Prices became their own entity with their own life-cycle, and a price change simply required the publication of a new price which would automatically get picked up. Consistency was not a huge issue for us, as the price was always checked at the last responsible point with a direct read from the database before the order was processed. This ensured that we did not have a zombie price in the order, and because of the shape of the ecommerce funnel this final check only happened for around 5% of the overall traffic, a much more manageable number of users.
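The final price check described above can be sketched roughly as follows. This is a minimal illustration only; all type and member names here are made up for the example, not taken from the original system.

```csharp
// A minimal sketch of the "check the price at the last responsible point"
// pattern. All type and member names are illustrative.
using System.Collections.Generic;

public class OrderLine
{
    public string ProductId { get; set; }
    public decimal QuotedPrice { get; set; } // price shown to the user earlier
}

public interface IPriceStore
{
    // Direct read from the database, deliberately bypassing any caches.
    decimal GetCurrentPrice(string productId);
}

public class OrderPriceValidator
{
    private readonly IPriceStore _priceStore;

    public OrderPriceValidator(IPriceStore priceStore)
    {
        _priceStore = priceStore;
    }

    // Returns false if any line carries a stale ("zombie") price, so the
    // order pipeline can ask the customer to confirm the new price.
    public bool AllPricesCurrent(IEnumerable<OrderLine> lines)
    {
        foreach (var line in lines)
        {
            if (_priceStore.GetCurrentPrice(line.ProductId) != line.QuotedPrice)
                return false;
        }
        return true;
    }
}
```

Because this check only runs at order time, the expensive direct database read is paid by the small fraction of visitors who reach the end of the funnel.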
Another important point when looking at your system for natural boundaries is service availability. If the product page could not render product reviews it might impact sales, but only marginally; however, if there is no price, product title or availability then it is unlikely that you would sell the product. Therefore “service level” makes another logical boundary for separation.
We will also want to expose this data to a number of other interfaces, both user and system interfaces, including our search page, basket page, order history and screens within a mobile app. Additionally we may even expose this data as an external datafeed for third-party affiliates and aggregators like Google Product Search.
In the traditional n-tier application we would have built a service layer that would probably have included a catalogue, basket and user service, each implemented as a class. To keep things clean we may have put our services into a separate DLL. This type of structure can be seen within the NopCommerce solution **.
The diagram below articulates the different capabilities working together to build the product page. It's not an exhaustive list but should illustrate the point.
Following the Microservices approach we would build out each of these capabilities as a simple REST service. The product controller of our MVC application can then use an async call to each service to build out the view model and render the page.
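The fan-out described above might be sketched like this; the service URLs, route and view model are hypothetical, not from a real system.

```csharp
// Sketch of a product controller fanning out to the microservices in
// parallel. Service URLs, route and view model are hypothetical.
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ProductPageViewModel
{
    public string Metadata { get; set; }
    public string Price { get; set; }
    public string Reviews { get; set; }
}

public class ProductController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<ActionResult> Details(string productId)
    {
        // Kick off all three calls before awaiting any of them, so page
        // latency is roughly that of the slowest service, not the sum.
        var metadata = Client.GetStringAsync("http://catalogue.internal/api/products/" + productId);
        var price    = Client.GetStringAsync("http://pricing.internal/api/prices/" + productId);
        var reviews  = Client.GetStringAsync("http://reviews.internal/api/products/" + productId + "/reviews");

        await Task.WhenAll(metadata, price, reviews);

        return View(new ProductPageViewModel
        {
            Metadata = metadata.Result,
            Price    = price.Result,
            Reviews  = reviews.Result
        });
    }
}
```

The key design choice is starting every request before awaiting any, rather than awaiting each call in turn.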
For further scalability and optimisation some of these calls can be pushed out to Ajax requests that make the call once the page has loaded, or when the user starts to scroll the page, creating a snappier user experience and supporting edge caching services like Akamai, but I will leave this for another post.
In this example the data within each service can change, however it is unlikely to be changed by the user. Instead there are external processes that will change this data; examples include new recommendations, price changes, product reviews etc. These changes will be driven by external systems or feeds. The orange boxes in the diagram represent interfaces that would change the data stored in each microservice.
We have a number of Architectural options available to build out our Microservice Architecture.
One of the greatest challenges of building Microservice architectures is the provisioning, deployment, management and monitoring of the services. Building our architecture on IaaS would result in us having to build out a solution for all of these ourselves, as Azure would only provide an SLA at the virtual machine level; everything else would be up to us. Azure offers better services for deploying and managing an application architecture, so I won't focus on IaaS here but may come back to it at a later date.
As our architecture landscape grows to many services it becomes important that we manage our run costs by optimising our utilisation of cloud resources. If we were to build out the above architecture using Cloud Services each component would have to be built as an individual WebRole.
Each Cloud Service is provisioned as a single virtual machine. To have a highly available service, two instances of each virtual machine have to be deployed; the above architecture would therefore require 42 virtual machines, not a very cost-effective use of resources. It is possible to host multiple WebRoles on a single virtual machine, however deployment happens at the virtual machine level, which would require a complete redeployment of a virtual machine to change a single service. This does not really subscribe to the principles of a Microservice architecture, and for this reason alone Azure Cloud Services is not the recommended approach.
** I have used NopCommerce as an example of a particular type of architectural style. My observations in no way imply that NopCommerce is an inferior product or that they should have tackled the design differently.
Microservices architectures have become a bit of a trend lately. As with all trends there are the lovers and the haters, however from my perspective Microservices, as defined in the excellent book by Sam Newman, describe the next evolution of application development. From my own experience building large scale ecommerce solutions, Sam's book was like a shot of clarity.
Martin Fowler has written an excellent summary of Microservices which is required reading for anybody who is thinking about venturing down this type of architecture. However, ThoughtWorks are still hesitant to promote a Microservices architecture due to the inherent complexity that is created by having an application composed of many granular parts. This complexity comes from having to provision, deploy and manage multiple applications within your eco-system.
Here is a definition from Martin Fowler's article:
In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
Microservices have gained increasing popularity in the “startup/massively scalable” space, with Netflix leading the charge. The Gilt Groupe is a great example of a company that has capitalised on the Microservices architecture. They are a flash-sale clothing retailer who started off with a Ruby on Rails monolith. After the initial success of the company they have spent the last few years moving away from the monolith and now have a system comprised of over 300 independent services. Netflix is said to have over 600 services and Hailo has about 160.
The Microservices label encompasses more than just the architecture; it also covers the processes and practices that are required to build these architectures. Many of the elements in Sam's book will be familiar to .NET developers who have been following best practices over the years, especially those that went down the CQRS route implementing service buses like NServiceBus.
But Microservices are not just for startups; they provide an excellent architectural model for enterprises who now need to:
The image below of the Windows 10 platform displays both the opportunity and the challenge in building digital services. How do you build a service that can provide an experience on every device from a smartphone to a large screen, or integrate data from an IoT device, yet also deliver a user experience on a HoloLens?
Traditional architectures relied on us building a single monolithic application that probably serviced only a single touch point. As we have to service more touch points we will want to project our digital service to each touch point, exposing only the user experience that is appropriate to the device. We will also need to change touch points independently and often.
An architectural vision for a modern enterprise would be an architecture that provides an excellent customer/user experience through focused touch points, supported by low-cost, scalable, highly available, redundant infrastructure that can sustain a high frequency of change at low risk.
The diagram below articulates a high-level logical architecture of a typical enterprise scenario, probably the as-is state of your enterprise application.
The architecture is broken up into five layers.
There are two key characteristics that separate the infrastructure requirements of the top and bottom of this architecture. The top layers need to be available 24/7, will change frequently, and are open to the public, so they need to be able to support traffic spikes and scale. The bottom layers may only require low availability, sometimes on a 9-to-5 schedule, change rarely, and don't need to scale as they operate on a predominantly batch-based life cycle. This is an important point when reviewing and selecting infrastructure to support the 24/7, frequently changing environments: that level of availability and change can only be achieved with high levels of automation, so the selected components must support automation.
As you can see from the diagram below we can use Azure services to deliver this architecture.
In the next article we will decompose an ecommerce solution into the above architecture.
As software is eating our world, how we buy, build and consume products is changing. This is a story about the Digital Economy and my experience dabbling in it.
It started about a year ago when I backed a new project on Kickstarter. They had a nice idea for a drone that could collapse down and fit in your pocket (a large pocket). They had a slick video and got some good media attention. The price looked good and I had had a pretty good experience with a few other Kickstarter projects, so I thought I would go for it.
Over the course of a year we got regular updates: pictures from factories, drones being assembled and tested. Everything looked good, albeit the project was taking a lot longer than expected. Finally, at the beginning of this year, I got an email confirming shipment. To say I was excited was an understatement. Then it went quiet. No delivery, no updates, nothing. About a month later I checked the Kickstarter page to see if there were any comments:
It did not take long to find out that something had gone very wrong with the campaign. Lots of people had received the wrong pledge, other people had not received anything, and worst of all nobody could get their drone to fly.
Luckily a couple of days later a parcel turned up and my drone had arrived. The box looked good and I opened it up to find everything there. I followed the instructions, charged it all up and went outside for my maiden voyage. But just as many others had found out, this drone wasn't going to fly in a hurry.
I later got a message from AirDroids, the gang behind the campaign, who claimed that they ran out of money and had to take personal loans just to get the campaign to the end. I am not sure I entirely believe it, however I am pretty sure that their intentions had been genuine; it was their execution that was poor.
Making products is difficult; product development, manufacturing, logistics and marketing are all hard. Just because a Kickstarter campaign gets noticed does not mean that it's going to be a success, especially when it comes to hardware.
Back on the Kickstarter page I found a link to a Facebook group that had been set up by people who had received the drone and knew what they were doing. https://www.facebook.com/groups/661830653926054/?ref=browser
An amazing guy called Rolf van Vliet, who obviously knows what he is doing, put together a fully comprehensive guide to the pocket drone, including all the modifications that you have to make to get it to fly.
Another couple of hours of calibration, modifications and playing about and I finally managed to get the little drone into the air! But the flying behaviour was still poor: it would fly for about 20-30 seconds and then crash, slowly destroying the airframe.
People are amazing. The number of people that came together on Facebook to help each other out is fantastic, and a big thanks for the time that Rolf van Vliet spent putting together the support guide.
Just because somebody can build it does not mean it's any good. The design of the pocket drone is very poor, breaking on crashes and unable to maintain calibration for flying. These guys were not engineers!
With all my crashes the drone was not faring well. Its landing gear had taken a battering and was destroyed. But this is the Digital Economy; that's not going to stop me! Somebody had designed a set of new legs and uploaded the designs to Facebook for others to download and print on a 3D printer.
Not to be beaten, I used a service called 3D Hubs, a broker for people with 3D printers. You upload your design and they will calculate the printing cost, suggest people in your area who can print your component, and manage the payment and transaction. Within a couple of hours I had a local guy here in Dublin printing me a new set of legs for the drone.
So back out with the drone, and a few more test flights later I am back to three broken legs.
Again, the fact that people have taken the time to model up these legs and distribute them freely is amazing, but just because a person can model something in 3D does not make them an engineer, and these legs did not last a single crash.
Rather than waste any more time with a poor design I chose to go with a new carbon fibre airframe that I found on Amazon.
Unfortunately it never turned up! So now I have tried buying it from a dealer in the UK on eBay.
There is so much technology wrapped up in these little drones, and it's amazing how many people are prepared to invest their time in building drones and helping others to have fun. But the number one thing I have learned is that just because you can build it does not make you an engineer. This got me thinking: just as the ubiquity of cheap PCs and accessible programming languages turned programmers into the new cool, maybe the next big thing will be for engineers. Maybe engineering will become a great profession again as demand grows for decent engineers who can design proper hardware, hardware that they can now manufacture on their bench.
Time to dust off my old engineering books!
Really proud to win Public Sector Project of the Year with the Energy Credits Management System, an energy measures platform built on Azure Platform as a Service for SEAI. A great solution delivered for a great client. I will come back soon with more details of the project, the architecture and how we built it.
— Version 1 (@Version1Tweets) May 14, 2015
Naming for ASP.NET is now 5.0
Middleware = IApplicationBuilder (there is a lot of contention around this).
The design-time host now compiles the code as you write, which provides better IntelliSense. Compiled bytecode is pulled from the design-time host into IIS Express in real time, in memory. This is the metaprogramming support; there are hooks so that you can do funky stuff on compilation.
KVM is the tool used to manage the KRE (K Runtime Environment) on the machine.
The app knows what the environment is via an environment variable. The variable is set to Production by default; installing Visual Studio sets it to Development. Use Startup.cs to evaluate the environment variable and change the system configuration.
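A sketch of evaluating that variable in Startup.cs follows. Note the variable name (ASPNET_ENV) and the diagnostics methods reflect the vNext beta builds and may well change in later releases.

```csharp
// Sketch only: the ASPNET_ENV variable name and the UseErrorPage /
// UseErrorHandler methods are from the vNext beta timeframe and may change.
using System;
using Microsoft.AspNet.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Defaults to Production when the variable is not set.
        var environment = Environment.GetEnvironmentVariable("ASPNET_ENV") ?? "Production";

        if (string.Equals(environment, "Development", StringComparison.OrdinalIgnoreCase))
        {
            app.UseErrorPage();            // verbose errors while developing
        }
        else
        {
            app.UseErrorHandler("/error"); // friendly error page otherwise
        }
    }
}
```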
SemVer 2 is supported as part of NuGet 3.
NuGet has poor support for content files, which is a challenge for vNext as the content files need to be moved as part of the build process. Bower is the package manager for content files.
Over the last few weeks a number of services I have deployed to Azure have experienced difficulty. Luckily I have pretty good monitoring on my apps that notified me of the problem.
What I really wanted was a way to correlate my apps having issues with any known problems in Azure. Luckily Azure has a number of excellent status feeds that are available for monitoring its services. With a little help from http://ifttt.com we can set up an alerting system so that you get a text/SMS message when the monitoring feed changes.
Here are my shared recipes so that you can set them up for yourselves.
If you use different services then you need to get the feed URL for each service and add them into your recipe. The full status board below has all the different services listed; if you click on the orange feed icon this will take you to the feed. Just copy the URL from your browser into the URL box in the IFTTT recipe.
These are my notes from the ASP.NET vNext Community Standup
Packing and Publishing is the process of taking the source and static files as they are represented in my source tree and staging them into a folder structure ready to xcopy onto a server.
The underlying process uses the KPM command line tool, which has a number of switches to control the output.
There are two levels of publishing:
The KRE can now run code without it having to be compiled first. This enables deployment to be a simple xcopy of the source files. If you want to compile all the source files up front then, rather than being compiled to a traditional bin folder, they will be compiled into a NuGet package and deployed into the packages folder. This is an important point, as it means that all executable C# code is modular and managed at the package level.
Your starting folder structure will look similar to the tree below; this is the structure that would get checked in to your source control provider.
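As an illustration (the project and file names are made up, and the exact conventions were still settling at the time), a vNext source tree might look something like this:

```
MyApp/
├── global.json              // solution-level settings
└── src/
    └── MyApp/
        ├── project.json     // dependencies and commands
        ├── Startup.cs
        └── wwwroot/         // static files served to the browser
            ├── index.html
            └── css/
                └── site.css
```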
All external NuGet packages are referenced under the dependencies node of your project.json file; however, the packages themselves are pulled down from the remote repository and stored in a local cache in your user profile. When your application runs the KRE will probe for the packages and use the cached versions.
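For example, the dependencies node of a project.json might look like this (the package names and version numbers are illustrative only):

```json
{
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-beta1",
    "Newtonsoft.Json": "6.0.6"
  }
}
```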
Packing the solution using kpm and the pack command will generate an xcopy-able deployable.
kpm pack --out <output path>
Output directory structure
This separation of source code and static files means that source code can never be served.
IIS or Kestrel anchors to the wwwroot folder.
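A sketch of the packed output illustrates this separation (the names are illustrative, not a guaranteed layout):

```
output/
├── approot/                 // application files; never served directly
│   ├── packages/            // dependent NuGet packages pulled from the cache
│   └── src/
│       └── MyApp/           // source files, compiled on the fly by the KRE
└── wwwroot/                 // the only folder IIS or Kestrel anchors to
```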
If you decide to compile your application when you package, the output folder structure will look similar to the below:
The MyAppPackage represents your source code compiled to a NuGet package; the pack process also pulls all cached versions of the dependent NuGet packages so that they can be xcopy deployed.
Using wildcards in version numbers enables automatic updates to dependencies without having to update each reference by hand.
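For instance, a wildcard dependency in project.json might look like this (the package name and version are illustrative), so a restore can pick up the latest matching build:

```json
{
  "dependencies": {
    "Newtonsoft.Json": "6.0.*"
  }
}
```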
KRE is the new runtime; it can be included in the packages folder if the CLR is not already installed on the target machine.
The CLR can either be deployed onto a server and shared, or deployed locally within the packages folder, in which case the local copy takes precedence.
F5 != kpm pack
The Visual Studio publish UI uses this underlying command.
My notes from the ASP.NET Community Standup 4
Main goal of ASP.NET 5 is to shrink the memory footprint of a request.
Data access is a first-class concern of web applications, therefore Entity Framework is becoming a first-class citizen of ASP.NET.
EF is becoming portable and is trying to provide a single consistent programming model for all data access: on the server, on the desktop, tablet or mobile.
EF 7 is also a complete rewrite. To make EF lighter the underlying EDMX has been removed, making it more modular and lightweight. Little code is being carried forward; the focus is on the APIs, designs and conventions that people have adopted.
EF 7 will be a PCL NuGet package, with the objective that it can run on Mono, Xamarin etc.
Providers will include Redis, Azure Table Storage.
No Model First, and no designer. Only code first, using annotations and the fluent API; however it will still include reverse engineering to generate code from an existing database table.
EF 7 is not recommended for a complex data model; its focus is on providing a common abstraction over simple data models.
Indexes will now become a first class citizen of the mapping infrastructure so that they can be managed via code first. Providers can use this metadata to generate their own interpretation of an index based on their store.
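A sketch of what code-first index configuration might look like; the EF7 fluent API surface was still in flux at the time, so the method names here are an approximation rather than a settled API.

```csharp
// Illustrative only: treat the Index(...) call as an approximation of the
// code-first index configuration described above; the EF7 API was in flux.
using Microsoft.Data.Entity;

public class Product
{
    public int Id { get; set; }
    public string Sku { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The index is declared in code and carried in the model metadata;
        // each provider decides how to realise it in its own store.
        modelBuilder.Entity<Product>()
            .Index(p => p.Sku);
    }
}
```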
Providers are created for each type of data store. There are some helpers for providers handling LINQ, and there are capabilities for evaluating those LINQ expressions in memory when the underlying data store does not support them.
Heavy focus on performance and memory management, including batching, change tracking and the generated SQL output.
If you want the EDMX and designer you should use EF6 which will be fully supported for the foreseeable future.