Tuesday, 6 December 2016

1..2…3…Freeze….Peak



I haven't written anything for ages as I've been the busiest I have ever been at my current employer - a large e-commerce website - as a result of preparation (planned and unplanned) for its largest ever "Peak" weekend, my company's name for the Black Friday weekend.

Sorry, this is a long read, so grab a coffee, skip to the Summary, or don't read it at all!

I've started writing this on the train coming back from London, where I was required to provide extra support for the Payments Platform (Domain) - where I ply my trade - a group of Microservices and Legacy components supporting Payment processing for the Website and sitting very close to the Orders Domain.

I expected to be bleary-eyed and wired to the hilt, full of coffee and adrenaline, as my colleagues and I worked like mad to pull out all the stops to make our giant, incredible machine work properly. But no, as our CIO put it, everything just worked (well, 99% of the time - more on this later).


The freeze part of the title refers to the fact that, in the weeks up to Peak, as we tried to ensure the stability of our software, only the most critical releases were allowed out. I, like many of my peers, do not think this is great. It is costly in many ways: feature development still going on actively within the company is shelved and starts to gather dust, and unreleased software accrues debt on many fronts. However, I can't be too hasty in judgement; this way of working comes from something biting my company really hard during this crucial trading period at some point along the way, so the mentality is understandable if not immediately completely excusable. Hopefully this will get better as the company gains more maturity, confidence and control over its systems.

Application Insights

As part of the preparations it became clear that our services did not have sufficient monitoring capabilities. Services were performance tested in quasi-production-like environments, so we had some idea about how they might perform, but only an idea. In other words, we had some expectations about performance but very little to help us see operational health, and measuring and keeping track of both would have been impossible without telemetry and logging of some sort.

We already had some monitoring available via dashboards powered by Grafana. A number of dashboards exist showing Order flow and, as a result, part of an Order's path through Payments, but only at a high level. Various counters represent the number of Orders at a particular status - e.g. arriving from the Website, currently being billed, shipped to the warehouse - and these are used to generalise the performance and health of the backend systems as Orders wing their way from the Website, through Order and Payment processing and much more, and then, eventually, out to the warehouse(s).

If problems start occurring - some thresholds being exceeded or not met, represented by numbers and traffic-light boxes - focused investigations can take place in a particular area. The problem with this is that in between the different statuses there is a vast swathe of software services in locations all over the world: new-world Microservices, Legacy systems, stuff in the sky, messaging, RESTful APIs, document DBs, SQL DBs, NoSQL DBs, edge caching and hardware on premise. Application Support have lots of fun trying to diagnose errors and, where they are at a loss, they hand over to software development teams, who mostly run around in blind panic at the same time.

More recently, with the release of updated versions of the Website (which is multi-platform/device), New Relic was enlisted to help see how the various APIs consumed by the Website were behaving. This has helped to start to build a more fine-grained picture of the operational behaviour of the APIs (and, by association, the Website): HTTP statuses can be obtained and detailed analysis gleaned about response times, payload sizes and customer locations, but that's about it in terms of insight into what an API is actually doing. There is yet another monitoring tool called Kibana (sitting atop the ELK stack) which is used for some of our Azure-based services; largely this gives transparency to telemetry just like New Relic. My money's on New Relic in this area though.

So performance counters, custom and otherwise, were conveniently placed at the door of our APIs, on the 3rd-party calls within them, and in and around the messaging infrastructure used in the backend processing for our services. Relevant logging was added too, to ensure that in the event of failure or exceptional behaviour we had some trace of what was going on. Having all of this information but not at your fingertips, though, is a bit of a nightmare.

Consolidating all of this into one single place is achieved to dramatic effect using Application Insights, hosted in Azure. Simply add the appropriate libraries to the service, hook it up to an App Insights resource in Azure and, boom, you are presented with a bewildering array of options for viewing your telemetry. Coupled with this you can also leverage Analytics to run queries over all of the data collected, and herein lies a problem: making sense of all of this information is hard. Certainly there are some easily noticeable headline acts, but more subtle problems - a semi-dodgy 3rd-party call, reasonably flaky database writes - can only be eked out after some considered tweaking of queries and the like. Also, without reasonable SLAs from the business in most places, and given that a message queueing system is used, what alerts should be set up, and when they should fire, is a question we are still asking ourselves.
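As a flavour, hand-rolled dependency tracking looks something like the sketch below. The class and method here are made up for illustration, but TelemetryClient, TrackDependency and TrackException are the SDK's entry points:

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;

    public class PaymentProviderClient
    {
        private readonly TelemetryClient _telemetry = new TelemetryClient();

        public void Bill(string orderId)
        {
            var started = DateTimeOffset.UtcNow;
            var timer = Stopwatch.StartNew();
            var success = false;
            try
            {
                // the actual call to the 3rd party would go here...
                success = true;
            }
            catch (Exception ex)
            {
                _telemetry.TrackException(ex); // surfaces under Failures
                throw;
            }
            finally
            {
                // surfaces as a dependency with its own duration and success rate
                _telemetry.TrackDependency("PaymentProvider", "Bill", started, timer.Elapsed, success);
            }
        }
    }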

Long nights and Instability

To get to a good place, a lot of pain needs to be had, and boy did we experience our fair share in the run-up to Peak. With SAN migrations causing mayhem with our messaging, Windows Cluster failovers misbehaving and Orders being deleted in Azure, there was plenty of time to practise solving problems quickly while still meeting "customer promise" - the term used for meeting the cutoffs for the various delivery options, which is not taken lightly.

Graph Watching First Blood Part III

The culmination of all the Black Friday prep was a series of graph-watching sessions: eyes trained on graphs rolling around in multi-tabbed browsers, with lots of suggestion and conjecture about what this trend or that trend was saying. I was personally involved in 4 of the 5 days of Peak support - remotely on Thursday, Black Friday and some of Saturday, and on premise (by demand) on Cyber Monday. Certainly at the beginning of the event focus was very high and everyone was apprehensive about what might occur and what problems might need to be addressed. Previous years' Peaks had seen all manner of catastrophes, including site outages.


   
The biggest problem this year, then, from a Payments perspective, was that a Legacy component couldn't reliably recover and compensate when calls to a Payment Service Provider timed out, so Orders would be left in a state where we didn't know whether they had been billed. Simply retrying the Orders meant risking a cancellation, as the Legacy component is pretty dumb, so a quick support tool was knocked up and, like a scene from Lost, a button was pushed every so often to make sure these Orders were processed properly.

The highlight, though, is that everything generally worked. On Cyber Monday, due to a failure in a voucher service, Orders dropped off a cliff. Fixing the problem and spreading the word to our persistent and patient customers meant that between 9 and 10pm an average of 55 Orders a second were being processed, all of which landed on Payments' doorstep. The system coped admirably, and a backlog of messages - up to about 40k at one point, spread across our system - was slowly but surely dealt with.

Summary

The Payments Platform now has a lot of data regarding its capabilities during spiky load periods, a greater operational understanding, and has breathed a big sigh of relief having been able to support Black Friday. Thing is, it's going to be even bigger next year, and we are just about to start preparing to make our Platform even more resilient, scalable and available. I'm looking forward to it. Come this time next year we will have a lot more control of our systems, and the maturity and experience to put what we have learnt to good use.

Up Next....Reactive Extensions or maybe even HAL (Hypertext Application Language)




Sunday, 5 June 2016

Learning to TDD is like learning to drive a car

It's true

I'm sure this comparison has been made before, but I'll have to tire it out a bit more. When I was learning to drive I was preoccupied with where to put the gear stick, where it actually was, when to use the clutch and so on and so forth, and I was flustered all the time.

But, quite quickly, with continued practice and with growing familiarity and confidence with the car, I could think about where I wanted to go instead of every detail of what I needed to do to get there, and now I drive like Lewis Hamilton.

The parallels between learning to drive a car and learning to test drive application development are striking.

Before I could start to appreciate the nuances of TDD and be able to decide what works for me (for example, choosing when to use classic and mockist TDD), I had to become comfortable with some pretty basic things: a good refactoring tool (ReSharper), a good test runner and unit testing framework (NUnit) and a good isolation (mocking) library (Moq). All of these tools have their proponents and detractors, but this is what I use for the majority of the TDD I do at the moment. That could change; the principles won't.

I've been TDD'ing - or at least trying hard to TDD - over the last couple of years and it has been a grind.

I've read TDD by Example by Kent Beck about 10 times, watched plenty of videos and read blogs, and - in spite of being told that practice makes perfect - did not practise anywhere near enough. I just didn't have the opportunities in work time alone to do this (but that's another story, and that is in spite of being at a TDD-first org).

I took my lack of practice by the horns after reading Sandro Mancuso's book, The Software Craftsman. This, coupled with Roy Osherove's string calculator kata, Growing Object-Oriented Software, Guided by Tests (GOOS) by Steve Freeman and Nat Pryce, Mocks Aren't Stubs by Fowler and Sandi Metz's take on what and what not to test, meant a few things changed and things have, seemingly, finally clicked.

TDD: Bigger than the sum of its parts

Using the tools I mentioned above will get you writing unit tests pretty quickly, but not TDD'ing. Using the tools and TDD'ing will get you TDD'ing, but most likely in a haphazard manner.

Firstly, TDD'ing at all requires confidence in the tools mentioned above, and not worrying or having to think about how to use them - it's not that they are extremely complicated to use, but things can get bloody confusing. Using Moq, we can create Mock objects which are Stubs, or we can use Mock objects to verify behaviour with them. Of course, this can be done without Moq, but it is great at creating Mocks and Stubs quickly.
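To make that concrete, here's a contrived example of the same Moq object being used both ways. The IPaymentGateway and Checkout types are invented for the purpose, but Setup/Returns and Verify are the real Moq calls:

    using System;
    using Moq;
    using NUnit.Framework;

    public interface IPaymentGateway
    {
        bool Authorise(decimal amount);
    }

    public class Checkout
    {
        private readonly IPaymentGateway _gateway;

        public Checkout(IPaymentGateway gateway)
        {
            _gateway = gateway;
        }

        public void Complete(decimal amount)
        {
            if (!_gateway.Authorise(amount))
                throw new InvalidOperationException("Payment declined");
        }
    }

    [TestFixture]
    public class CheckoutTests
    {
        [Test]
        public void Authorises_payment_when_completing_an_order()
        {
            var gateway = new Mock<IPaymentGateway>();

            // used as a Stub: a canned answer, no assertions made against it
            gateway.Setup(g => g.Authorise(It.IsAny<decimal>())).Returns(true);

            new Checkout(gateway.Object).Complete(10m);

            // used as a Mock: the interaction itself is verified
            gateway.Verify(g => g.Authorise(10m), Times.Once());
        }
    }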

Using a good refactoring tool really helps speed things up too but, again, to use one fairly well there is a steep curve in learning useful shortcuts; once that has been done, knowing when to pull objects into existence and when they should be Mocked or Stubbed still needs to be learnt.

So we have the tools, and we need to learn how, when and why to use them. This is a lot to learn and then apply. However, when all of this starts to fall into place, wonderful things start to happen, and the productive ebb and flow of TDD, which before was likely staccato, comes to the fore.

With this - and something I forgot to mention - pairing with people when TDD'ing loses its scare factor. Worrying about how to use the tools is a real disadvantage and can disrupt the flow of pairing; it can make you feel pretty stupid if your partner is already comfortable and you are far from it. Nobody likes to appear a novice - not knowing how to find a gear or the accelerator, to continue the driving comparison.

So the upshot of having the prerequisites in place is that shaping applications - driving the design by pulling in collaborators (either by Mocking or Stubbing them) with a quick refactoring shortcut - can all be done under the TDD cycle with confidence, and very nearly without a thought as to how you are using the tools to get to your end goal; you are just using them as a means to an end. This is a really fulfilling feeling.

Summary

I am writing about my experiences with TDD because I've struggled with it; watching people around me get it while not having that same Eureka moment (or moments) myself added to my anxiety, stress and worry that I would never be able to TDD effectively. But I think I am just about there: I have always liked TDD'ing, and I now think I can do it pretty well. On that note, if other people are having trouble with it, stick with it, practise a bit more and read a bit more too - making sure you are reading the right things, as some of them will likely only make sense once you have some of the other fundamentals boxed off.

Happy TDD'ing.



Monday, 25 April 2016

Newline character is 1 character

I will never forget the above

When you are using Substring and can't work out why "\n" is not 2 characters, refer back to this title. Does it make me look stupid? Yes. Hopefully I will work it out sooner next time. To be fair to myself, it does read like two characters: a '\' and an 'n'.
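Or, spelled out in code:

    Console.WriteLine("\n".Length);   // 1 - an escape sequence is two characters in source but one in the string
    Console.WriteLine(@"\n".Length);  // 2 - a verbatim backslash followed by an 'n'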

That is all. 


Tuesday, 19 April 2016

A Beautiful Solution?

^^Perhaps^^

For various reasons my team and I hadn't been spending a lot of time at the coal face, coding. From spikes, POCs and analysis to release activity, coding opportunities had been few and far between. Thankfully this changed over the last couple of weeks.

As part of the Payments Platform we have been given the opportunity to chip away at some of the Legacy processes to provide more flexibility in our Payments offering. Part of this process sees us hoping to consolidate some existing logic to manage the integration with a 3rd-party Payment Solution Provider more flexibly - specifically around handling particular types of payment differently based on certain criteria, such as a customer's billing country, the currency they are paying with and the card type they are using.

Stuck

In essence the work we are carrying out is a redesign and refactor of some logic which is spread out in a number of places. Bringing it all together has been both interesting and frustrating.

Maybe it's because of the lack of practice, but I couldn't quite nail a solution (to a part of the problem) until about a week after starting the work.

I'd started out test-first, using TDD to drive out an initial solution which I was sort of happy with, and then refactored, went a bit mad, duplicated loads of tests and lost the essence of a simple solution. As I wasn't happy, I made a V2 of my work, left all of what had become sociable tests in place and started again. I ended up with real clarity, no over-engineering and a solution I am really happy with.

The essence of the problem

When filtering data there are a number of options available.  


Typically, if the data is stored in SQL (perhaps relationally, but not always), querying and filtering can easily be performed in a stored procedure (or data layer) - albeit at the cost of having business logic in the DB (usually a pretty poor choice), amongst other trade-offs.

Conversely, a denormalised bit of data stored in a document store (or, again, in SQL) can be retrieved and then worked on in memory, up in the vernacular of the ubiquitous language (UL) of a domain, using LINQ - giving a nice opportunity to easily test behaviour and nail requirements around a business-critical process.

The existing solution we are augmenting used some pretty meaty LINQ queries. Arguably, for some, this may suffice and do the job, but due to the number of things being filtered on and the various conditions, some pretty "all-encompassing" LINQ Where statements were being used, annotated with comments. Coupled with that, there were no tests for some of the behaviour, and some of it seemed coincidental.
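To give a flavour of the kind of thing (this is invented for illustration, not the real code):

    // AMEX is only enabled for UK billing addresses (see comments on the original ticket)
    var method = paymentMethods.Where(pm =>
            pm != null
            && pm.Currency != null && pm.Currency == order.Currency
            // a null country means "any country", apparently
            && (pm.BillingCountry == null || pm.BillingCountry == order.BillingCountry)
            && (pm.CardType != "AMEX" || order.BillingCountry == "GB"))
        .FirstOrDefault();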

The comments alone are enough to make you question the clarity of a design and are well documented as a marker of a code smell in a lot of situations (but not all of course).

My colleagues and I noticed there were lots of null checks thrown in too. We all thought: we can do better than this! I lobbied for this task and was certain I'd knock something out quickly. I struggled!

My first approach saw me using the decorator pattern - or at least a bastardised version of it. The basic concept is illustrated below: the approach can be visualised as moving data through a pipeline of filters, each filter stateless and all of them working toward refining the data to a reduced set. I talked about this kind of approach when mentioning folding in an earlier post; it is a functional programming technique.

[Diagram: a seed set of data (1) passes through a pipeline of stateless filters (2, 3), each one refining the set further]

In code my first attempt was facilitated by the code below: 
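Something along these lines - a sketch, with illustrative type and property names rather than the production code:

    using System.Collections.Generic;
    using System.Linq;

    public class PaymentMethod
    {
        public string BillingCountry { get; set; }
        public string Currency { get; set; }
        public string CardType { get; set; }
    }

    public interface IPaymentMethodFilter
    {
        IEnumerable<PaymentMethod> Execute(
            IEnumerable<PaymentMethod> seed,
            string billingCountry,
            string currency,
            string cardType);
    }

    public class CurrencyFilter : IPaymentMethodFilter
    {
        private readonly IPaymentMethodFilter _next; // the chained filter - a dependency

        public CurrencyFilter(IPaymentMethodFilter next)
        {
            _next = next;
        }

        public IEnumerable<PaymentMethod> Execute(
            IEnumerable<PaymentMethod> seed,
            string billingCountry,
            string currency,
            string cardType)
        {
            // apply this filter's refinement...
            var filtered = seed.Where(pm => pm.Currency == currency);

            // ...then hand the result on to the next filter, if one is configured
            return _next == null
                ? filtered
                : _next.Execute(filtered, billingCountry, currency, cardType);
        }
    }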

In the snippet above I have an interface that takes the seed collection (the original query) in the first argument - only the first filter in the chain would work on this.

The other arguments are values which are needed to drive the filtering; all of this information has to be passed down so that each filter can use what it needs. This could be simplified by applying Extract/Introduce Parameter Object to the parameters.

Subsequent filters just work on the results of previous filters, each having its Execute method invoked within the previous filter's Execute method - the other filter being a constructor dependency, of course.

Only one filter implementation is shown here, the currency filter. Its Execute method, dictated by the interface, applies its filtering and returns the result for the next filter to do its thing (if there is one configured).


Each filter is stateless; in the end, after applying a number of filters, an input set is reduced to just one matching value based on the various criteria.

Setting up a "chain" of filters would look like:
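Again illustrative, assuming CountryFilter and CardTypeFilter implementations along the same lines as the currency one:

    // the innermost filter runs last; passing null ends the chain
    var chain = new CountryFilter(
        new CurrencyFilter(
            new CardTypeFilter(null)));

    var matches = chain.Execute(paymentMethods, "GB", "GBP", "VISA");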


Now, this worked, but in review my colleague pointed out that the fact I'd gone out of my way to encapsulate the filters as separate classes rather indicated they should be usable as standalone, autonomous bits of logic - and that is exactly what he wanted to do, as he had some similar filtering which required only some of the filters I'd made. Fair enough.

With a few tweaks this would work. The problem was that, with the next filter as a constructor dependency, things would break if composed in the wrong way unless I added some kind of Null Object pattern or specific null-checking to handle setting up the chain of filters - for example, if he wanted to use a filter but didn't want to chain a specific child filter.

As well as a set of scenario-driven tests for the domain service, I'd written tests for each filter and in so doing duplicated lots of tests. I'd not quite implemented what I desired.

I started looking at Haskell list comprehensions, thinking about a functional way to tackle the problem, and came across a post on S/O which was exactly what I was looking for. In essence I could use a left fold and have my filters called one after another via the LINQ extension method Aggregate, which - like my solution - takes a seed but - unlike my solution - handles 'accumulating', or keeping track of, the result of applying the filters without the need to hand-crank any other classes. Composition looks like the snippet below: a list of functions is used to impose the filtering, with the heavy lifting done by a bit of LINQ in the snippet after that.
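Roughly like this, reusing the illustrative PaymentMethod type from the sketch above:

    // the values driving the filtering; illustrative, as before
    var billingCountry = "GB";
    var currency = "GBP";
    var cardType = "VISA";

    // each entry refines the set handed to it
    var filters = new List<Func<IEnumerable<PaymentMethod>, IEnumerable<PaymentMethod>>>
    {
        pms => pms.Where(pm => pm.BillingCountry == billingCountry),
        pms => pms.Where(pm => pm.Currency == currency),
        pms => pms.Where(pm => pm.CardType == cardType)
    };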

Below, the Aggregate function is called with a seed (our initial data to be filtered - number 1 in the diagram above) and a lambda (or anonymous function) representing the accumulator, which applies each of the functions in the filters list (numbers 2 and 3 in the diagram above).
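Again a sketch with illustrative names:

    // continuing from the previous snippet: fold the filters over the seed data
    IEnumerable<PaymentMethod> result = filters.Aggregate(
        paymentMethods.AsEnumerable(),
        (current, filter) => filter(current));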



It then applies each of the methods composed above in turn, running each filter and passing its result, via the accumulator, to the next filter - eventually pushing out a much more refined result adhering to the nip and tuck that the filters imposed along the way.

Summary

I really like the elegance of this solution. Yes, it is simple (you may wonder why so many lines were devoted to something that might not even get a second glance on whatever you are working on), and there are yet simpler ways to tackle our problem - the filters could just be looped over, for example, or we could revert to clunkier LINQ queries. It just struck me as, erm, nice. You may well disagree; if so, please let me know. In the meantime I might do a V3 and just filter down in the persistence layer :)

 








Thursday, 31 March 2016

Book Review: The Software Craftsman: Professionalism, Pragmatism, Pride

Spur

There were a few things that made me read this: 
  • I saw it lying on my dev manager's desk
  • We have a coach from the company that the author co-founded in our place at the moment
  • I was cynical about the software craftsmanship idea and thought it was about self-serving, elitist, pumped-up self-publicists
Inspirational
 
I'm not a soppy or effusive person, but I have to say this book really resonated with me and, like a really good rallying call, made me feel positive about my current position, about how to sustain and develop my career further, and about just how I might achieve this. The thing is, I knew most of it already but just hadn't bundled the virtues, thoughts and values described under a particular banner.


The Software Craftsman Manifesto


It actually exists! A set of values on which a software craftsman can hang their hat. Now, as the author says, there are people who don't like to call themselves craftsmen, and I am one of them, but the ideas of professionalism, adding value and trying to do a good job are things I already embody and value in my peers. It's good to see that in the UK there is a large community of people who are interested in the same things, the LSCC being one of these groups. I need to get to one of these groups locally - are there any in Brum?


Not about TDD or XP

The book isn't necessarily about TDD or XP, but they are advocated as, currently, the best ways of helping produce quality, valuable software. In fact they are mentioned an awful lot - but you will need to buy a different book for advice on the specific technical practices.

What to do

I, like the author, often hear people say that the company they are working at is not investing in them and propelling them forward in their career. Sandro pulls no punches in saying that this responsibility lies squarely at your own feet, and I completely agree. Professionals in other industries have to stay current and continue a lifelong journey of mastery - why should devs be any different?

From reading books to attending meetups to pet projects to blogging,  a number of strategies are discussed to help you help yourself.  

This is probably obvious to most people, but doing it is different from talking or knowing about what you need to do. In the first part of my career - after leaving Uni - I didn't invest enough time in learning new stuff and keeping up to date with what is current (it's a lot easier now, and I'm lazy), but I'd be further ahead if I'd maintained my out-of-work learning after Uni. Don't get me wrong, I was still working hard and wasn't a 9-to-5'er, but my working life might have been a bit easier if I'd supplemented it with stuff I'd learnt, so that I could lead and not just be led.

Identifying your peers

There's a great section on the different personality traits of the people you will work with, from the "fanboy" to the "irrational" to the "cynic". Discussing how to deal with different people, and knowing that other people have identified them too, certainly makes you realise that you weren't the only frustrated person. There is some great advice about being humble and open to ideas, a quality some peers lack; hopefully when they read his hallowed words they will reflect and be a bit more open. Also, "fanboys" should take stock and just "chill the fuck out". Yes, I'm sure the tech you learnt about last night is great, I just don't want it rammed down my throat - again....


Interviewing/Being interviewed/Career choice


Yeah, tips on interviewing may seem a bit out of place, but good advice on interviewing your prospective employers is indispensable. Also, advice on owning your career, and knowing what you want and when you want it, vindicates some of the choices you have had to make - where more money was necessary, say, or opting for a slightly less hectic pace when building a family.

Summary

I'm certain I've missed some pivotal parts of the book out of this review - my short-term memory is terrible (and I promise to pay them their dues next time I pick up the book) - but I am left in no doubt that this is an excellent book. If you are disillusioned it may reignite your hunger; if you are looking at what you need to do to help you progress, it has sage advice. I'd almost label it a developer's self-help book - I certainly wouldn't have bought it if it had a self-help title - but it makes me think I might read a few more of these if they give you a spring in your step. Or is that just because it's Spring? Enjoy.



Wednesday, 30 March 2016

Book Study: Patterns, Principles and Practices of Domain-Driven Design

Book Study: As title

Summary

In summary, this is an excellent book. It is really accessible, but some of the more advanced areas will need further reading elsewhere - things like Aggregate design, for example.

If you are involved in enterprise application development and you haven't yet been able to take advantage of the Domain-Driven Design (DDD) philosophy, this book will show you the way. I know this because it is almost a narrative of my experiences of the philosophy at a large e-commerce website, in terms of the problems experienced, practices adopted, conversations with the business and stakeholders, patterns used, misused, abused and not used, and every conversation about everything in DDD in between.

Go Compare

I previously read the first 3 chapters of Vaughn Vernon's Implementing Domain-Driven Design and ran out of steam for various reasons (the birth of my second son primary among them). It's not that it isn't a good book (I refer to it later on, and to why I think this is the case), but it felt a little formal and dry on occasion and, I'm sorry to say, the cowboy logic comic strips both confused me and grated on me; maybe you have to be American to understand the jokes.

Keen to learn more about DDD, and on the recommendation of a friend, I'd picked up Patterns, Principles and Practices of Domain-Driven Design (PPPDDD - I know!!!) after seeing an example of using Atom feeds and HTTP to provide a reactive messaging solution as an alternative to using a message bus. This was a great antidote to having a particular service bus, which will remain nameless, rammed down my throat for the last 2 years as the way to "do asynchronous" messaging.

The language and concrete examples in that section seemed a lot more accessible than some of the stuff in Vaughn's book, so I decided to study it from cover to cover - doing as many of the code examples from scratch as I could, reading all of the material on strategic patterns and drawing out designs, before moving on to the less subjective tactical patterns and the other patterns this book offers an opinion on.

What followed was an epic 6-month journey where I ended up cross-referencing Vaughn's book, blogs and everything in between to try and get a better handle on DDD.

After all this, I am only a bit wiser about DDD. Although I can now hold my own in a conversation on the subject, suggest a few patterns and decomposition techniques, and identify domains, sub-domains and bounded contexts, it is clear that it will take me many years, and experience of certain situations, to be truly conversant and comfortable with DDD. This book is only going to get you some of the way; actual real-world experience will have to back up what is learnt here, otherwise you will lose what you have learnt.

This might be OK - you might never need to do context mapping at your current job, with all of the seams, domains and sub-domains identified already - but having a deeper insight into what affects this kind of arrangement, as well as many other aspects, is key to having the ammunition to engage with the business and help drive solutions to new problems that emerge in your business's domain.

The Book

There is so much stuff in this book that I started writing notes and ended up taking about 100 pages of them, as well as creating about 5 solutions and posting gists of useful code examples to pinch for future use. I started this post the same day I wrote my post about Twitter in October last year; it's taken me an age.

There are 4 main parts covering:

  • Part 1 - The Principles and Practices of Domain Driven Design
  • Part 2 - Strategic Patterns: Communicating between bounded contexts
  • Part 3 - Tactical Patterns: Creating effective domain models
  • Part 4 - Design patterns for effective applications
I will discuss a few of the highlights of each part without going into too much detail; because of this I will be doing some of the book a disservice, so if reading this leaves you asking questions (which it probably should) then buy the book!

Part 1 - The Principles and Practices of Domain Driven Design

The main thrust of this part is DDD as a philosophy: what it is and isn't (it isn't just about tactical patterns), and when and where DDD should and shouldn't be applied.

This part clearly spells out what things like the Ubiquitous Language are, how to decompose complex problems into more manageable ones, getting buy-in from the business, collaboration, context mapping and what a bounded context is, to name but a few things. The very essence of what DDD is, is covered, and you won't be left scratching your head afterwards.


This part of the book is very insightful and, as a developer, I was initially tempted to skip past large parts of it, but doing so - it quickly became clear - would mean missing the point of DDD. I would recommend you don't skip it either if you want to start to grasp DDD properly.

There is so much here which resonates in terms of what engaging with stakeholders is like: advice on managing the expectations of stakeholders, on stopping them jumping to solutions, on keeping them focused on the problems in the domain and in the language of the domain, and on things like mapping a problem to a model of the problem. The breakdown of what a core domain, domain, sub-domain and bounded context are is indispensable too. I personally found decomposition hard to understand before I read this, and the chapters guide you through it in a very clear manner.

Suffice it to say, I don't agree with everything here though; some of the bounded context modelling is not necessarily how I would model bounded contexts, as some of the naming relies on the domain messages they receive, but the decomposition aspect of the exercise is the main point.

There is lots of ground to cover and, if I am being honest, I would have to refer back to my notes (or the book) to help me remember large parts of this section in detail - and herein lies a problem with a book of this magnitude: it's really not possible to retain all of its contents at a high level of detail, so I'd recommend treating it as a reference depending on your needs in a project.

Part 2 - Strategic Patterns: Communicating between bounded contexts 

As a developer, the real fun starts here. After a lot of scene setting and ideological imparting in the first part, this part focuses on examples of getting bits (bounded contexts) of a domain (or other sub-domains) talking to each other. Using NServiceBus, EventStore and Atom feeds for various types of communication between bounded contexts, the examples are clear and really help bring to life how reactive messaging and RPC can be used to create solutions that communicate effectively. Reading this you may wonder why RPC would even be a consideration these days, but it was an eye-opener to see that it is still a worthy one. If anything, this book helps you keep an open mind, and you can practise your "it depends" comeback when talking to colleagues about potential solutions to problems.

A small gripe with this section is that HAL (Hypertext Application Language) is introduced as a basis for the messaging in the Atom feed event examples without a really good reason as to why it is introduced or used. Don't get me wrong, it seems like a useful implementation of HATEOAS, but it probably wasn't necessary for the examples.

Part 3 - Tactical Patterns: Creating Effective Domain Models

This was my favourite part of the book. As a developer it has stuff I can use in my day-to-day work, and indeed some of what I learnt here is out in the real world, shortly to be used by millions of customers (albeit in a small thing, but nevertheless). The examples around value types are great and, although I already had an understanding of what they were, again I was left with no ambiguity. Likewise, the clear distinction drawn between domain services and application services is excellent; these two concepts were a blur to me before but now stand distinguished, side by side. Event sourcing is described in detail and is as good a guide as any I have seen.

The bit I was looking forward to the most was the chapters on Aggregates. These are one of the fundamental tactical tenets of DDD and they are given thorough treatment here: from what an invariant is, to determining aggregate boundaries, to considerations on when and where to use immediate and eventual consistency. I must admit that I cross-referenced Vaughn's book here, because he does provide more insight and consideration. It is worth saying that when I come to design aggregates I will refer back to both books to give myself as much chance of modelling them correctly as possible.

Part 4 : Design Patterns for Effective Applications

The stand-out chapters of this section are those on CQRS. Again, this is an area I am pretty au fait with, but the book left me with no doubt about its applicability in certain situations, and it left me crying out as to where we could have adopted it in some of the work we applied DDD to recently. We didn't go the whole hog in generating a simple read model, and got a little confused between our write and read concerns.
Application and domain services are worked through well with good examples. However, one of my major gripes with some of the application service examples is the seemingly poor choice of aggregate design used to show application service co-ordination, which was a bit of a shame. The example involved a customer referral and saw an application service updating 2 separate aggregates in the same transaction, which is generally (but not always) advised against in chapters in the previous part of the book. I can see that the authors chose to really highlight co-ordination here, but I think a more expansive example of eventual and immediate consistency would have really helped, and would have meant I didn't have to go looking in Vaughn's book.
There are also chapters on reporting; by this time I was ready to put the book down, but I managed to see it through.

Summary
If you want an accessible book and a good grounding in DDD as a philosophy, I would highly recommend this one.
As far as I know, all the major angles of DDD are covered from both a design and an implementation point of view, with a nice collection of current and relevant technologies used along the way. The book comes more from a practitioner's point of view and is less formal in its description of the various aspects of DDD compared to, say, the blue or red books (Evans' and Vaughn's books respectively); the authors don't use the "this/that pattern (54)" style of calling out patterns, and some may consider it a little less highbrow than the other books. On that note, some of the more advanced things like Aggregates could probably do with more examples - as with all of these books, real-world examples are thin on the ground. The contrived SaaSOvation example in Vaughn's book doesn't map easily to some of the problems found in e-commerce web applications, but there is enough there as food for thought. If this post seems a bit weary, it is; I could barely muster the energy to regurgitate what I learnt after such a long haul. I think I'll have to revisit this post when I revise some of the material, to give the book the justice it deserves.
In contrast to this marathon study, I read The Software Craftsman by Sandro Mancuso in 2 days afterwards and felt inspired and pumped; it will be the subject of my next post. As one of my famous colleagues would say, "I better go and do some work".