Wednesday, 12 April 2017

Micro Post: Big Glass of Sand

Monolithic architecture is classified as a Big Ball of Mud.
Microservices architecture is therefore a Big Glass of Sand.

I said it first, right here.

Tuesday, 7 March 2017

Testing, Testing, 1.2.3.4.5.6...ad infinitum

Before testing...

I've now been in development/engineering/coding/hacking, or whatever you want to call it, for more than 10 years. Testing has always been important, but it - in my line of work - is now at the forefront of everything I do.

For the business this is great: every line of code, component and group of components is covered by some set of tests - be it unit, integration, component... the overloaded terms go on. This means that a set of requirements has - to some degree - been determined to have been met by these tests, and the business is getting what it wants (all manner of development and agile practices notwithstanding - they all get us to the same place).

It struck me today, though, that earlier in my career - when testing was still important but the lines were, perhaps, a little more blurred and things were a bit more hit and miss - things were less clinical, less contrived and, quite frankly, less boring. That is obviously not a good model for delivering quality software every time, but sometimes I yearn to just hack out some code, build it and experiment when things aren't working quite right. Yes, this can be done guided by tests and within the confines of the TDD cycle but, for me, not having come from a TDD beginning, it feels a little stifling. If I were developing a hobby project (not something I was contributing to on Git) I, dare I say it, would not necessarily use TDD. But make no mistake: at this time, developing quality software for my employers, I would not develop without TDD - it gives me confidence I never had in the past. Boring? Yes. Confidence inspiring? Yes.

That kind of work is now left for spiking or hobby projects, and it feels that - although the term is bandied around - software engineering really is coming of age.

Engineering, Not art/craft 

I often hear that software development is a craft or an art, and that it is creative. This may well be the case for UI-related disciplines, but business software is the geography teacher of practices: boring and wearing tweed. Once all the requirements are down, and how they are going to be met is sorted, that's it - software is driven out in a very prescriptive manner. The biggest debates that happen in my team are not religious wars about coding standards (we have static code analysis), nor religious wars about performance (we have resources to burn), nor religious wars about general design issues (we have SOLID and other guiding design guidelines). No - we have wars about how we are going to test something, and where. ATDD brings this war right to the inception of a story/feature (sorry, I often get these the wrong way around), and then we have a list of work to do and a checklist to say when it's done. This is all very organised and delivers value, but I'd never realised how boring it actually is when you break it down.

Summary

This might sound negative, and boring is generally not considered good, but it is a matter of perspective. Boring is good, as I said before, for the business. Now, I enjoy going to work and doing a good job, but I am wondering what it is I actually like doing anymore; maybe I have become boring and that's what I now like. That's what my Mrs thinks anyway...

Tuesday, 6 December 2016

1…2…3…Freeze…Peak



I haven't written anything for ages, as I've been the busiest I have ever been at my current employer - a large E-Commerce Website - as a result of preparation (planned and unplanned) for its largest ever "Peak" weekend, my company's name for the Black Friday weekend.

Sorry, this is a long read, so grab a coffee or skip to the Summary - or don't read it!

I've started writing this on the train coming back from London, where I was required to provide extra support for the Payments Platform (Domain) - where I ply my trade - a group of Microservices and Legacy components supporting Payment processing for the Website and sitting very close to the Orders Domain.

I expected to be bleary-eyed and wired to the hilt, full of coffee and adrenaline, as my colleagues and I worked like mad to pull out all the stops to make our giant, incredible machine work properly. But no: as our CIO put it, everything just worked (well, 99% of the time - more on this later).


The freeze part of the title refers to the fact that, in the weeks up to Peak, as we tried to ensure the stability of our software, only the most critical software releases were allowed out. I, like many of my peers, do not think this is great. It is costly in many ways: not least, all of the feature development still going on actively within the company is shelved and starts to gather dust, and not being released means the software is accruing debt on many fronts. However, I can't be too hasty in judgement; this way of working comes from something along the way biting my company really hard during this crucial trading period, and the mentality is understandable, if not immediately completely excusable. Hopefully this will get better as the company gains more maturity, confidence and control over its systems.

Application Insights

As part of the preparations it became clear that our services did not have sufficient monitoring capabilities. Services were performance tested in quasi-production-like environments, so we had some idea about how they might perform - but only an idea. We had some expectations about performance, but very little to help us see operational health. Measuring and keeping track of both would have been impossible without telemetry and logging of some sort.

We already had some monitoring available via dashboards powered by Grafana. A number of dashboards exist showing Order flow and, as a result, part of an Order's path through Payments, but only at a high level. With various counters representing the number of Orders at a particular status - e.g. arriving from the Website, currently being billed, shipped to the warehouse - this is used to help generalise the performance and health of the backend systems as Orders wing their way through the Website, through Order and Payment processing and much more, and then, eventually, out to the Warehouse(s).

If problems start occurring - some thresholds, represented by numbers and traffic-light boxes, being exceeded or not met - focussed investigations can take place in a particular area. The problem with this is that in between the different statuses there is a vast swathe of software services in locations all over the world: New-World Microservices, Legacy systems, stuff in the sky, Messaging, RESTful APIs, document DBs, SQL DBs, No-SQL DBs, Edge caching and hardware on premise. Application support have lots of fun trying to diagnose errors and, where they are at a loss, they hand over to the software development teams/developers, who mostly run around in blind panic at the same time.

More recently, with the release of updated versions of the Website (which is multi-platform/device), New Relic was enlisted to help see how the various APIs consumed by the Website were behaving, and this has helped to start to build a more fine-grained picture of the operational behaviour of the APIs (and, by association, the Website): various HTTP statuses can be obtained, and detailed analysis about response times, payload sizes and customer locations can be gleaned - but that's about it in terms of insight into what an API is doing. There is yet another monitoring tool called Kibana (sitting atop the ELK stack), which is used for some of our Azure based services; largely this gives transparency to telemetry just like New Relic. My money's on New Relic in this area though.

So performance counters, custom and otherwise, were introduced - conveniently placed at the door of our APIs, on the 3rd party calls within them, and in and around the messaging infrastructure used in the backend processing for our services. Relevant logging was added too, to ensure that in the event of failure or exceptional behaviour we had some trace of what was going on. Having all of this information, but not at your fingertips, is a bit of a nightmare.

Consolidating this all into one single place is achieved, to dramatic effect, using Application Insights hosted in Azure. Simply add the appropriate libraries to a service, hook it up to an Application Insights resource in Azure and boom - you are presented with a bewildering array of options for presenting your telemetry. Coupled with this, you can also leverage Analytics to perform queries on all of the data collected, and herein lies a problem: making sense of all of this information is hard. Certainly there are some headline acts that are easily noticeable, but more subtle problems - a semi-dodgy 3rd party call, or reasonably flaky database writes - can only be eked out after some considered tweaking of queries. Also, without reasonable SLAs from the business in most places, and given that a message queueing system is used, what alerts should be used, and when, is a question we are still asking ourselves.
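To give a flavour, hooking custom telemetry in from code is only a few lines. Below is a minimal sketch, assuming the Microsoft.ApplicationInsights NuGet package and a configured instrumentation key; the class and names are invented for illustration, not our production code:

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;

    public class PaymentGatewayClient
    {
        private readonly TelemetryClient _telemetry = new TelemetryClient();

        public void ChargeCard(string orderId)
        {
            var timer = Stopwatch.StartNew();
            var success = false;
            try
            {
                // ... the 3rd party call would go here ...
                success = true;
            }
            finally
            {
                timer.Stop();
                // Surfaces under Dependencies in the portal, queryable via Analytics.
                _telemetry.TrackDependency("PSP", "ChargeCard",
                    DateTimeOffset.UtcNow - timer.Elapsed, timer.Elapsed, success);
                _telemetry.TrackEvent("PaymentAttempted");
            }
        }
    }

The dependency and event then turn up in the dependencies and customEvents tables in Analytics, which is where the considered tweaking of queries mentioned above happens.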

Long nights and Instability

Getting to a good place means a lot of pain needs to be had first, and boy did we experience our fair share up to Peak. With SAN migrations causing mayhem with our messaging, Windows Cluster failovers misbehaving and deleted Orders in Azure, there was plenty of time to practice and gain experience in trying to solve problems quickly while still meeting "customer promise" - a term used to describe meeting the cutoff for various delivery options, and one not taken lightly.

Graph Watching: First Blood Part III

So the culmination of all of the Black Friday prep was a series of graph-watching sessions, with eyes trained on graphs rolling around in multi-tabbed browsers and lots of suggestions and conjecture about what this trend or that trend was saying. I was personally involved in 4 of the 5 days of Peak support: remotely on Thursday, Black Friday and some of Saturday, and on premise (by demand) on Cyber Monday. Certainly, at the beginning of the event focus was very high and everyone was apprehensive about what might occur and what problems might need to be addressed. Previous years' Peaks had seen all manner of catastrophes, including site outages, among other problems.


   
The biggest problem this year, then, from a Payments perspective, was that a Legacy component dealing with timeouts from a Payment Service Provider couldn't reliably recover and compensate for them, so Orders would be left in a state where we didn't know whether they had been billed. Simply retrying the Orders meant risking a cancellation occurring, as the legacy component is pretty dumb, so a quick support tool was knocked up and, like a scene from Lost, a button was pushed every so often to make sure these were processed properly.

The highlight, though, is that everything generally worked. On Cyber Monday, due to a failure with a voucher service, Orders dropped off a cliff. Fixing the problem and spreading the word to our persistent and patient customers meant that between 9-10pm an average of 55 orders a second were being processed, all of which would land on Payments' doorstep. The system coped admirably, and a backlog of messages - up to about 40k at one point, spread across our system - was slowly but surely dealt with.

Summary

The Payments Platform now has a lot of data regarding its capabilities during spiky load periods, a greater operational understanding, and has breathed a big sigh of relief having been able to support Black Friday. Thing is, it's going to be even bigger next year, and we are just about to start preparations to make our Platform even more resilient, scalable and available. Looking forward to it. We will have a lot more control of our systems come this time next year, and the maturity and experience to put what we have learnt to good use.

Up next... Reactive Extensions, or maybe even HAL (Hypertext Application Language)




Sunday, 5 June 2016

Learning to TDD is like learning to drive a car

It's true

I'm sure this comparison has been made before, but I'll have to tire it out a bit more. When I was learning to drive I was preoccupied with where to put the gear stick (and where it actually was), when to use the clutch, and so on and so forth - and I was flustered all the time.

But, quite quickly, with continued practice and with growing familiarity and confidence with the car, I could think about where I wanted to go instead of every detail of what I needed to do to get there - and now I drive like Lewis Hamilton.

The parallels between learning to drive a car and learning to test drive application development are striking.

Before I could start to appreciate the nuances of TDD and be able to decide what works for me (for example, choosing when to use classic and mockist TDD), I had to become comfortable with some pretty basic things: a good refactoring tool (ReSharper), a good test runner and unit testing framework (NUnit), and a good isolation (mocking) library (Moq). All of these tools have their proponents and detractors, but this is what I use for the majority of the TDD I do at the moment; the tools could change, but the principles won't.

I've been TDD'ing - or at least trying hard to TDD - over the last couple of years, and it has been a grind.

I've read TDD by Example by Kent Beck about 10 times, watched plenty of videos and read plenty of blogs, and - in spite of being told that practice makes perfect - did not practice anywhere near enough. I just didn't have the opportunities in work time alone to do this (but that's another story - and that is in spite of being at a TDD-first org).

Taking my lack of practice by the horns after reading Sandro Mancuso's book, The Software Craftsman, and coupling it with Roy Osherove's string calculator kata, Growing Object-Oriented Software, Guided by Tests (GOOS) by Steve Freeman and Nat Pryce, Mocks Aren't Stubs by Fowler, and Sandi Metz's take on what and what not to test, a few things changed and things have, seemingly, finally clicked.

TDD: Bigger than the sum of its parts

Using the tools I mentioned above will get you writing unit tests pretty quickly, but not TDD'ing. Using the tools and TDD'ing will get you TDD'ing, but most likely in a haphazard manner.

Firstly, TDD'ing at all requires confidence in the tools mentioned above, and not worrying or having to think about how to use them - it's not that they are extremely complicated to use, but things can get bloody confusing. Using Moq, we can create Mock objects which act as Stubs, or we can use Mock objects to verify behaviour with them. Of course, this can be done without Moq, but it is great at creating Mocks and Stubs, quickly.
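To illustrate the distinction, here is a quick sketch - IPaymentGateway and CheckoutService are invented for this post, not code from work:

    using Moq;
    using NUnit.Framework;

    public interface IPaymentGateway
    {
        bool Charge(decimal amount);
    }

    public class CheckoutService
    {
        private readonly IPaymentGateway _gateway;
        public CheckoutService(IPaymentGateway gateway) { _gateway = gateway; }
        public bool PlaceOrder(decimal amount) { return _gateway.Charge(amount); }
    }

    [TestFixture]
    public class CheckoutServiceTests
    {
        [Test]
        public void Used_as_a_stub_it_feeds_the_sut_canned_answers()
        {
            var gateway = new Mock<IPaymentGateway>();
            gateway.Setup(g => g.Charge(It.IsAny<decimal>())).Returns(true);

            Assert.That(new CheckoutService(gateway.Object).PlaceOrder(9.99m), Is.True);
        }

        [Test]
        public void Used_as_a_mock_the_assertion_is_the_interaction_itself()
        {
            var gateway = new Mock<IPaymentGateway>();

            new CheckoutService(gateway.Object).PlaceOrder(9.99m);

            gateway.Verify(g => g.Charge(9.99m), Times.Once());
        }
    }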

Using a good refactoring tool really helps speed things up too, but - again - to use one fairly well there is a bit of a learning curve around the useful shortcuts; and once that is done, when to pull objects into existence, and when they should be Mocked or Stubbed, still needs to be learnt.

So we have the tools, and we need to learn how to use them, when to use them and why to use them. This is a lot to learn and then apply. However, when all of it starts to fall into place, wonderful things start to happen, and the productive ebb and flow of TDD, which before was likely staccato, comes to the fore.

With this - and something I forgot to mention - pairing with people when TDD'ing loses the scare factor. Worrying about how to use the tools is a bit of a disadvantage and can disrupt the flow of pairing; it can make you feel pretty stupid if your pair is already pretty comfortable, even if you are far from it. Nobody likes to appear a novice - not knowing how to find a gear, or the accelerator, if we're comparing it to driving.

So the upshot of having the prerequisites in place is that shaping applications - driving the design by pulling in collaborators (either by Mocking or Stubbing them) with a quick refactoring shortcut - can all be done under the TDD cycle, with confidence, and very nearly without a thought as to how you are using the tools to get to your end goal; you are just using them as a means to an end. This is a really fulfilling feeling.

Summary

I am writing about my experiences with TDD because I've struggled with it, and watching people around me get it while not having that same Eureka moment (or moments) added to my anxiety, stress and worry that I would never be able to TDD effectively. But I think I am just about there: I've always liked TDD'ing, and I now think that I can do it pretty well. On that note, if other people are having trouble with it, stick with it - practice a bit more, and read a bit more too, making sure you are reading the right things; some things may only make sense once you have some of the other fundamentals boxed off.

Happy TDD'ing.



Monday, 25 April 2016

Newline character is 1 character

I will never forget the above

When you're using Substring and you can't work out why "\n" is not 2 characters, refer back to this title. Does it make me look stupid? Yes. Hopefully I will work it out sooner next time. To be fair to myself, the escape sequence reads as two characters even though it is only one.
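A quick sanity check in C#, for the next time I forget:

    using System;

    class NewlineLength
    {
        static void Main()
        {
            Console.WriteLine("\n".Length);         // 1 - the escape sequence is a single character
            Console.WriteLine("a\nb".Substring(2)); // "b" - index 2 skips the 'a' and the newline
        }
    }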

That is all. 


Tuesday, 19 April 2016

A Beautiful Solution?

^^Perhaps^^

For various reasons my team and I hadn't been spending a lot of time at the coal face, coding. From spikes, POCs and analysis to release activity, coding opportunities have been few and far between. Thankfully this changed over the last couple of weeks.

As part of the Payments Platform we have been given the opportunity to chip away at some of the Legacy processes to provide more flexibility in our Payments offering. Part of this sees us hoping to consolidate some existing logic to manage the integration with a 3rd party Payment Solution Provider more flexibly - specifically around handling particular types of payment, which are treated differently based on certain criteria such as a customer's Billing country, what currency they are paying with and the card type they are using.

Stuck

In essence, the work we are carrying out is a redesign and refactor of some logic which is spread out in a number of places. Bringing it all in has been both interesting and frustrating.

Maybe it's because of the lack of practice, but I couldn't quite nail a solution (to a part of the problem) until about a week after starting the work.

I'd started out test first, using TDD to drive out an initial solution which I was sort of happy with, and then refactored - went a bit mad - duplicated loads of tests, and lost the essence of a simple solution. As I wasn't happy, I made a V2 of my work, left all of what had become sociable tests in place, and started again. I ended up with real clarity, no over-engineering and a solution I am really happy with.

The essence of the problem

When filtering data there are a number of options available.  


Typically, if the data is stored in SQL (perhaps relationally - but not always), querying and filtering can easily be performed in a stored procedure (or data layer), albeit at the cost of having business logic in the DB (usually a pretty poor choice), amongst other trade-offs.

Conversely, a denormalised bit of data stored in a document store or - again - in SQL can be retrieved and then worked on in memory, up in the vernacular of the UL (Ubiquitous Language) of a domain, using LINQ - giving a nice opportunity to easily test behaviour and nail requirements around a business-critical process.

The existing solution we are augmenting used some pretty meaty LINQ queries. Now, arguably, for some this may suffice and do the job, but due to the number of things to filter on, and the various conditions, some pretty "all encompassing" LINQ 'Where' statements were being used, annotated with comments. Coupled with this, there were no tests for some of the behaviour - some of it seemed coincidental.

The comments alone are enough to make you question the clarity of a design; they are well documented as a marker of a code smell in a lot of situations (but not all, of course).

My colleagues and I also noticed there were lots of null checks thrown in too. We all thought: we can do better than this! I lobbied for this task and was certain I'd knock something out quickly. I struggled!
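To give a feel for the starting point, the old logic amounted to something like the sketch below. This is illustrative, not the real query - PaymentOption and its properties are invented for this post and reused in the later snippets:

    using System.Collections.Generic;
    using System.Linq;

    public class PaymentOption
    {
        public string Currency { get; set; }
        public string BillingCountry { get; set; } // null meaning "any country"
        public string CardType { get; set; }
    }

    public static class LegacyOptionSelector
    {
        // The sort of all-encompassing, comment-laden 'Where' the old code leaned on.
        public static PaymentOption Select(
            IEnumerable<PaymentOption> options, string currency, string country, string cardType)
        {
            return options.Where(o =>
                    o != null &&                                                 // defensive null check
                    o.Currency == currency &&                                    // must match the payment currency
                    (o.BillingCountry == null || o.BillingCountry == country) && // null = any billing country
                    o.CardType == cardType)                                      // and the card scheme
                .FirstOrDefault();
        }
    }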

My first approach saw me using the decorator pattern, or at least a bastardised version of it. The basic concept is illustrated below. In fact, the approach can be visualised as moving some data through a pipeline of filters, each filter being stateless and all working toward refining some data to a reduced set. I talked about this kind of approach when mentioning folding in an earlier post; it is a functional programming technique.

[Diagram: data moving through a pipeline of filters - the seed set (1) enters, and each stateless filter (2, 3) whittles it down to a smaller set.]
In code my first attempt was facilitated by the code below: 
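Something along these lines (a sketch with invented names; PaymentOption comes from the earlier snippet):

    using System.Collections.Generic;
    using System.Linq;

    public interface IPaymentOptionFilter
    {
        // The seed collection goes in the first argument; the remaining arguments
        // are the values that drive the filtering.
        IEnumerable<PaymentOption> Execute(
            IEnumerable<PaymentOption> seed, string currency, string country, string cardType);
    }

    public class CurrencyFilter : IPaymentOptionFilter
    {
        private readonly IPaymentOptionFilter _inner; // the chained filter is a dependency

        public CurrencyFilter(IPaymentOptionFilter inner)
        {
            _inner = inner;
        }

        public IEnumerable<PaymentOption> Execute(
            IEnumerable<PaymentOption> seed, string currency, string country, string cardType)
        {
            // Invoke the chained filter's Execute first (if one is configured),
            // then apply this filter's own refinement to its result.
            var input = _inner == null
                ? seed
                : _inner.Execute(seed, currency, country, cardType);

            return input.Where(o => o.Currency == currency);
        }
    }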

In the snippet above I have an interface whose Execute method takes the seed collection (the original query) in its first argument - only the first filter in the chain would work on this.

The other arguments are the values needed to drive the filtering; all of this information has to be passed down so that each filter can use what it needs. This could be simplified by applying Extract/Introduce Parameter Object to the parameters.

Subsequent filters just work on the results of previous filters, whose Execute methods are invoked within a filter's own Execute method - the other filter being a dependency, of course.

Only one filter implementation is shown here: the currency filter. Its Execute method, dictated by the interface, applies any filtering and returns the result for the next filter to do its thing (if there is one configured).


Each filter is stateless; in the end, after applying a number of filters, an input set is reduced to just one matching value based on the various criteria.

Setting up a "chain" of filters would look like:
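Something like this sketch - BillingCountryFilter and CardTypeFilter being implementations along the same lines as the CurrencyFilter above:

    // The innermost filter is the first to work on the seed; each wrapper
    // then refines the result of the filter it was constructed with.
    IPaymentOptionFilter chain =
        new CardTypeFilter(
            new BillingCountryFilter(
                new CurrencyFilter(null))); // null marks the end of the chain

    // allOptions being the seed IEnumerable<PaymentOption>
    var matches = chain.Execute(allOptions, "GBP", "GB", "VISA");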


Now, this worked, but on review my colleague pointed out that the fact I'd gone out of my way to encapsulate the filters as separate classes kind of indicated that they should be usable as standalone, autonomous bits of logic - and that is exactly what he wanted to do, as he had some similar filtering which only required some of the filters I'd made. Fair enough.

With a few tweaks this would work. The problem was that, with the dependency on the next filter up in the constructor, and without some kind of null object pattern or specific handling to check for nulls or to set up the chain of filters, things would break if composed in the wrong way - if, for example, he wanted to use a filter but didn't want to chain a specific child filter.
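For completeness, the null object version would be a terminal filter that simply hands the set straight back - a sketch:

    public class NullFilter : IPaymentOptionFilter
    {
        public IEnumerable<PaymentOption> Execute(
            IEnumerable<PaymentOption> seed, string currency, string country, string cardType)
        {
            return seed; // end of the chain: pass the set through untouched
        }
    }

New-ing up a NullFilter instead of passing null means a filter can always delegate without checking, and a partial chain can't be composed in a way that breaks.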

As well as a set of scenario-driven tests for the domain service, I'd written tests for each filter, and in so doing duplicated lots of tests. I'd not quite implemented what I desired.

I started looking at Haskell list comprehensions and thinking about a functional way to tackle the problem, and came across a post on S/O which was exactly what I was looking for. In essence, I could use a left fold and have my filters called one after another via a LINQ extension method called Aggregate, which - like my solution - takes a seed but - unlike my solution - handles 'accumulating', or keeping track of, the result of applying the filters without the need to hand-crank any other classes. Composition looks like below: a list of methods is used to impose the filtering, with the heavy lifting done by a bit of LINQ, as we can see in the next snippet.

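A sketch in the same shape, reusing PaymentOption from earlier:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class PaymentOptionFilters
    {
        // Each filter is now just a function from one set of options to a (smaller) set -
        // no classes to hand-crank, no chain to wire up.
        public static List<Func<IEnumerable<PaymentOption>, IEnumerable<PaymentOption>>> Compose(
            string currency, string country, string cardType)
        {
            return new List<Func<IEnumerable<PaymentOption>, IEnumerable<PaymentOption>>>
            {
                options => options.Where(o => o.Currency == currency),
                options => options.Where(o => o.BillingCountry == null || o.BillingCountry == country),
                options => options.Where(o => o.CardType == cardType)
            };
        }
    }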
Below, the Aggregate function is called. It takes a seed (our initial data to be filtered - number 1 in the diagram above) and a lambda (or anonymous function) representing an accumulator method, which applies the methods in the filters list (numbers 2 and 3 in the diagram above).



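Continuing the sketch:

    // The left fold: the seed goes in, and the accumulator hands each filter
    // the result of the one before it.
    // (seed being the initial IEnumerable<PaymentOption> to be filtered)
    var filters = PaymentOptionFilters.Compose("GBP", "GB", "VISA");
    var result = filters.Aggregate(seed, (current, filter) => filter(current));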
It then applies each of the methods composed above in turn, running each filter and passing its result, via the accumulator, to the next filter - eventually pushing out a much more refined result, adhering to the nip and tuck that the filters imposed along the way.

Summary

I really like the elegance of this solution. Yes, it is simple (you may wonder why so many lines were devoted to something which might not even have been given a second glance on whatever you are working on), and there are yet simpler ways to tackle our problem - the filters could just be looped over, for example, or we could even revert to clunkier LINQ queries. It just struck me as, erm, nice. You may well disagree. If so, please let me know; in the meantime I might do a V3 and just filter down in the persistence layer :)


Thursday, 31 March 2016

Book Review: The Software Craftsman: Professionalism, Pragmatism, Pride

Spur

There were a few things that made me read this: 
  • I saw it lying on my dev manager's desk
  • We have a coach in our place at the moment from the company that the author co-founded
  • I was cynical about the software craftsman idea and thought it was about self-serving, elitist, pumped-up self-publicists

Inspirational
 
I'm not a soppy or effusive person, but I have to say this book really resonated with me and, like a really good rallying call, made me feel positive about my current position, how to sustain and develop my career further, and just how I might achieve this. The thing is, I knew most of this already, but just hadn't bundled the virtues, thoughts and values described under a particular banner.


The Software Craftsman Manifesto


It actually exists! A set of values on which a software craftsman can hang their hat. Now, like the author says, there are people who don't like to call themselves craftsmen, and I am one of them, but the ideas of professionalism, adding value and trying to do a good job are things I already embody and value in my peers. It's good to see that in the UK there is a large community of people who are interested in the same things, the LSCC (London Software Craftsmanship Community) being one of these groups. I need to get to one of these groups locally - any of these in Brum?


Not about TDD or XP

The book isn't necessarily about TDD or XP, but they are advocated as, currently, the best ways of helping to produce quality, valuable software. In fact they are mentioned an awful lot - but you will need to buy a different book for advice on the specific technical practices.

What to do

I, like the author, often hear people say that the company they work at is not investing in them and propelling them forward in their career. Sandro pulls no punches in saying that this responsibility lies squarely at your own feet, and I completely agree. Professionals in other industries have to stay current and continue a lifelong journey of mastery; why should devs be any different?

From reading books to attending meetups to pet projects to blogging,  a number of strategies are discussed to help you help yourself.  

This is probably obvious to most people, but doing it is different from talking or knowing about what you need to do. In the first part of my career - after leaving Uni - I didn't invest enough time in learning new stuff and keeping up to date with what is current (it's a lot easier now, and I'm lazy), but I'd be further ahead if I'd maintained my out-of-work learning after Uni. Don't get me wrong, I was still working hard and wasn't a 9-to-5'er, but my working life might have been a bit easier if I'd supplemented it with stuff I'd learnt, so that I could lead and not just be led.

Identifying your peers

There's a great section on the different personality traits of the people you will work with, from the "fanboy" to the "irrational" to the "cynic". Discussing how to deal with different people, and knowing that other people have identified them too, certainly makes you realise that you weren't the only frustrated person. There is some great advice about being humble and open to ideas - a quality some peers lack; hopefully when they read his hallowed words they will reflect and be a bit more open. Also, "fanboys" should take stock and just "chill the fuck out". Yes, I'm sure the tech you learnt about last night is great; I just don't want it rammed down my throat - again...


Interviewing/Being interviewed/Career choice


Yeah, tips on interviewing may seem a bit out of place, but good advice on interviewing your prospective employers is indispensable. Also, advice on owning your career, and knowing what you want and when you want it, vindicates some of the choices you have had to make - when more money was necessary, say, or opting for a slightly less hectic pace when building a family.

Summary

I'm certain I've missed some pivotal parts of the book out of this review - my short-term memory is terrible (and I promise to pay them their dues next time I pick up the book) - but I am left in no doubt that this is an excellent book. If you are disillusioned it may reignite your hunger; if you are looking at what you need to do to progress, it has sage advice. I'd almost label it a developer's self-help book - I certainly wouldn't have bought it if it had a self-help title - but it makes me think I might read a few more of these if they give you a spring in your step. Or is that just because it's Spring? Enjoy.