Behaviour-Driven Development (BDD)

Traditional software releases, agile or not, usually contain the following steps:

  1. Specification
  2. Development
  3. Testing

In a perfect world, the specification enables developers to write a program with the right functionality. The software would be written with no bugs and be delivered to the customer after successful testing.

Unfortunately, things go wrong:

  • communication problems due to the complexity of the problem
  • misunderstandings about what the customer asked for
  • implicit assumptions about desired behaviour – e.g. edge cases, performance, etc.
  • environmental issues, such as external systems not being ready, or test data being unavailable or unrealistic
  • we may not appreciate the size of the work until we are in the middle of it

The later we discover these problems, the costlier they are to solve.

Specifications, no matter how well documented and in how much detail, are too often lost in translation. And if developers and testers are not involved to suggest simpler alternatives, the solution can become unnecessarily expensive. Traditional system documentation is out of date almost as soon as it's written, whereas agile acceptance tests (specifications by example), when used as the basis of integration and unit tests, become your system documentation – and they stay up to date, or the tests fail.

As a feature is developed, changes very often arise. Customers may not require what they first thought they wanted. Often new features are invented by developers that may cover (in their minds) possible future requirements. So what usually happens is that the specification does not match what the program actually does. How then can we keep the spec in sync with the code?

Typically, the development-testing cycle is a bottleneck that can seriously delay a project. Testers are lumbered with testing an application they know little about. They have to pick away at original requirements that may not match what the application actually should do. This may mean repeating the same information in further documents used as test scripts.

BDD addresses some of the problems that TDD cannot solve on its own. It involves collaboration between developers, QA, business experts and the customer. The aim is to uncover incorrect assumptions and discover functional gaps before development starts.

The term was coined by Dan North in 2003 and has more recently become quite a mainstream idea. Around that time, the term ‘Agile Acceptance Testing’ had pretty much the same goals. However, when people hear the word Testing, it can create the impression of something done after coding that has nothing to do with requirements. In fact BDD is all about requirements, so the word Testing can lead to misunderstanding in this context. BDD prescribes a particular way of wording the expected behaviour through concrete examples, or scenarios. This tends to work well in situations such as workflows, but less well in others, such as state transitions or calculations, which may be better described in table format.
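To illustrate the table format, a calculation rule often reads more naturally as a Gherkin scenario outline with an examples table than as a set of prose scenarios. The feature and figures below are entirely made up, just a sketch of the style:

```gherkin
Feature: Delivery charge calculation
  As an online shopper
  I want delivery charged according to my basket total
  So that small orders cover their postage cost

  Scenario Outline: Charge depends on basket total
    Given a basket totalling <basket total>
    When the order is placed
    Then the delivery charge should be <charge>

    Examples:
      | basket total | charge |
      | £5.00        | £2.99  |
      | £25.00       | £0.99  |
      | £50.00       | £0.00  |
```

The table carries the whole calculation at a glance, where three separate Given/When/Then scenarios would bury it in repetition.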

BDD allows us to deliver the right software collaboratively, and communication is at the core of this. BDD is definitely not a development methodology, whereas TDD is. Nor is it so important which tool is used to link specifications to code. What matters more is the focussed discussion about what is required: the tool we use for BDD is nowhere near as important as the process of getting the examples, the collaborative approach to building and refining them, and (hopefully) a common language evolving.

BDD also really feeds nicely into TDD, iteratively guiding it from the outside-in.

Each scenario can be automated against the existing code so that we have living documentation – i.e. if anything changes in either the specs or the code, the business knows about it.

Automating acceptance tests can be quite a headache, as they are typically more end-to-end and might involve complex setup and configuration before running. In that respect, the BDD tools are not ideal for writing lots of regression tests. The focus should be on identifying the requirements and endeavouring to prove important scenarios via test fixtures.

Posted in BDD | 1 Comment

Tidying up technical debt using TDD

Where do we start to pay off technical debt?


One of the problems is the time it takes to release functionality to customers. With each change we need to re-test the product. The QA people are overburdened with manual regression tests.

There’s also the problem of how to redesign some of the code base. We want some sort of check that makes sure that released code is of a consistent quality.

Agile Testing

My experience lies in Automated Testing. I first wrote a unit test around 2001 when Extreme Programming was the new thing. Since then, I’ve discovered many ways of doing things. Some good, some not so good.

Initially I saw automated testing as something you did purely for regression tests. I would jump right in and start making changes to code. Then I’d scratch my head and wonder about all the database setup and context I would need in order to validate my code. It became a maintenance headache.

A common question I came up against was: what is the value of a test that doesn't exercise the software end to end? I was told the only way a test was meaningful was if it went through the UI. But I'd worked on software that was tested through screens before. The testing tools were based on the idea of automating the pointing and clicking, then verifying text on the screen. These tests seemed to fail constantly even though nothing had changed. They were very brittle.

Then came the concept of TDD and unit testing.

Test Driven Development

TDD is a development methodology that aims to catch problems early on.

It is best to catch problems early. It is far cheaper, in terms of time and effort, to fix a problem in development than after it has been released to a customer.

There are many great examples of this in software and also in areas such as car manufacturing, the building industry and space missions.

In TDD, developers write small tests to specify what a unit of code should do. Then they implement the code to satisfy the requirement. The code is tidied up, as they go, without breaking existing functionality.
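A minimal cycle of that kind might look like the following. This is just a sketch, assuming NUnit as the test framework; the `Basket` class and its free-delivery rule are invented purely for illustration:

```csharp
using NUnit.Framework;

// Hypothetical production code, written just after the failing tests below.
public class Basket
{
    private decimal total;

    public void Add(decimal price)
    {
        total += price;
    }

    // Invented rule for illustration: orders of £50 or more ship free.
    public decimal DeliveryCharge()
    {
        return total >= 50m ? 0m : 2.99m;
    }
}

[TestFixture]
public class BasketTests
{
    // 1. Write a small test first and watch it fail (red).
    // 2. Implement just enough code to make it pass (green).
    // 3. Tidy the code up without breaking the tests (refactor).
    [Test]
    public void OrdersOfFiftyPoundsOrMoreShipFree()
    {
        var basket = new Basket();
        basket.Add(60m);
        Assert.AreEqual(0m, basket.DeliveryCharge());
    }

    [Test]
    public void SmallOrdersPayStandardDelivery()
    {
        var basket = new Basket();
        basket.Add(10m);
        Assert.AreEqual(2.99m, basket.DeliveryCharge());
    }
}
```

Each test specifies one small, named piece of behaviour, which is what makes the suite double as documentation later.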

Instead of trying to do a big design up front, it works iteratively, evolving the code to solve a problem bit by bit. That is why it works best with iterative methodologies such as Agile.


It provides the following benefits:

  1. a set of tests that exercise the code and tell us early if something breaks
  2. better quality code
    • more maintainable
    • focussed on a single defined problem
    • modules are inherently less coupled
  3. documented functionality that is a true representation of what the code does
  4. promotes group ownership – the team collectively own the code and verify it as part of a build

It is important to note that it does not replace good software architecture and domain knowledge.

On its own, TDD will change the way we develop, but it will not catch every bug. It also does not guarantee we deliver the right thing. It should be used in conjunction with other good practices.

A common question is: how much longer will it take to do TDD? The answer is, it depends. There's a good post on @mikehadlow's blog addressing this. I would say that to start with it can take some getting used to, and it may take twice as long depending on the problem and the experience of the team. Overall it should make things easier – especially as the post-development testing cycle tends to add a lot of release time and is rarely factored into the coding time.

For more information and some real world case studies see:

Posted in Uncategorized | 1 Comment

Progressive .Net tutorials

I had the pleasure of attending the Progressive .Net tutorials last week at skillsmatter, thanks to my current workplace, 15below. The thing I liked best about these sessions is that each covered its subject in 3½ hours, which gave plenty of time to understand some theory, do some exercises and have a discussion. On the whole the sessions were really useful, and I recommend anyone interested in learning good software practices to look at the website for upcoming talks (plenty are free if you go in the evening). You can also watch the videos of any session from the links.

Information leaves my head as quickly as it enters, so I thought I’d better do a brain dump and summarise what I learnt for future reference.

Day 1 5th Sep


i.e. specifying requirements as tests.

So we went through some theory with Christian. The following points stood out to me:

  • Automate acceptance criteria with stories and a bunch of scenarios showing concrete examples
  • Do just enough stories/scenarios for a sprint (i.e. don't do everything up front) – limit work in progress
  • Examples are good for explaining complexity. We do it every day when talking to people. It’s difficult to explain abstract concepts without examples.
  • In order to keep examples up to date, automate them as tests against live code using continuous integration
  • Make the examples business readable – e.g. they showed a nice dashboard showing the progress of specifications
  • It is not necessary to replace burndowns and task lists with scenarios but it may be helpful to include a loose link between them (e.g. a task/bug Id referenced in the story title)
  • Developed or legacy code is fine to do BDD against, but beware – it will take longer. It is much easier for greenfield projects. We should measure timings against that to get a true idea of the time taken to work in this way.

We then learnt how the Gherkin language can be used to provide scenarios in a formalised manner that is succinct enough to be easily read and tests core business functionality.

We worked through some exercises like the one below (from the presentation slides):
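The slide itself hasn't survived the copy-and-paste, so here is a reconstruction from memory of the sort of exercise we did (the account scenario is my own invention, not the original slide):

```gherkin
Feature: Account withdrawal

  Scenario: Withdraw cash within the balance
    Given my account has a balance of £100
    When I withdraw £40
    Then my balance should be £60
    And I should receive £40 in cash
```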


In the afternoon, there was some overlap with the first session, but the interesting part was looking at how to integrate the specs with the code. I suggest going to the website, watching the tutorials, downloading the plugin and playing around. There were some really easy ways to create the fixtures needed to run the scenarios from Gherkin feature files. I'm planning to try this tool as soon as possible.
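The post doesn't name the plugin, but SpecFlow is one such tool for .Net: each Given/When/Then line in a feature file is matched by a regular expression to a method in a binding class. A rough sketch, assuming SpecFlow, with the account steps invented for illustration:

```csharp
using System;
using TechTalk.SpecFlow;

// SpecFlow finds [Binding] classes and matches each scenario step
// to a method via the regex in its attribute.
[Binding]
public class AccountSteps
{
    private decimal balance;
    private decimal cashReceived;

    [Given(@"my account has a balance of £(.*)")]
    public void GivenMyAccountHasABalanceOf(decimal amount)
    {
        balance = amount;
    }

    [When(@"I withdraw £(.*)")]
    public void WhenIWithdraw(decimal amount)
    {
        balance -= amount;
        cashReceived = amount;
    }

    [Then(@"my balance should be £(.*)")]
    public void ThenMyBalanceShouldBe(decimal expected)
    {
        if (balance != expected)
            throw new Exception("Balance was " + balance + ", expected " + expected);
    }
}
```

The regex capture groups become typed method parameters, so the business-readable feature file drives ordinary C# code.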

Day 2 6th Sep


The speaker gave a good talk on how architecture on its own does not solve the problem of scalability. Testing is needed as early as possible, both as a way of exploring how the application behaves with multiple users and to prove how it should respond under known loads. There is also a distinction between load testing (testing under typical load) and stress testing (seeing how far we can go before breaking it).

Here is the overview of what we did:

The tools we used to exercise an example application were open source, but he explained the UI was very old. Still, it worked, and the point of the tutorial was to show the sort of concepts useful for load testing early in a software cycle.


This was a fun session. The speaker was very enthusiastic about CI and a variety of other topics like TDD, coverage and feature branching.

Generally the slides showed examples of building and deploying applications using TeamCity (one in less than 7 secs!) and gave the reasons why it is important to have software that is constantly in a releasable state.

The core practices were given as follows:

  • Continuous Integration
  • Configuration Management
    • Dependencies
    • Documentation
    • Environment
    • App Configurations
    • Data
  • Tests
    • Unit
    • Integration
    • Functional/Acceptance
    • Performance/Load
    • Penetration

Feature switching vs feature branching was a hot topic.

The interesting part was how software was deployed onto staging, UAT and production. The speaker talked about the following flow that allows the build to check in packages that can be installed automatically on the test server via a build agent on every check in:

Day 3 7th Sep


Jon Skeet is a very entertaining speaker and covered a complex subject with some enthusiasm.

The first part showed how you would use the C#5 Async CTP library to write asynchronous code without having to write too much boilerplate code. The history of async in C# was described – this is the third attempt. It should result in much cleaner code: e.g.

// Code to asynchronously determine the size of the Stack Overflow home page
using System;
using System.Net;
using System.Threading.Tasks;

class Program
{
    // Caller (block 1)
    static void Main()
    {
        Task<int> sizeTask = DownloadSizeAsync("http://stackoverflow.com");
        Console.WriteLine("In Main, after async method call...");
        Console.WriteLine("Size: {0}", sizeTask.Result);
    }

    // Async method (block 2)
    static async Task<int> DownloadSizeAsync(string url)
    {
        var client = new WebClient();
        // Awaitable (block 3)
        var awaitable = client.DownloadDataTaskAsync(url);

        Console.WriteLine("Starting await...");
        byte[] data = await awaitable;
        Console.WriteLine("Finished awaiting...");

        return data.Length;
    }
}

In essence, two keywords have been introduced to the compiler – async and await. async explicitly marks your method as something intended to run asynchronously. await tells the compiler that the code following that line (called a continuation) should run after the awaited operation completes. Thus the compiler holds on to the state, but knows not to block the thread.

The second part was a little scary. Bit by bit, the speaker implemented the async functionality using .Net 4 in order to get an understanding of what is going on under the hood. What we could see is that the compiler makes use of delegates and iterator blocks to decide whether to carry on synchronously (if the async task has already completed) or to keep the state and hand control back to the thread pool. The full blog posts are here
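In .Net 4 terms, the continuation is roughly what you would otherwise wire up by hand with Task.ContinueWith. A minimal sketch of that idea (the download task is faked with a TaskCompletionSource so it runs without a network; this is my own illustration, not the speaker's code):

```csharp
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    // Roughly what 'await' compiles down to: the rest of the method
    // becomes a delegate scheduled to run when the task completes.
    public static Task<int> DownloadSizeWithContinuation(Task<byte[]> download)
    {
        return download.ContinueWith(t => t.Result.Length);
    }

    static void Main()
    {
        // Stand-in for WebClient.DownloadDataTaskAsync
        var source = new TaskCompletionSource<byte[]>();
        Task<int> size = DownloadSizeWithContinuation(source.Task);

        source.SetResult(new byte[] { 1, 2, 3 });
        Console.WriteLine("Size: {0}", size.Result); // prints "Size: 3"
    }
}
```

With await, the compiler generates this state-keeping plumbing for you, which is exactly what made the second part of the talk so eye-opening.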


This was a PowerPoint slide presentation. I've downloaded the slides – they pretty much reflect the content of the talk.

In summary, the following points were discussed:

  • Agile Manifesto
  • Concepts
  • Team
    • Developer
    • Tester
    • Business Analyst
    • Team Lead
    • Architect
    • Product Owner
  • Methodologies
    • Scrum
    • XP
    • Kanban

I think this talk would have been really useful if we had had more discussion of some of the pain points in the process, and of how to deal with a customer who wants a project plan up front. In the break he touched on this, giving the example of a scrum master who presented a Gantt chart to the client while maintaining sprints internally with story points and factoring in slack.

Posted in .Net, BDD | Leave a comment

Netduino – 2. Program the Button to control the LED

I’m still waiting for some bits and pieces to arrive so I can actually connect something up to the Netduino. Perhaps I’ll use a breadboard so I don’t have to do any soldering yet. In the meantime I’ll have a look at the only other onboard component (unless you have a Netduino Plus) – the button.

The button in the middle of the board (Pins.ONBOARD_SW1) is just another port that you can read using the .Net MF libraries. This simple example loops and reads the status of the button: if it's up, Read() returns true; if it's down, it returns false. Running the program and pressing the button just toggles the LED on and off.

using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

namespace Netduino.TestButton
{
    public class Program
    {
        public static void Main()
        {
            var led = new OutputPort(Pins.ONBOARD_LED, false);
            var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled);

            while (true)
            {
                if (!button.Read())
                    led.Write(true);   // pressed - LED on
                else
                    led.Write(false);  // released - LED off
            }
        }
    }
}

There is another way to check the button state. InterruptPort is an event-driven class that fires when a port changes state. This is better because it won't tie up the processor. The changes in level are fired as events at the leading and trailing edges (imagine the output from the button port as a graph: it sits at one level until the button is pressed, then returns when released). The constructor has the following signature:

public InterruptPort(Cpu.Pin portId, bool glitchFilter, Port.ResistorMode resistor, Port.InterruptMode interrupt);

The glitch filter tells the controller to ignore the button unless it is held down for a minimum length of time. This avoids spikes that would otherwise cause erroneous events.

For the onboard button we can disable the resistor mode, and for interrupt mode we are interested in both edges (low = pushed down, high = released, matching the Read() behaviour above). Pushing the button turns on the LED.


using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

namespace Netduino.TestInterrupt
{
    public class Program
    {
        static OutputPort led = new OutputPort(Pins.ONBOARD_LED, false);

        public static void Main()
        {
            var button = new InterruptPort(Pins.ONBOARD_SW1, false,
                Port.ResistorMode.Disabled, Port.InterruptMode.InterruptEdgeBoth);
            button.OnInterrupt += new NativeEventHandler(button_OnInterrupt);

            // Sleep forever - the interrupt handler does all the work
            Thread.Sleep(Timeout.Infinite);
        }

        static void button_OnInterrupt(uint data1, uint data2, DateTime time)
        {
            // data2 carries the pin state: 0 while pressed, 1 when released
            led.Write(data2 == 0);
        }
    }
}

It would be interesting to knock up a little app that recorded a pattern of button presses. After recording, it loops, waiting for you to press the button in the same sequence; if you do, it lights up the LED. You could imagine this connected to a sensor measuring a secret knock-knock password, or to a keyboard on which you play some musical notes – get it right and it unlocks something, a box for instance. It might also be a nice way of sending Morse code. Quite a few people have worked on Morse output, but no readers as far as I know. It's probably a bit tricky to get the timings right.

Posted in Netduino | 2 Comments

Netduino – 1. First Steps

I received my Netduino last week. This is an electronics prototyping board with a microcontroller that runs the .Net Micro Framework 4.1. It’s great for someone like me who wants to focus on learning electronics without the additional step of struggling with the software language. It’s one of a number of boards that use the .Net Micro Framework rather than the usual C/C++ or specialised embedded languages to control voltages and ports.

The board itself is open source, so you can build one of these yourself. It is also pin compatible with the Arduino – another very popular board for hobbyists. What this means is that, in most cases, you can attach the same daughter boards (called shields) to your Netduino, and there is quite a wide variety of these shields available. They vary in functionality from enabling wifi/ethernet connections, joysticks and MIDI to connecting with motors and LCD touchscreens.

While waiting for the Netduino to arrive, I started with the equivalent of a Hello World application. There were a few things to install to get going:

  • Visual C# 2010 Express (or Visual Studio 2010)
  • .NET Micro Framework 4.1
  • Netduino SDK – 32-bit/ 64-bit

There’s a number of videos showing how to get started.

After installing the .Net Micro Framework you will find a section for Micro Framework under the Visual C# New Project dialog. Installing the driver for the Netduino gives you a new icon for ‘Netduino Project’. If you were to buy a different board, such as the Fez, you would need to install the driver for that. In this way, the .Net Micro Framework is quite generic and allows for a range of hardware. It is also another Microsoft open source project, Apache licensed, so you could in fact tailor it to your own application.

The code you get when you create the new project looks indistinguishable from a standard console application, apart from a few differences in namespaces. SPOT stands for Smart Personal Objects Technology and dates back to about 2004 when Microsoft experimented with embedding .Net in wristwatches and so on. Add the lines to blink the LED and you are ready to test it works.

using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

namespace Netduino.HelloWorld
{
    public class Program
    {
        public static void Main()
        {
            OutputPort led = new OutputPort(Pins.ONBOARD_LED, false);

            while (true)
            {
                // Blink: on for a quarter of a second, then off
                led.Write(true);
                Thread.Sleep(250);
                led.Write(false);
                Thread.Sleep(250);
            }
        }
    }
}

If you have a Netduino plugged in, press F5 and you will get a few messages in the status area confirming the application has deployed to your board and, hey presto, your first blinkety-blinking LED. You can step through the code using the debugger, trace lines to the output window, etc. Be warned: after stopping debugging, the device will continue to flash, as the program is now running from the unit's memory space and will run every time it is powered on until another program replaces it. To get round this, just run the following code to turn the LED off.

OutputPort led = new OutputPort(Pins.ONBOARD_LED, false);

If, like me, you can't wait to get started and your board is hanging around in some postal sorting office somewhere, you can try debugging with a Netduino emulator, which is a really great idea. It's early days for this project, and at the moment it works quite well if all you want to do is play with just a button and LED.

Posted in Netduino, Uncategorized | 1 Comment