Last year we published data on the productivity of our development team at 7digital, which you can read about here.

We've completed the productivity report for this year and would again like to share this with you. We've now been collecting data from teams for over 4 years with just under 4,000 data points collected over that time. This report is from April 2012 to April 2013.

New to this year is data on the historical team size (from January 2010), which has allowed us to look at the ratio of items completed to the size of the team and how the team size compares to productivity. There's also some analysis of long term trends over the entire 4 years.

In general the statistics are very positive and show significant improvements in all measurements against the last reported period:

  • a 31% improvement in Cycle Times for all work items
  • a 43% improvement in Cycle Times for Feature work
  • a 108% increase in Throughput for all work items
  • a 54% increase in Throughput for Feature work
  • a 103% improvement in the ratio of Features to Production Bugs
  • a 56% increase in the number of Items completed per person per month
  • a 64% increase in the number of Features completed per person per month

 Download the full report here (pdf)

The report includes lots of pretty graphs and background on our approach, team size and measurement definitions.

A brief summary of the last 4 years:

  • Apr09-Apr11* Cycle Time improved (but not Throughput or Production Bugs)
  • Apr11-Apr12 Throughput & Cycle Time improved (but not Production Bugs)
  • Apr12-Apr13 All three measurements improved!

 *The first productivity report collated 2 years’ worth of data.

It’s really pleasing to see that we’re finally starting to get a handle on Production Bugs and that things generally continue to improve. It’s interesting to see this pattern of improvement. We haven’t got a particularly good explanation for why things happened in that order, and we’re curious whether other organisations have seen similar patterns or had different experiences. We’d expect it to vary from organisation to organisation, as the business context has a massive influence. 7digital is no different from any other organisation in that you have to be able to balance short term needs against long term goals. If anything, our experiences just further support the fact that real change takes time.

We must add the caveat that these reports do not tell us whether we're working on the right things, in the right order or anything else really useful! They're just statistics and ultimately not a measure of progress or success. However, we’re strong believers in the concept that you’ve got to be able to “do it right” before you can “do the right thing”, supported by the study by Shpilberg et al., “Avoiding the Alignment Trap in IT”.

We hope you find this information useful and that it helps other teams justify following best practices like Continuous Delivery in their organisations. We would of course be interested in any feedback or thoughts you have. Please contact me via Twitter (@robbowley) or leave a comment if you wish to do so.

sharri.morris@7digital.com
Thursday, September 20, 2012 - 16:14

Over the last month we've started using ServiceStack for a couple of our API endpoints. We're hosting these projects on a Debian Squeeze VM using nginx and Mono. We ran into various problems along the way. Here's a breakdown of what we found and how we solved the issues. Hopefully you'll find this useful. (We'll cover deployment/infrastructure details in a second post.)

Overriding the defaults

Some of the defaults for ServiceStack are, in my opinion, not well suited to writing an API. This is probably down to the framework's desire to be a complete web framework. Here's our current default implementation of AppHost:
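A minimal sketch of the idea (not our exact code), assuming ServiceStack v3's AppHostBase and EndpointHostConfig, looks something like this; the StatsDFeature plugin it registers is sketched in the next section:

```csharp
// Sketch only, based on ServiceStack v3-era APIs (AppHostBase, EndpointHostConfig);
// namespaces and property names may differ between versions.
using Funq;
using ServiceStack.Common.Web;           // ContentType constants
using ServiceStack.WebHost.Endpoints;    // AppHostBase, EndpointHostConfig

public class AppHost : AppHostBase
{
    public AppHost()
        : base("Api endpoint", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        SetConfig(new EndpointHostConfig
        {
            // Answer with JSON when the caller doesn't ask for a specific format,
            // rather than ServiceStack's HTML-oriented default.
            DefaultContentType = ContentType.Json,
            DebugMode = false
        });

        // Request-timing plugin sketched in the next section (defaults to localhost:8125).
        Plugins.Add(new StatsDFeature());
    }
}
```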

 

For me, the biggest annoyance was tracking down the DefaultContentType setting. Some of the settings are unintuitive to find, but it's not like you have to do it very often!

Timing requests with StatsD

As you can see, we've added a StatsD feature, which was very easy to do. It times how long each request takes and logs it to StatsD. Here's how we did it:
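A rough sketch of such a plugin, using ServiceStack v3's request and response filters, with a deliberately simple UDP sender standing in for our real StatsD client, might look like this:

```csharp
// Sketch only, based on ServiceStack v3-era APIs; exact namespaces and filter
// signatures may differ between versions. The UDP sender below is a stand-in
// for whichever StatsD client you actually use.
using System.Diagnostics;
using System.Net.Sockets;
using System.Text;
using ServiceStack.ServiceHost;          // IHttpRequest, IHttpResponse
using ServiceStack.WebHost.Endpoints;    // IAppHost, IPlugin

public class StatsDFeature : IPlugin
{
    private const string StopwatchKey = "RequestStopwatch";
    private readonly string _host;
    private readonly int _port;

    public StatsDFeature(string host = "localhost", int port = 8125)
    {
        _host = host;
        _port = port;
    }

    public void Register(IAppHost appHost)
    {
        // The "begin" message: start a stopwatch and stash it against the request.
        appHost.RequestFilters.Add((req, res, requestDto) =>
        {
            req.Items[StopwatchKey] = Stopwatch.StartNew();
        });

        // The "end" message: stop the stopwatch and report the elapsed time.
        appHost.ResponseFilters.Add((req, res, responseDto) =>
        {
            object stored;
            if (!req.Items.TryGetValue(StopwatchKey, out stored)) return;

            var stopwatch = (Stopwatch)stored;
            stopwatch.Stop();
            SendTiming("api." + req.OperationName, stopwatch.ElapsedMilliseconds);
        });
    }

    private void SendTiming(string bucket, long milliseconds)
    {
        // statsd timings are plain UDP datagrams of the form "bucket:123|ms".
        var payload = Encoding.ASCII.GetBytes(bucket + ":" + milliseconds + "|ms");
        using (var udp = new UdpClient())
        {
            udp.Send(payload, payload.Length, _host, _port);
        }
    }
}
```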

 

It would have been nicer if we could wrap the request handler, but that kind of pipeline is foreign to the framework, so you need to subscribe to the begin and end messages instead. There's probably a better way of recording the time spent, but hey ho, it works for us.

sharri.morris@7digital.com
Sunday, September 16, 2012 - 11:31

At 7digital we use Ajax to update our basket without needing to refresh the page. This provides a smoother experience for the user, but makes it a little more effort to automate our acceptance tests with [Watir](http://wtr.rubyforge.org/).

Using timeouts is one way to wait for the basket to render, but it has two issues. If the timeout is too high, it forces all your tests to run slowly even if the underlying callback responds quickly. However, if the timeout is too low, you risk intermittent failures any time the callback responds slowly. To avoid this you can use the [Watir `wait_until` method](http://wtr.rubyforge.org/rdoc/classes/Watir/Waiter.html#M000343) to poll for a situation where you know the callback has succeeded. This is more in line with how a real user will behave.

### Example

sharri.morris@7digital.com
Friday, September 14, 2012 - 13:21

At 7digital we use [Cucumber](http://cukes.info/) and [Watir](http://wtr.rubyforge.org/) for running acceptance tests on some of our websites. These tests can help greatly in spotting problems with configuration, databases, load balancing, etc. that unit testing misses. But because the tests exercise the whole system, from the browser all the way through to the database, they can tend to be flakier than unit tests. They can fail one minute and work the next, which can make debugging them a nightmare. So, to make the task of spotting the cause of failing acceptance tests easier, how about we set up Cucumber to take a screenshot of the desktop (and therefore the browser) any time a scenario fails?

## Install Screenshot Software

The first thing we need to do is install something that can take screenshots. The simplest solution I found is a tiny little Windows app called [SnapIt](http://90kts.com/blog/2008/capturing-screenshots-in-watir/). It takes a single screenshot of the primary screen and saves it to a location of your choice. No more, no less.

* [Download SnapIt](http://90kts.com/blog/wp-content/uploads/2008/06/snapit.exe) and save it to a known location (e.g.

sharri.morris@7digital.com
Monday, September 3, 2012 - 11:51

[TeamCity](http://www.jetbrains.com/teamcity/) is a great continuous integration server, and has brilliant built-in support for running [NUnit](http://www.nunit.org/) tests. The web interface updates automatically as each test is run, and gives immediate feedback on which tests have failed without waiting for the entire suite to finish. It also keeps track of tests over multiple builds, showing you exactly when each test first failed, how often it fails, etc.

If, like me, you are using [Cucumber](http://cukes.info/) to run your acceptance tests, wouldn't it be great to get the same level of TeamCity integration for every Cucumber test? Well now you can, using the `TeamCity::Cucumber::Formatter` from the TeamCity 5.0 EAP release. JetBrains, the makers of TeamCity, released a [blog post demonstrating the Cucumber test integration](http://blogs.jetbrains.com/ruby/2009/08/testing-rubymine-with-cucumber/), but without any details on how to set it up yourself. So I'll take you through it here.