Last year we published data on the productivity of our development team at 7digital, which you can read about here.

We've completed the productivity report for this year and would again like to share it with you. We've now been collecting data from our teams for over 4 years, amassing just under 4,000 data points over that time. This report covers April 2012 to April 2013.

New this year is data on historical team size (from January 2010), which has allowed us to look at the ratio of items completed to the size of the team and at how team size compares to productivity. There's also some analysis of long-term trends over the entire 4 years.

In general the statistics are very positive and show significant improvements in all measurements compared with the last reported period:

  • a 31% improvement in Cycle Times for all work items
  • a 43% improvement in Cycle Times for Feature work
  • a 108% increase in Throughput for all work items
  • a 54% increase in Throughput for Feature work
  • a 103% improvement in the ratio of Features to Production Bugs
  • a 56% increase in the number of Items completed per person per month
  • a 64% increase in the number of Features completed per person per month

 Download the full report here (pdf)

The report includes lots of pretty graphs and background on our approach, team size and measurement definitions.

A brief summary of the last 4 years:

  • Apr09-Apr11* Cycle Time improved (but not Throughput or Production Bugs)
  • Apr11-Apr12 Throughput & Cycle Time improved (but not Production Bugs)
  • Apr12-Apr13 All three measurements improved!

 *The first productivity report collated 2 years’ worth of data.

It’s really pleasing to see that we’re finally starting to get a handle on Production Bugs and that things are generally continuing to improve. It’s interesting to see this pattern of improvement. We don’t have a particularly good explanation for why things improved in that order, and we’re curious whether other organisations have seen similar patterns or had different experiences. We’d expect it to vary from organisation to organisation, as the business context has a massive influence. 7digital is no different from any other organisation in that you have to balance short-term needs against long-term goals. If anything, our experiences further support the fact that real change takes time.

We must add the caveat that these reports do not tell us whether we're working on the right things, in the right order, or anything else really useful! They're just statistics and ultimately not a measure of progress or success. However, we’re strong believers in the idea that you’ve got to be able to “do it right” before you can “do the right thing”, as supported by the study by Shpilberg et al., Avoiding the Alignment Trap in IT.

We hope you find this information useful and that it helps other teams justify following best practices like Continuous Delivery in their organisations. We would of course be interested in any feedback or thoughts you have. Please contact me via Twitter (@robbowley) or leave a comment if you wish.

sharri.morris@7digital.com
Tuesday, April 28, 2015 - 16:27

Why metrics? Since I joined 7digital I've seen the API grow from a brand-new feature, sitting side by side with the (then abundant) websites, to being the main focus of the company. The traffic grew and grew and keeps on growing at an accelerating pace, and that brings us new challenges. We've brought the agile perspective into play, which has made us adapt faster and make fewer errors, but:

sharri.morris@7digital.com
Tuesday, March 17, 2015 - 12:49

I wanted to start looking at alternatives to our current set of Cucumber feature tests. At the moment on the web team we're using FireWatir and Capybara, so I thought I'd take a look at what was available in Node.js. Many people think it's strange that a .NET shop would use something written for testing Ruby, or even consider something that isn't from the .NET community. Personally I think it's a benefit to truly look at something from the outside in. Should it matter what you're using to drive your end product, or what language you're using to test it? Not really.

So what are the motivations for moving away from Ruby, Capybara and FireWatir? In a word: 'flaky'. We've had heaps of issues getting our feature tests, AATs and smoke tests reliable. When it comes to testing, consistency should be king. They should be as solid as your unit tests. If they fail you want to know for definite that you've broken something, rather than thinking it's a problem with the webdriver. It is with this aim in mind that I started looking at the following. Cucumber.js is definitely in its infancy; there's lots of stuff missing, but there's enough there to get going. Zombie.js is a headless browser; it claims to be insanely fast, and no complaints here.
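To give a flavour of how the two fit together, here is a minimal sketch of a Cucumber.js step definition driving Zombie.js. The URL and the expected page text are made-up placeholders, and the exact callback API differs between Cucumber.js versions, so treat it as an outline rather than a drop-in file.

    // features/step_definitions/homepage_steps.js
    // Sketch only: the URL and expected text are hypothetical placeholders.
    var Browser = require('zombie');

    module.exports = function () {
      var browser = new Browser();

      this.Given(/^I am on the home page$/, function (callback) {
        // Zombie loads and renders the page without a real browser window
        browser.visit('http://localhost:3000/', callback);
      });

      this.Then(/^I should see "([^"]*)"$/, function (expectedText, callback) {
        if (browser.text('body').indexOf(expectedText) === -1) {
          callback(new Error('Expected page to contain: ' + expectedText));
        } else {
          callback();
        }
      });
    };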

sharri.morris@7digital.com
Tuesday, March 17, 2015 - 12:44

After seeing some relative success in our Solr implementation's XML response times by switching on Tomcat's HTTP gzip compression, I've been doing some comparisons between the other formats Solr can return. We use SolrNet, an excellent open-source .NET Solr client. At the moment it only supports XML responses, but every request sends the "Accept-Encoding: gzip" header as standard, so all you have to do is switch compression on on your server and you've got some nicely compressed responses. There is talk of supporting javabin deserialisation, but it's not there yet. I decided to compare the following using curl, with 1000 rows and 10000 rows, in json, javabin, json/gzip compressed and javabin/gzip compressed.
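The comparison itself boils down to a handful of curl requests; the host, handler path and query below are placeholder values, and the exact URL layout will depend on how your Solr instance is deployed.

    # Sketch of the comparison (hypothetical host and query), 1000 rows shown;
    # repeat with rows=10000 for the larger sample.
    curl -s -o rows.json \
      "http://localhost:8983/solr/select?q=*:*&rows=1000&wt=json"

    curl -s -o rows.javabin \
      "http://localhost:8983/solr/select?q=*:*&rows=1000&wt=javabin"

    # The same requests, asking the server to gzip the response
    curl -s -H "Accept-Encoding: gzip" -o rows.json.gz \
      "http://localhost:8983/solr/select?q=*:*&rows=1000&wt=json"

    curl -s -H "Accept-Encoding: gzip" -o rows.javabin.gz \
      "http://localhost:8983/solr/select?q=*:*&rows=1000&wt=javabin"

    # Compare the transferred sizes
    ls -l rows.*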

anna.siegel@7digital.com
Wednesday, March 11, 2015 - 12:27


Following changes to our Catalogue API, we are releasing a change to the Basket API to support premium quality formats.

This release adds a package element below basketItem in all basket responses. This is to support the sale of music in different audio formats.

An example response would now look like this:
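(The fields below, other than basketItem and the new package element, are a hypothetical sketch rather than the exact documented response.)

    <basket>
      <basketItems>
        <basketItem>
          <itemId>12345</itemId>
          <title>Example Track</title>
          <price>
            <value>1.29</value>
            <currency>GBP</currency>
          </price>
          <!-- New: the audio format the item will be delivered in -->
          <package>
            <id>2</id>
            <description>16-bit FLAC</description>
          </package>
        </basketItem>
      </basketItems>
    </basket>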