Last year we published data on the productivity of our development team at 7digital, which you can read about here.

We've completed the productivity report for this year and would again like to share this with you. We've now been collecting data from teams for over 4 years with just under 4,000 data points collected over that time. This report is from April 2012 to April 2013.

New to this year is data on the historical team size (from January 2010), which has allowed us to look at the ratio of items completed to the size of the team and how the team size compares to productivity. There's also some analysis of long term trends over the entire 4 years.

In general the statistics are very positive and show significant improvements in all measurements against the last reported period:

  • a 31% improvement in Cycle Times for all work items
  • a 43% improvement in Cycle Times for Feature work
  • a 108% increase in Throughput for all work items
  • a 54% increase in Throughput for Feature work
  • a 103% improvement in the ratio of Features to Production Bugs
  • a 56% increase in the amount of Items completed per person per month
  • a 64% increase in the amount of Features completed per person per month

 Download the full report here (pdf)

The report includes lots of pretty graphs and background on our approach, team size and measurement definitions.

A brief summary of the last 4 years:

  • Apr09-Apr11* Cycle Time improved (but not Throughput or Production Bugs)
  • Apr11-Apr12 Throughput & Cycle Time improved (but not Production Bugs)
  • Apr12-Apr13 All three measurements improved!

 *The first productivity report collated 2 years’ worth of data.

It’s really pleasing to see we’re finally starting to get a handle on Production Bugs, and that things generally continue to improve. The pattern of improvement is interesting. We don’t have a particularly good explanation for why things happened in that order, and we’re curious whether other organisations have seen similar patterns or had different experiences. We’d expect it to vary from organisation to organisation, as the business context has a massive influence. 7digital is no different from any other organisation in that you have to be able to balance short-term needs against long-term goals. If anything, our experiences further support the fact that real change takes time.

We must add the caveat that these reports do not tell us whether we're working on the right things, in the right order, or anything else really useful! They're just statistics and ultimately not a measure of progress or success. However, we’re strong believers in the idea that you’ve got to be able to “do it right” before you can “do the right thing”, supported by the study by Shpilberg et al., “Avoiding the Alignment Trap in IT”.

We hope you find this information useful and that it helps other teams justify following practices like Continuous Delivery in their organisations. We would of course be interested in any feedback or thoughts you have. Please contact me via Twitter (@robbowley) or leave a comment if you wish.

sharri.morris@7digital.com
Saturday, July 7, 2012 - 13:03

We have recently been working on an incremental indexer for our Solr-based search implementation, which was being updated only sporadically because of the time a complete re-index took: about 5 days to create the 13GB of XML, zip it, upload it to the server, unzip it and then re-index. We have created a Windows service which queries a denormalised data structure using NHibernate. We then use SolrNet to create our Solr documents and push them to the server in batches.
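The batching side of that service can be sketched roughly as follows. This is an illustrative outline only, not our production code: the `TrackDocument` type, the batch size and the `IndexChanges` entry point are placeholder names; only the SolrNet calls (`AddRange`, `Commit`) are the real library API.

```csharp
using System.Collections.Generic;
using SolrNet;

// Placeholder document type for illustration; the real mapping is richer.
public class TrackDocument
{
    public string Id { get; set; }
    public string Title { get; set; }
}

public class IncrementalIndexer
{
    private readonly ISolrOperations<TrackDocument> _solr;
    private const int BatchSize = 1000; // illustrative value

    public IncrementalIndexer(ISolrOperations<TrackDocument> solr)
    {
        _solr = solr;
    }

    public void IndexChanges(IEnumerable<TrackDocument> changed)
    {
        // Push documents to Solr in fixed-size batches rather than one
        // giant update, keeping memory use and request sizes manageable.
        foreach (var batch in Batch(changed, BatchSize))
        {
            _solr.AddRange(batch);
        }
        _solr.Commit();
    }

    private static IEnumerable<List<T>> Batch<T>(IEnumerable<T> source, int size)
    {
        var bucket = new List<T>(size);
        foreach (var item in source)
        {
            bucket.Add(item);
            if (bucket.Count == size)
            {
                yield return bucket;
                bucket = new List<T>(size);
            }
        }
        if (bucket.Count > 0) yield return bucket;
    }
}
```

The documents themselves come from the NHibernate query over the denormalised structure; batching before `AddRange` is what lets the service run continuously instead of as a 5-day bulk job.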

Solr Update Process

sharri.morris@7digital.com
Friday, March 2, 2012 - 11:47

After reading the O’Reilly book “REST in Practice”, I set myself the challenge of using OpenRasta to create a basic RESTful web service. For the first day I decided to concentrate on getting a basic CRUD app, as outlined in chapter 4, working. This involved the ability to create, read, update and delete XML file representations of Artists. The book describes this as a Level 2 application on Richardson’s maturity model, as it doesn’t make use of hypermedia yet. One reason OpenRasta is such a good framework for implementing a RESTful service is that it deals in “resources” and their representations. As outlined in “REST in Practice”, a resource is anything addressable via a URI, and OpenRasta handles this model naturally, having been built around it from the ground up.
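A minimal OpenRasta configuration for such a resource looks roughly like this. The `Artist` and `ArtistHandler` names and the URI template are illustrative assumptions, not the code from the book or from my app; the fluent `ResourceSpace` calls are OpenRasta’s actual configuration API.

```csharp
using OpenRasta.Configuration;
using OpenRasta.Web;

public class Configuration : IConfigurationSource
{
    public void Configure()
    {
        using (OpenRastaConfiguration.Manual)
        {
            // Map the Artist resource type to a URI template and a handler,
            // with an XML representation via the DataContract serialiser.
            ResourceSpace.Has.ResourcesOfType<Artist>()
                .AtUri("/artists/{id}")
                .HandledBy<ArtistHandler>()
                .AsXmlDataContract();
        }
    }
}

public class Artist
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class ArtistHandler
{
    // OpenRasta dispatches to methods named after the HTTP verb.
    public Artist Get(string id)
    {
        // Read the XML file representation for this id (sketched).
        return new Artist { Id = id };
    }

    public OperationResult Put(string id, Artist artist)
    {
        // Write the XML file representation (sketched).
        return new OperationResult.OK();
    }

    public OperationResult Delete(string id)
    {
        // Delete the XML file representation (sketched).
        return new OperationResult.NoContent();
    }
}
```

The appeal is that the configuration talks about resources and representations directly, so the URI-addressable model from the book maps onto the framework with almost no ceremony.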

The Basic Web Service

sharri.morris@7digital.com
Thursday, February 2, 2012 - 17:05

When bootstrapping a StructureMap registry, you can set the lifecycle of a particular instance using StructureMap's fluent interface. For example, when using NHibernate it is essential to set up ISessionFactory as a singleton and ISession on a per-HTTP-request basis (achievable with StructureMap's HybridHttpOrThreadLocalScoped directive). Example:

For<ISessionFactory>()
    .Singleton()
    .Use(SessionFactoryBuilder.BuildFor("MY.DSN.NAME", typeof(TokenMap).Assembly))
    .Named("MyInstanceName");

For<ISession>()
    .HybridHttpOrThreadLocalScoped()
    .Use(context => context.GetInstance<ISessionFactory>("MyInstanceName").OpenSession())
    .Named("MyInstanceName");
It's nice and easy to test that a singleton was created, with a unit test like so:

[TestFixtureSetUp]
public void FixtureSetup()
{
    ObjectFactory.Initialize(ctx => ctx.AddRegistry(new NHibernateRegistry()));
}

[Test]
public void SessionBuilder_should_be_singleton()
{
    var sessionBuilder1 = ObjectFactory.GetInstance<ISessionFactory>();
    var sessionBuilder2 = ObjectFactory.GetInstance<ISessionFactory>();
    Assert.That(sessionBuilder1, Is.SameAs(sessionBuilder2));
}

sharri.morris@7digital.com
Wednesday, February 1, 2012 - 15:42

Introduction

We have been using Solr for search for a while. Solr is fantastic, but the way we get our data into Solr is not so good. The DB is checked for new/updated/removed content, which is then written into a jobs table, which in turn is polled for pending jobs. There are numerous issues with using a DB table as a queue; some, for MySQL, are listed at:

http://www.engineyard.com/blog/2011/5-subtle-ways-youre-using-mysql-as-a...

To stop using our DB as a queue, I decided to test setting up and using an AMQP-based message queue. AMQP is an open standard for passing messages via queues. The final goal would be to allow other teams to push high-priority updates or new content directly to the queue rather than having to go through the DB, which can add considerable latency to the system.

For this test RabbitMQ was used, as it has a .NET library, runs on virtually all OSs, has good language support, and has good documentation. It can be found at the RabbitMQ site: http://www.rabbitmq.com/
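To give a feel for the .NET client, here is a minimal publish-and-consume round trip, as a sketch against the client API of that era (where `Body` is a byte array). The queue name "search.updates" and the message payload are made-up placeholders; the `QueueDeclare`, `BasicPublish`, `BasicGet` and `BasicAck` calls are the real client API.

```csharp
using System.Text;
using RabbitMQ.Client;

class QueueDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue, so pending updates survive a broker restart.
            channel.QueueDeclare("search.updates", true, false, false, null);

            // Publish via the default exchange, routed by queue name.
            var body = Encoding.UTF8.GetBytes("release-id:12345");
            channel.BasicPublish("", "search.updates", null, body);

            // Pull a single message off the queue and acknowledge it.
            var result = channel.BasicGet("search.updates", false);
            if (result != null)
            {
                var message = Encoding.UTF8.GetString(result.Body);
                channel.BasicAck(result.DeliveryTag, false);
            }
        }
    }
}
```

In a real consumer you would use a subscription rather than polling with BasicGet, but the round trip above is enough to verify the broker and client are wired up correctly.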

Getting Started

I strongly advise reading these before you start:
http://www.rabbitmq.com/install-windows.html
and