by Alan Hannaway, 7digital Product Owner for Data 

We often ask ourselves: how different will our listening experience be in the next ten years? It’s a difficult question to answer, but a great one to ask. Serving an industry in constant change, we find the question brings us right back to where we should be focused: the way people experience music and radio.

Having powered music and radio services for over 10 years, 7digital knows how to deliver listening experiences that delight millions of people. We regularly reflect on what works and what doesn’t. Sometimes it is clear what works well, and if you have a culture where you fail early and loudly (we do; it is part of our tech manifesto), you can sometimes see exactly what you did wrong. It’s not always easy, though: when the cause of something is not at all clear, finding out why it happened is difficult. How can you be sure that what happened really happened for the reason you have identified? Correlation does not imply causation.

When we think about the future, we need a way to look back and, with confidence and accuracy, inform our plans for what to do next. For this we use data, as a tool. Like any tool, the way you use it determines what you get from it. Data is a difficult tool to use correctly, but if you learn to put it in its place, it becomes incredibly effective. It raises your confidence when making decisions, it helps you reflect accurately on what you’ve done, and it validates your thinking. You learn from it.

When it’s possible to measure everything, you run the risk of over-analysing the wrong things. So how do you ensure the data you are looking at tells you something you can trust, so that you can make a confident decision? The answer for us is context. Specifically, data context (not to be confused with the current trend of using the word context in music).

To help with the description and adoption of this idea, we developed the 7digital Music Data Context Map.


Music Data Context Map

There are three core elements to providing a digital music and radio service: Music, Audience and Service. With any one missing, you don’t have much left. At 7digital, we have deep reach into all parts of each. We have a music catalogue of over 32M tracks, served to an audience of millions of people through a brilliant variety of services.

[Figure: the 7digital Music Data Context Map]

Each of these core elements has many dimensions. For example:

[Figure: example dimensions for each element]

We use the map as a tool: the reports and insights that feed our decision-making are placed on it.

Consider a report that provides insights into subscribers’ skipping behaviour on streaming services. Before analysing the data and attempting to derive insights, we place the report on the context map. It sits somewhere between Audience (subscribers) and Service (streaming).

[Figure: the report placed on the context map, between Audience and Service]

With the aid of the map, we can quickly determine the report’s value. We know what it tells us, and don’t get distracted by wondering where the value lies.

The context map serves a second purpose, too: it helps you maximise the value of any given data point or collection of reports. This matters, as preparing data can be expensive and time-consuming.

For example, the above report becomes valuable to more people when you add further dimensions to it. You gain greater insight into music consumption if you look at the same behaviour across different genres of music that people stream. Likewise, greater insight into the audience is possible if you consider where the music was discovered, and enhanced service insight is gleaned when exploring the same behaviour on hybrid streaming/radio services. 

[Figure: the report extended with additional dimensions on the context map]

Adding more dimensions increases the value of the data. Internally, our strategy is to keep improving the map’s coverage. Any report, or series of reports, that we develop is placed on the map, and careful consideration is given to ensuring we can accurately describe the data we have. When things converge near the centre of the map, we know we’re doing a good job of delivering maximum value to the greatest number of people. This benefits our own plans, and those of our partners. Ultimately, it focuses us and we do a better job for the listener.

For more updates on the role that data plays at 7digital, including reports sharing insights on music, audience and service, follow us on Twitter, connect with us on LinkedIn, and bookmark our blog.

About the author:

Alan joined 7digital as Product Owner for Data in 2015, with responsibility for ensuring the company extracts value from its data and for developing a line of data products. Prior to 7digital, Alan worked in a variety of roles, most recently providing data to the entertainment industry through his own startup. He started his career as a researcher in computer science, focusing on the application of technology to measure the scale and distribution of content consumption on large Internet networks.

Tags: Music Data, Digital Music, Future Planning, Data
sharri.morris@7digital.com
Thursday, September 20, 2012 - 16:14

Over the last month we’ve started using ServiceStack for a couple of our API endpoints. We’re hosting these projects on a Debian Squeeze VM using nginx and Mono. We ran into various problems along the way; here’s a breakdown of what we found and how we solved them. Hopefully you’ll find this useful. (We’ll cover deployment/infrastructure details in a second post.)

Overriding the defaults

Some of ServiceStack’s defaults are, in my opinion, not well suited to writing an API. This is probably down to the framework’s desire to be a complete web framework. Here’s our current default implementation of AppHost:
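The snippet below is a minimal sketch of that AppHost rather than the code verbatim: it assumes the ServiceStack v3 API of the time (AppHostBase, EndpointHostConfig, ContentType), the service name and assembly are placeholders, and the StatsDFeature plugin it registers is covered in the next section.

```csharp
using Funq;
using ServiceStack.Common.Web;        // ContentType
using ServiceStack.WebHost.Endpoints; // AppHostBase, EndpointHostConfig

public class AppHost : AppHostBase
{
    // Name and assembly are placeholders; point the base constructor
    // at whichever assembly contains your service implementations.
    public AppHost() : base("Example API", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        SetConfig(new EndpointHostConfig
        {
            // Respond with JSON by default instead of the HTML report page
            DefaultContentType = ContentType.Json,
            // Serialise exceptions back to API clients
            WriteErrorsToResponse = true,
            DebugMode = false
        });

        // Request timing, described in the next section
        Plugins.Add(new StatsDFeature());
    }
}
```

It gets wired up in the usual way, with a `new AppHost().Init();` call in Application_Start.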


For me, the biggest annoyance was tracking down the DefaultContentType setting. Some of the settings are unintuitive to find, but it’s not as if you have to do it very often!

Timing requests with StatsD

As you can see, we’ve added a StatsD feature, which was very easy to do. It basically times how long each request takes and logs it to StatsD. Here’s how we did it:
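What follows is a hedged sketch rather than the exact feature we shipped: it registers global request and response filters (ServiceStack v3’s closest thing to request begin/end hooks) and, instead of a particular StatsD client library, writes the plain StatsD text protocol straight to a UdpClient. The metric naming is an assumption.

```csharp
using System.Diagnostics;
using System.Net.Sockets;
using System.Text;
using ServiceStack.ServiceHost;       // IHttpRequest, IHttpResponse
using ServiceStack.WebHost.Endpoints; // IAppHost, IPlugin

public class StatsDFeature : IPlugin
{
    private const string StopwatchKey = "StatsDFeature.Stopwatch";

    private readonly string host;
    private readonly int port;

    public StatsDFeature(string host = "localhost", int port = 8125)
    {
        this.host = host;
        this.port = port;
    }

    public void Register(IAppHost appHost)
    {
        // Start the clock as the request enters the pipeline...
        appHost.RequestFilters.Add((req, res, requestDto) =>
        {
            req.Items[StopwatchKey] = Stopwatch.StartNew();
        });

        // ...and report the elapsed time once a response has been produced.
        appHost.ResponseFilters.Add((req, res, responseDto) =>
        {
            object value;
            if (!req.Items.TryGetValue(StopwatchKey, out value)) return;

            var stopwatch = (Stopwatch)value;
            stopwatch.Stop();

            // e.g. "/artists/details" becomes "api.artists.details.response_time"
            var bucket = "api" + req.PathInfo.Replace('/', '.').TrimEnd('.');
            var metric = string.Format("{0}.response_time:{1}|ms",
                bucket, stopwatch.ElapsedMilliseconds);

            // Plain StatsD text protocol over UDP; a real implementation
            // would reuse the socket or lean on a StatsD client library.
            var payload = Encoding.UTF8.GetBytes(metric);
            using (var udp = new UdpClient())
            {
                udp.Send(payload, payload.Length, host, port);
            }
        });
    }
}
```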


It would have been nicer if we could wrap the request handler, but that kind of pipeline is foreign to the framework, so you need to subscribe to the begin and end of each request instead. There’s probably a better way of recording the time spent, but hey ho, it works for us.

sharri.morris@7digital.com
Sunday, September 16, 2012 - 11:31

At 7digital we use Ajax to update our basket without needing to refresh the page. This provides a smoother experience for the user, but makes it a little more effort to automate our acceptance tests with [Watir](http://wtr.rubyforge.org/). Using fixed timeouts is one way to wait for the basket to render, but it has two issues: if the timeout is too high, it forces all your tests to run slowly even when the underlying callback responds quickly; if it is too low, you risk intermittent failures whenever the callback responds slowly. To avoid this you can use the [Watir `wait_until` method](http://wtr.rubyforge.org/rdoc/classes/Watir/Waiter.html#M000343) to poll for a condition that tells you the callback has succeeded. This is more in line with how a real user behaves.

### Example

sharri.morris@7digital.com
Friday, September 14, 2012 - 13:21

At 7digital we use [Cucumber](http://cukes.info/) and [Watir](http://wtr.rubyforge.org/) for running acceptance tests on some of our websites. These tests can help greatly in spotting problems with configuration, databases, load balancing, etc. that unit testing misses. But because the tests exercise the whole system, from the browser all the way through to the database, they tend to be flakier than unit tests. They can fail one minute and work the next, which can make debugging them a nightmare. So, to make it easier to spot the cause of a failing acceptance test, how about we set up Cucumber to take a screenshot of the desktop (and therefore the browser) any time a scenario fails?

## Install Screenshot Software

The first thing we need to do is install something that can take screenshots. The simplest solution I found is a tiny little Windows app called [SnapIt](http://90kts.com/blog/2008/capturing-screenshots-in-watir/). It takes a single screenshot of the primary screen and saves it to a location of your choice. No more, no less.

* [Download SnapIt](http://90kts.com/blog/wp-content/uploads/2008/06/snapit.exe) and save it to a known location (e.g.

sharri.morris@7digital.com
Monday, September 3, 2012 - 11:51

[TeamCity](http://www.jetbrains.com/teamcity/) is a great continuous integration server, and has brilliant built-in support for running [NUnit](http://www.nunit.org/) tests. The web interface updates automatically as each test runs, and gives immediate feedback on which tests have failed without waiting for the entire suite to finish. It also keeps track of tests over multiple builds, showing you exactly when each test first failed, how often it fails, and so on. If, like me, you are using [Cucumber](http://cukes.info/) to run your acceptance tests, wouldn't it be great to get the same level of TeamCity integration for every Cucumber test? Well, now you can, using the `TeamCity::Cucumber::Formatter` from the TeamCity 5.0 EAP release. JetBrains, the makers of TeamCity, released a [blog post demonstrating the Cucumber test integration](http://blogs.jetbrains.com/ruby/2009/08/testing-rubymine-with-cucumber/), but without any details on how to set it up yourself. So I'll take you through it here.