Astro Malaysia held its annual GoInnovate Challenge Hackathon on 10th-12th October at the Malaysian Global Innovation & Creativity Centre (MaGIC).

Hopefuls from all over Malaysia gathered for an exciting challenge set by Astro: to build a radio streaming demo. The demo product was meant to redefine the way we watch, read, listen and play with content, across two unique hacks to be completed within a 48-hour deadline. Astro offered substantial rewards to those whose ideas came out on top!

Day 0: Demo - Friday evening

Attendees ranged from junior developers to start-up teams; as long as you were 18 years old, you could take part!

To begin the Hackathon, entrants were fully briefed and given access to the APIs of both 7digital and the music metadata company Gracenote.

7digital’s lead API developer, Marco Bettiolo, flew in to act as Tech Support for the hackathon.

This photo shows Marco presenting a demo of a radio-style streaming service he had previously built.

Day 1: Get Building!

According to the brief, hackers had to choose one of two innovative challenges:

1 - Product Challenge - To create a platform or product that could redefine media consumption.

2 - Creative Challenge - A user-generated experience using Astro’s Assets.

The build could be an app or web-based; competitors were free to define the platform themselves.

They worked feverishly into the night, honing their development skills whilst nibbling on pizza when they had the chance.

Day 2: Pitching sessions - Sunday afternoon

Sunday evening drew the long competition to a close.

Well-deserved drinks and cakes were on the menu to celebrate all the hard work these talented techies had put in over the weekend!

It’s the taking part that counts, right?

GoInnovate's 2014 Hackathon encouraged skilled people from all over Malaysia to get involved, learn from one another and share experiences, so taking part does count! A 48-hour hack event, however, wouldn't be complete without some prizes, would it?!

A panel of judges picked the most creative ideas and awarded cash prizes totalling RM 50,000 (£9,500)! There was one overall winner, plus a 1st and 2nd runner-up.

Fancy your chances next year? Keep an eye out for 2015’s competition announcement at http://www.astro.com.my/goinnovate

sharri.morris@7digital.com
Thursday, September 20, 2012 - 16:14

Over the last month we've started using ServiceStack for a couple of our API endpoints. We're hosting these projects on a Debian Squeeze VM using nginx and Mono. We ran into various problems along the way. Here's a breakdown of what we found and how we solved the issues we ran into. Hopefully you'll find this useful. (We'll cover deployment/infrastructure details in a second post.)

Overriding the defaults

Some of the defaults for ServiceStack are, in my opinion, not well suited to writing an API. This is probably down to the framework's desire to be a complete web framework. Here's our current default implementation of AppHost:
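In outline it looks something like the sketch below (a minimal sketch assuming ServiceStack v3-era AppHostBase, EndpointHostConfig and Feature flags; the service name and the StatsFeature plugin line are illustrative):

```csharp
using Funq;
using ServiceStack.Common.Web;           // ContentType constants
using ServiceStack.ServiceHost;          // Feature flags
using ServiceStack.WebHost.Endpoints;    // AppHostBase, EndpointHostConfig

public class AppHost : AppHostBase
{
    // Display name and service assembly are illustrative
    public AppHost() : base("7digital api", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        SetConfig(new EndpointHostConfig
        {
            // Return JSON by default rather than the HTML report pages
            DefaultContentType = ContentType.Json,

            // Turn off the "complete web framework" bits an API doesn't need
            EnableFeatures = Feature.All & ~(Feature.Html | Feature.Metadata),

            DebugMode = false,
        });

        // Request-timing plugin, sketched in the StatsD section below
        Plugins.Add(new StatsFeature());
    }
}
```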

 

For me, the biggest annoyance was tracking down the DefaultContentType setting. Some of the settings are unintuitive to find, but it's not like you have to do it very often!

Timing requests with StatsD

As you can see, we've added a StatsD feature, which was very easy to do. It simply times how long each request takes and logs it to StatsD. Here's how we did it:
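In outline (a minimal sketch assuming ServiceStack v3-style global RequestFilters and ResponseFilters, with a stand-in Statsd client; the metric name and UDP endpoint are illustrative):

```csharp
using System.Diagnostics;
using System.Net.Sockets;
using System.Text;
using ServiceStack.ServiceHost;   // IPlugin, IAppHost, IHttpRequest, IHttpResponse

public class StatsFeature : IPlugin
{
    private const string StopwatchKey = "Stats.Stopwatch";

    public void Register(IAppHost appHost)
    {
        // The "begin" message: start a stopwatch and stash it on the request
        appHost.RequestFilters.Add((req, res, dto) =>
            req.Items[StopwatchKey] = Stopwatch.StartNew());

        // The "end" message: stop the stopwatch and record the elapsed time
        appHost.ResponseFilters.Add((req, res, dto) =>
        {
            if (!req.Items.ContainsKey(StopwatchKey)) return;

            var stopwatch = (Stopwatch)req.Items[StopwatchKey];
            stopwatch.Stop();

            // Metric name is illustrative; OperationName is the request DTO name
            Statsd.Timing("api." + req.OperationName, stopwatch.ElapsedMilliseconds);
        });
    }
}

// Stand-in StatsD client: sends "name:value|ms" datagrams (the StatsD timing format).
// Host and port are illustrative defaults; swap in a real client if you have one.
public static class Statsd
{
    private static readonly UdpClient Client = new UdpClient("localhost", 8125);

    public static void Timing(string name, long milliseconds)
    {
        var payload = Encoding.ASCII.GetBytes(name + ":" + milliseconds + "|ms");
        Client.Send(payload, payload.Length);
    }
}
```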

 

It would have been nicer if we could wrap the request handler, but that kind of pipeline is foreign to the framework, so you need to subscribe to the begin and end messages instead. There's probably a better way of recording the time spent, but, hey ho, it works for us.

sharri.morris@7digital.com
Sunday, September 16, 2012 - 11:31

At 7digital we use Ajax to update our basket without needing to refresh the page. This provides a smoother experience for the user, but makes our acceptance tests a little more effort to automate with [Watir](http://wtr.rubyforge.org/). Using timeouts is one way to wait for the basket to render, but it has two issues. If the timeout is too high, it forces all your tests to run slowly even if the underlying callback responds quickly. However, if the timeout is too low, you risk intermittent failures any time the callback responds slowly. To avoid this you can use the [Watir `wait_until` method](http://wtr.rubyforge.org/rdoc/classes/Watir/Waiter.html#M000343) to poll for a situation where you know the callback has succeeded. This is more in line with how a real user behaves.

### Example

sharri.morris@7digital.com
Friday, September 14, 2012 - 13:21

At 7digital we use [Cucumber](http://cukes.info/) and [Watir](http://wtr.rubyforge.org/) for running acceptance tests on some of our websites. These tests can help greatly in spotting problems with configuration, databases, load balancing, etc. that unit testing misses. But because the tests exercise the whole system, from the browser all the way through to the database, they tend to be flakier than unit tests. They can fail one minute and work the next, which can make debugging them a nightmare. So, to make the task of spotting the cause of failing acceptance tests easier, how about we set up Cucumber to take a screenshot of the desktop (and therefore the browser) any time a scenario fails?

## Install Screenshot Software

The first thing we need to do is install something that can take screenshots. The simplest solution I found is a tiny little Windows app called [SnapIt](http://90kts.com/blog/2008/capturing-screenshots-in-watir/). It takes a single screenshot of the primary screen and saves it to a location of your choice. No more, no less.

* [Download SnapIt](http://90kts.com/blog/wp-content/uploads/2008/06/snapit.exe) and save it to a known location (e.g.

sharri.morris@7digital.com
Monday, September 3, 2012 - 11:51

[TeamCity](http://www.jetbrains.com/teamcity/) is a great continuous integration server, and has brilliant built-in support for running [NUnit](http://www.nunit.org/) tests. The web interface updates automatically as each test is run, and gives immediate feedback on which tests have failed without waiting for the entire suite to finish. It also keeps track of tests over multiple builds, showing you exactly when each test first failed, how often it fails, etc. If, like me, you are using [Cucumber](http://cukes.info/) to run your acceptance tests, wouldn't it be great to get the same level of TeamCity integration for every Cucumber test? Well now you can, using the `TeamCity::Cucumber::Formatter` from the TeamCity 5.0 EAP release. JetBrains, the makers of TeamCity, released a [blog post demonstrating the Cucumber test integration](http://blogs.jetbrains.com/ruby/2009/08/testing-rubymine-with-cucumber/), but without any details on how to set it up yourself. So I'll take you through it here.