The challenge of managing a big website


It is no secret that until now I have spent a great deal of my web development career working for public sector organisations such as the NHS and local government.

During this time I have learned a lot, and I figured I would share my knowledge with those of you that may be in the process of making sense of a big website, and who may be trying to make it more customer focused.

In this post I have referred to tools and practices that I have used within both the NHS and local government. Do not let this put you off if you do not work in the public sector, as these tools can be useful as a reference for the management of any big website.

Historical factors

Large organisations that deliver multiple products and services to varying audiences have historically had large and complex web presences. For example, a hospital may deliver services from podiatry to neurology and everything in between.

The historic presence of the website will have been driven by governing factors (in the case of local authority websites, a statutory need to deliver information online) and will have grown organically over time.

As a result of this it is likely that the website will be more reflective of the organisation’s internal directorial structure, rather than the needs of its customers.

The customer need

Customers will visit a complex website for a variety of reasons, and each customer's need may be different.

For example, they may be seeking advice, they may need to book an appointment, or they may be coming to the premises and need to find the location of a department.

The reasons for a customer visiting a website could be endless; however, one thing remains common regardless of the customer need: all customers visit the website to perform a task.

These tasks can be as simple as seeking information, or as complex as making an online transaction (for example filling in a booking form).

The key point is that every product or service that a large organisation provides exists to aid a customer with the completion of a task.

Identifying customers

Tasks are aimed at fulfilling a customer need, but you may find it hard to identify the tasks that an organisation delivers until you have identified its customers.

A large organisation, such as a local authority, will have many customers, both internal and external to the organisation.

If you were to list all of an organisation's customers in one go, it could prove quite problematic, as you may find that the individual products and services each have several types of customer.

A method that I found useful was to group customers into categories, and then explore these further on a task by task basis.

Just some of the categories of customer that may have an interest in an organisation’s products and services are as follows (customers may sit within multiple categories):

  • Self-service customers
  • Mediators (e.g. carers)
  • Age range (e.g. young people, adults, older people)
  • Health and/or social care issues (may need assistance, may be vulnerable)
  • Peers (e.g. other local authorities)
  • Governing bodies (e.g. Ofsted)
  • Businesses

Identifying tasks

In order to identify tasks, we must first define what we consider to be a task. For the purpose of the exercises in this article, we will define a task as follows:

‘A task enables a customer to perform an interaction against a product or service.’

Getting a handle on products and services

As odd as this may sound, a large organisation may not actually have a handle on all of its products and services. This can be especially true in the world of the public sector, where departments may be operating in silos.

As somebody who is trying to redevelop the website to meet the customer need, you may need to conduct several workshops with key stakeholders to discover the extent of what an organisation delivers.

Local authorities are given help identifying the services they deliver in the guise of the LGSL (Local Government Service List).

Granted, it may take some interpretation, and I suggest you edit the list to suit your organisational needs. However, for every service that the LGSL recommends a council should provide, it also recommends the interactions that should be performed against that service.


The task definition uses the word ‘interactions’, and as an organisation you may have many.

Local authorities are fortunate in that they are given a starting point with regards to tasks in the LGIL (Local Government Interaction List), provided by the ESD (Electronic Service Delivery) toolkit.

It is worth noting that not every organisation is the same, and the interaction list will be unique to the organisation.

In the case of the LGIL a local authority will certainly find that it does not deliver all the interactions on that list, and they may very well have additional interactions which are not mentioned.

You should define your own interaction list, but the LGIL may be a good starting point.

Writing a task

Tasks should be written in such a way that they fully define their purpose. To do this I recommend taking into consideration three elements:

  • The customer (or stakeholder)
  • The interaction
  • The product or service

Let's say that we are writing about a service that a local authority would deliver, such as foster care.

An example of a written task using the elements above would be:

‘As a potential foster carer, I wish to find out about foster care’

Considering the example I gave about the LGIL interaction list, the phrase 'find out about' in the sentence above implies an 'Information Provision' interaction (of course, the interactions you have defined may differ from this list, but the principle remains the same).

Another example, this time using the adoption service could be:

‘As a potential adopter, I wish to apply to adopt a child’

Here the ‘apply’ part of the sentence implies an ‘Applications for service’ interaction.

User stories

You may notice the way the tasks have been written: they take the format of a 'user story' (which you may recognise from agile project management).

The fact that we write our stories like this is no coincidence as this is the management method that I recommend for the development of each task.


As you write tasks you may realise that many customers may wish to perform the same task, and your tasks list could grow out of control. For example, consider the following tasks:

  • As a potential foster carer, I wish to find out about foster care
  • As a current foster carer, I wish to find out about foster care
  • As a child in foster care, I wish to find out about foster care

You may decide that you are going to deliver the same information to the potential foster carer, and the current foster carer, but deliver a different set of information for someone who is in foster care (as they will likely have different questions).

To do this, tasks can be written so that they assume the needs of the majority of customers first (generic), and where the needs of a certain group of customers cannot be fulfilled, additional tasks are written to meet these exceptions.

This means you can simplify the list so that the first two become one task, written as:

I wish to find out about foster care

With that being a generic task aimed at all customers (the potential and current foster carers).

For cases where you may wish to change the process or style of delivery for an interaction (such as the information aimed at people in foster care) you would create a separate task:

As a child in foster care, I wish to find out about foster care

By doing this, we are managing special cases by exception, still considering all customer types, and meeting the needs of those that may need an additional or alternative interaction. This method helps to keep the task list concise.


Because we now have two tasks based around the delivery of one service (with the same interaction), these exceptions can be referred to as subtasks.

Building a task list

The best way to deliver a customer focused website is to assess exactly what it is an organisation offers, the people it offers it to, and the tasks involved in the delivery of those offerings to those people.

A list of tasks will help with this, however just listing them in alphabetical order would be pretty useless.

The simplest thing to do would be to create a spreadsheet that can be used for cross-referencing. Here are the methods I used when identifying tasks on a spreadsheet:

Give each task a unique ID

We need a way of assigning a task ID to each task such that, at a glance, we instantly know what service (or product) it applies to and what interaction it covers, as well as a way to uniquely identify it. A suggested method is the following:

[Product / Service ID]-[Interaction ID]-[Subtask ID]

This breaks down as follows:

Product / Service ID

The ID of the product or service you are referring to (if you don't have product or service IDs, now would be a good time to start assigning them).

Interaction ID

For each interaction you have defined, create an identifier for it (the LGIL gives you some numbers by default as a starting place); this will let you view the interaction type at a glance.

Subtask ID

As we mentioned, exceptions to the general task are called subtasks. This lets us know that this is a subtask, and which subtask it is.

Task ID example

An example of this in practice (in a local authority setting) would be for the task

‘As an adopted child, find out about your birth parents’

This may have an ID of 160-8-2 which can be broken down as follows:

160, because this task has a service ID of ‘Adoption’ in the LGSL.

8, because this task has an interaction ID of ‘Providing information’ in the LGIL.

2, because this is a subtask of this particular service and interaction type (it just so happens that this is the second subtask that was identified in this area).

Obviously you would use the relevant IDs for your organisation.
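The scheme can be sketched in a couple of lines of shell; the IDs below are the ones from the adoption example, and the variable names are purely illustrative:

```shell
# Compose a task ID from [Product / Service ID]-[Interaction ID]-[Subtask ID]
service_id=160      # 'Adoption' in the LGSL
interaction_id=8    # 'Providing information' in the LGIL
subtask_id=2        # second subtask identified for this service/interaction

echo "${service_id}-${interaction_id}-${subtask_id}"  # prints 160-8-2
```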

Defining other spreadsheet fields

Now that you have your tasks defined (in the form of user stories), and you have given each one a unique identifier, you can start to populate the rest of the spreadsheet. Before you can do this you need to start thinking about Information Architecture (IA).

Information Architecture

For a first pass at the IA, one approach you could take is a 'first fit' card sorting exercise. To do this you would need to sort through the entire list of tasks and group them into logical groups of related content (don't worry, this will not represent your final IA structure).

You should then ask a group of other people to do the same thing (you will get differing results). This group should be representative of your customer base. Try to run this as a led workshop where possible, and make sure the group is manageable (no more than, say, 10 people).


During the card sorting process you will find that certain tasks could fit into more than one group.

Because polyhierarchical navigation can often be confusing to a customer journey, it is recommended that rather than including tasks in multiple groups it would be better to have a mono-hierarchical Information Architecture and identify a primary group for each task.

You should also let your customers have a different 'view' of the tasks depending on their need. So, to allow customers to view all tasks that relate to a given group, or all tasks that are associated with a customer type, you should identify the secondary groups that each task belongs to (where appropriate).

Building a related task list

We now have all of our tasks written as user stories, a unique ID, a list of customer groups and a list of related groups of information.

Now we need to plot the primary and secondary locations on this task list. You might do this in the following way:

| Task   | ID    | Group 1   | Group 2   | Group 3   | Customer 1 | Customer 2 | Customer 3 |
|--------|-------|-----------|-----------|-----------|------------|------------|------------|
| Task 1 | x-x-x | Primary   | Secondary |           | Secondary  |            |            |
| Task 2 | x-x-x | Secondary | Primary   |           | Secondary  |            |            |
| Task 3 | x-x-x | Primary   |           | Secondary | Secondary  |            |            |
| Task 4 | x-x-x | Primary   |           | Secondary | Secondary  |            |            |
| Task 5 | x-x-x | Primary   |           | Secondary | Secondary  |            |            |

You will note that the customer groupings are always secondary. This is because your customers will not always know what type of customer they are, but they will know what they are looking for, so the group (or topic) is the primary area in which they will look to complete their task.

Here is an example of what a typical local authority task list may look like:

Task list

The above example uses 'R' for related, instead of 'secondary'.

Congratulations, once you have completed this stage, you will now have a development roadmap.

Development roadmap

This 'first fit' IA forms the basis of a development roadmap. The roadmap identifies all of the 'content groups' that need developing, along with any initial form developments.

Forms and applications

Form and application developments are identified by the tasks whose interaction is something other than information provision (e.g. applications, reporting functionality, etc.).

Roadmap purpose

This roadmap has several purposes:

  • To identify groups of work, so that work can be scheduled appropriately
  • To monitor the progress of work that has commenced (including any additional tasks that have been created as part of the development process)
  • To identify form/application developments, and keep track of their progress
  • To act as a single point of reference for all the tasks (and their mappings) within the website

Development process

Now we can divide up our content groups and take them to our stakeholder groups. You will find that multiple stakeholders have an interest in each content group, so it is best to run these sessions as a workshop.

For example, for the fostering section that we have been using, there may be several stakeholders that you wish to invite to a workshop, such as:

  • A subject matter expert from the children's services team
  • A foster carer
  • A customer interested in becoming a foster carer
  • Someone who is in foster care
  • Someone who has left foster care
  • Someone from social services (who may have some crossover services, identified in the roadmap)

The goal of the workshop should be to refine the tasks that have been identified, and put some structure into the tasks. Goals you should have are to:

  • Give the tasks a meaningful title
  • Identify any stakeholders that may have been missed (and to create tasks for these stakeholders)
  • Identify tasks that are not delivered
  • Identify tasks that have been missed
  • Identify subtasks that may have been missed
  • Find out the order that tasks are most likely to be completed (eg you would read how to become a foster carer, before applying to become a foster carer) by simulating customer journeys
  • Find the starting points (the first task selected) for each journey (the common starting points, assuming we have no previous analytics, will become the top tasks).

After the workshop (you may need more than one, depending on the amount of change identified) you should have a good idea of how that section can be built up.

Tasks for everything

You have built a powerful tool in the form of the development roadmap. At a glance you can identify everything on the website, and you know its status.

When you are building the web sections, you may identify that you need things such as a landing page for a section, or a page for signposting. Do not be tempted to just create a page with no task ID when creating these pages.

Every page fulfils a task, even if that task is ‘As a customer I need signposting to the product that is most fitting for my needs‘.

Without a task, and a way to track that task (your development roadmap), your large and complex website will quickly fall back into disarray.

Would you like to know more?

If you would like to know more about the above article, or if you are interested in the concepts discussed, why not get in touch, or leave a comment below?

Debug PHP with Vagrant using Xdebug and Sublime Text


Xdebug logo

Following hot on the heels of my recent post about getting started with Vagrant, I figured I would let you know how you can debug the PHP that is running on your Vagrant box with Xdebug, using your host IDE (in this case, everybody's favourite editor, Sublime Text).

Xdebug will allow you to place breakpoints in your PHP code and step through it, letting you see how your variables are set. Gone are the days when you have to dump variables to the screen to see what they are set to.

We are going to do a little bit of command line and bash scripting, but as always, I am going to explain what all those scary commands mean (and hey, if you still don’t get it, that is what copy and paste is for right?).

In this guide we are going to:

  • Edit our Vagrant provision file (I set this up in the previous tutorial) to automatically install and configure Xdebug
  • Setup Sublime Text to work with Xdebug on Vagrant
  • Install an extension for Chrome that makes working with Xdebug easier.

It is worth noting that I am using Mac OS X, so my examples will be targeted towards that particular system. I'm also using iTerm 2 as my terminal program, and I am using Sublime Text 3 beta.

Installing Xdebug with Vagrant

If you remember the previous tutorial, we installed everything we needed for our Vagrant LAMP stack with the following line in our Shell provisioning file:

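Assuming the standard Ubuntu 12.04 package names, that line looks something like this:

```shell
# Install Apache, MySQL and PHP in one command (package names are
# the standard Ubuntu 12.04 ones and may differ in your setup)
sudo apt-get install -y apache2 mysql-server php5 libapache2-mod-php5 php5-mysql
```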
This bit of code installs MySQL, Apache and PHP.

To install Xdebug, we need to install something called PEAR, which stands for PHP Extension and Application Repository. In other words it will let us install handy little extras for PHP.

As well as PEAR, we are also going to install the php5-dev package, which will let us add modules to PHP, and the build-essential package, which will let us run the commands required to build the Xdebug package, in particular the ‘make‘ command, which is used to compile code.

Let’s add them into our provisioning file so they get installed in Vagrant:

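Assuming the stock Ubuntu package names, the additions look something like this:

```shell
# PEAR (which provides pecl), PHP headers, and build tools for compiling Xdebug
sudo apt-get install -y php-pear php5-dev build-essential
```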
We can now install Xdebug with the command pecl install xdebug. The command pecl got installed with PEAR, and stands for PHP Extension Community Library. We also need somewhere to store our Xdebug logs, so let's do that with the following block of code, placed after we install our packages:

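A sketch of that block:

```shell
# Create a log directory for Xdebug and let Apache (www-data) write to it
mkdir /var/log/xdebug
chown www-data /var/log/xdebug

# Install Xdebug as a super user via PECL
sudo pecl install xdebug
```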
This code makes a directory called xdebug in the /var/log/ folder. It then changes the owner of that directory (chown) to www-data so that Apache can write to it. Finally we install Xdebug as a super user using sudo (super user do), with the line sudo pecl install xdebug.

There we go, Xdebug will be installed the next time we provision our Vagrant box. Are we done here? Not quite, we need to configure PHP to use it.

Configuring PHP to use Xdebug

In order to get PHP to work with Xdebug, we need to tinker with the php.ini file that comes installed with PHP. I will show you how to do that in just a moment, but first let me explain what we need to add to that file:

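The additions look roughly like this (the exact directives depend on your Xdebug version, and the module file name xdebug.so is an assumption):

```ini
; Load the Xdebug module (path varies from instance to instance)
zend_extension=[enter path here]xdebug.so

; Allow a remote IDE (our host's Sublime Text) to connect on port 9000
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.remote_port=9000
xdebug.remote_log=/var/log/xdebug/xdebug.log
```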
The first thing you will notice is that we haven't put a path in for the zend_extension parameter (denoted by the [enter path here] placeholder). This is because this path is variable (it changes from instance to instance), but it should look a little something like /usr/lib/php5/20090626+lfs/

Fortunately for us, we can find this path with the shell command find / -name '' 2> /dev/null. So we will, when we put it in our script in a little bit.

That path tells PHP where our Xdebug install is.

I will not go into the above script in detail, but the important parts are that we are telling PHP that we are enabling remote debugging (so we can get at it from our host's instance of Sublime Text), and that it should do this on port 9000, which is where our remote IDE environment (Sublime Text) will be listening.

Automating the configuration of php.ini

Like I said, we are going to make Vagrant do all of this configuration for us. We will do this with the following chunk of code:

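Roughly, and assuming the module file is named xdebug.so:

```shell
# Locate the compiled Xdebug module and append the settings to php.ini
echo "zend_extension=$(find / -name 'xdebug.so' 2> /dev/null)" >> /etc/php5/apache2/php.ini
echo "xdebug.remote_enable=1" >> /etc/php5/apache2/php.ini
echo "xdebug.remote_connect_back=1" >> /etc/php5/apache2/php.ini
echo "xdebug.remote_port=9000" >> /etc/php5/apache2/php.ini
echo "xdebug.remote_log=/var/log/xdebug/xdebug.log" >> /etc/php5/apache2/php.ini
```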
That block of code does exactly what we said to the php.ini file, with the exception that echo '[code goes here]' >> /etc/php5/apache2/php.ini writes the line (as a new line) to the php.ini file, which in this instance is located at /etc/php5/apache2/php.ini.

The only other point you will notice is that we automatically figure out where our file is by adding the command find / -name '' 2> /dev/null into our script here:

By wrapping it in the $(...) we are telling our script to run the command, not just echo it.

Before we are finished though, we don't want to write to php.ini every time we provision our Vagrant install (using vagrant provision).

In our provisioning file we have a handy if statement that checks to see whether we have made /var/www/ into a symbolic link, with the following code (if you don't know what I'm talking about, have a look at the previous tutorial):

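That check looks something like this (the /vagrant path is Vagrant's default synced folder):

```shell
# Only true on the first provision, before /var/www has been replaced
# with a symbolic link to the synced folder
if ! [ -L /var/www ]; then
  rm -rf /var/www
  ln -fs /vagrant /var/www
fi
```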
We are only going to do that once, so let's put the code that writes to php.ini inside it.

Our finished provisioning script

Here is our final file that now installs Xdebug and configures the php.ini:

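Pulling the pieces together, a provisioning script along these lines would do the job (the package names, the xdebug.so file name, and the omission here of the WordPress steps from the previous tutorial are all assumptions; adjust to taste):

```shell
#!/usr/bin/env bash

# Preseed the MySQL root password so the install runs unattended
echo "mysql-server mysql-server/root_password password mypass" | sudo debconf-set-selections
echo "mysql-server mysql-server/root_password_again password mypass" | sudo debconf-set-selections

# LAMP stack plus the packages needed to build Xdebug
sudo apt-get update
sudo apt-get install -y apache2 mysql-server php5 libapache2-mod-php5 php5-mysql \
  php-pear php5-dev build-essential

# Somewhere for Xdebug to write its logs
mkdir /var/log/xdebug
chown www-data /var/log/xdebug

# Install Xdebug itself
sudo pecl install xdebug

# First provision only: link the web root and configure php.ini
if ! [ -L /var/www ]; then
  rm -rf /var/www
  ln -fs /vagrant /var/www

  echo "zend_extension=$(find / -name 'xdebug.so' 2> /dev/null)" >> /etc/php5/apache2/php.ini
  echo "xdebug.remote_enable=1" >> /etc/php5/apache2/php.ini
  echo "xdebug.remote_connect_back=1" >> /etc/php5/apache2/php.ini
  echo "xdebug.remote_port=9000" >> /etc/php5/apache2/php.ini

  sudo service apache2 restart
fi
```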
Now to get it all up and running, navigate to the root of your project (for example: cd path/to/your/project) and enter the following command:

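Given that we want the box built (or rebuilt) with the new provisioning, the command is presumably:

```shell
vagrant up
```

If your box is already running, `vagrant provision` (or `vagrant reload --provision`) will re-run the script instead.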
Yup, that script installed WordPress too (and if you still don’t know what I’m talking about, have a look at the previous tutorial).

Configuring Sublime Text to use Xdebug

Firstly, if you haven’t already, go ahead and install Sublime Text, with the handy installer (phew, a bit of a rest from command line).

Sublime Text has a really handy feature where you can run commands with the shift + cmd + p key combination. This will fire up a prompt box, and as you start typing you will notice lots of commands that you can take advantage of. One of these will let you install plugins, or at least it would if we had already installed the package that lets us do that (Package Control), so let's go and do that first.

Go ahead and install Package Control by using their handy guide.

Once you have done this you can install the Xdebug Client for Sublime Text by hitting the shift + cmd + p key combination and starting to type install.

Installing a package using Package Control

Hit the enter key and you will be given another prompt. This time start typing xdebug to be presented with the following screen:

Installing the Xdebug Client

Hit enter again, and you will have installed the Xdebug Client for Sublime Text (you may have to restart Sublime Text for the changes to kick in).

We should now have a new addition to our menu: under Tools there is now an options menu for Xdebug.

Xdebug context menu

Configuring the Xdebug Client

Next up, we need to make sure that our Xdebug Client is configured correctly. There are a few ways to do this, but as we potentially will have many instances of Vagrant running, it is probably best to do the configuration on a per-project basis.

So let’s create a project in Sublime Text by using the contextual menu to go to Project > Add Folder to Project…

Add Folder to Project context menu

Once you have done this your Sublime Text window will now have a sidebar that contains all of the project files.

The project sidebar

We still haven’t made a project though. To do this go to Project > Save Project As… and save the project file in the root of your project folder.

You should now have a file with an extension of .sublime-project. Open this up.

Your project folder

Here is where we are going to make our changes.

Here we have a file that consists of JSON; it looks like the following:

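A freshly saved project file contains little more than the folder you added; something like this (the path shown is an illustrative placeholder):

```json
{
    "folders":
    [
        {
            "path": "/path/to/your/project"
        }
    ]
}
```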
We are going to add the following code into this:

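The addition is a settings block for the Xdebug Client package, roughly as follows (the url and path_mapping keys are that package's configuration options; path_mapping maps the path on the Vagrant box to the path on your host):

```json
"settings":
{
    "xdebug":
    {
        "url": "http://localhost:8080/",
        "path_mapping":
        {
            "/var/www/": "/Users/[your user name]/[path to your project]/"
        }
    }
}
```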
Well, almost that exact code; you will notice the [your user name] and [path to your project] holding text. Change these so they match your setup. For example, they might be configured as something like /Users/mattwatson/dev/my-project-name/public. I use my own project path in the full example below; you will need to change this.

Putting it all together we should have a file that looks similar to the following:

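With my project path in place, the whole file looks roughly like this:

```json
{
    "folders":
    [
        {
            "path": "/Users/mattwatson/dev/my-project-name/public"
        }
    ],
    "settings":
    {
        "xdebug":
        {
            "url": "http://localhost:8080/",
            "path_mapping":
            {
                "/var/www/": "/Users/mattwatson/dev/my-project-name/public/"
            }
        }
    }
}
```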
Save the file, and open it again using Sublime Text (by double clicking on it in your finder). It should launch the project.

Testing it works

If you have read the previous tutorial, you should know that you can now hit http://localhost:8080 and view the start page of the WordPress installer, so go ahead and do that.

WordPress installer screen

Nothing should happen just yet, because we haven’t yet done two things:

  • Started debugging
  • Added any breakpoints to our project

In Sublime Text, go ahead and navigate to /public/index.php and go down to the first bit of PHP code you can see on that page (in this case line 14). You can now right click (or option + click if that is your kind of thing, you Mac puritan you) to view the contextual menu that will let you add a breakpoint.

Inserting a breakpoint

To insert a breakpoint I prefer to hit command + F8 though (well, because I’m using a MacBook I have to hit function + command + F8, otherwise I will just turn the volume down or something).

Anyway, when you have done that, you should end up with a breakpoint on line 14.

File with breakpoint

We can now either in Sublime Text go to Tools > Xdebug > Start Debugging, or hit command + F9 to fire up our debugger.

Running the debugger

You will notice the new panels in Sublime Text that are going to present us with our debug information, and that the Xdebug Breakpoint window is telling us about the breakpoint we placed on line 14.

In your browser, go to http://localhost:8080/?XDEBUG_SESSION_START=sublime.xdebug. That query string tells Xdebug that we are ready to start debugging. We can remove the string after we have run it once, as it gets stored in a session.

If we look at Sublime Text now, you will notice that the breakpoint we added is highlighted yellow. That is because PHP has paused where our breakpoint is.

Breakpoint hit

If you look at the Xdebug Context window you can now see all the variables that have been defined so far.

If you use your arrow keys, you can move your cursor over these arrays and view their contents in the debug window.

Debug window

There you go, you can now debug your WordPress code using Sublime Text.

Easier debugging with the Chrome Extension

One last thing: that query string we have to enter (http://localhost:8080/?XDEBUG_SESSION_START=sublime.xdebug) is a little awkward, don't you think?

If you are using Chrome then there is an extension you can install to tidy that up a little bit.

In the Chrome Web Store, search for 'Xdebug'. You should find an extension called 'Xdebug helper'. Go ahead and install this.

Xdebug helper

I will not go into the details of how to install and configure this, as you can find it all right there in the Chrome Web Store. But essentially this extension will give you a little 'bug' in your browser bar that you can click to enable debugging.

Xdebug helper in browser

This sets the session, so you don’t have to muck about with all of that query string nonsense.

Getting started with Vagrant



So after my tutorial on how you can use Grunt to get everybody in your team on the same page when compiling SCSS and JavaScript, I figured I would delve into the world of distributable development environments.

Vagrant does just that: it lets you configure a development environment that you can distribute across platforms and team members. The good thing about that is that the environment will be identical for everyone, thus ending the need for anyone to say 'it works on my box' ever again.

Best of all, you don't have to log in to the development environment that Vagrant creates. It leaves your files where you created them on your host machine, so you can still edit them with your favourite text editor while they are running in the virtual machine, and you will be able to use the browser on your host machine to view the development environment, thanks to all the clever routing that Vagrant puts in place.

For this guide we will get down and dirty with the command line, and even do some bash scripting, but don't worry, I'll be right there to hold your hand and explain what all those scary commands mean.

In this guide we are going to:

  • Install Vagrant
  • Install a Vagrant VM (Virtual Machine), called a ‘box’
  • Install a simple LAMP (Linux, Apache, MySQL and PHP) stack on that box
  • Install WordPress on the box
  • View our WordPress installation by navigating to http://localhost:8080

It is worth noting that I am using Mac OS X, so my examples will be targeted towards that particular system. I'm also using iTerm 2 as my terminal program.


Installing VirtualBox

Vagrant needs to harness the power of virtualisation software. There are a few choices available, but because it is nice and free, we are going to use VirtualBox by Oracle. Fortunately for us, we don't have to get our hands dirty just yet. Just run the installer for your machine, and while you are at it, go ahead and install the extension pack from the same page.


Installing Vagrant

Now to install the programme of the hour, Vagrant. Again, no mess, no fuss, just go and get the Vagrant installer for the platform of your choice.


Initialising Vagrant

Ok, time to delve into that command line (hold my hand if you need to). First of all you will need to use the terminal to navigate to the root of your project (for example: cd path/to/your/project), and run the following command:

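As the caption of the screenshot below suggests, the command is simply:

```shell
vagrant init
```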
That just created a file in the root of your project called ‘Vagrantfile‘.

The result of 'Vagrant Init'

If you are using version control such as git, don't worry about excluding this file; it is designed to be included as part of your project so that anyone working on the project will benefit from the same environment.

Go ahead, navigate to your project folder and take a look at the new Vagrantfile. The file is written using Ruby syntax, but don’t worry about that, we are only going to tinker with it a little bit.


Get a box

I’m removing the comments for readability, but that script reads as follows:

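Stripped of comments, the default Vagrantfile boils down to something like this (the box line shown is Vagrant's standard config.vm.box setting):

```ruby
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # The box this environment will be built from
  config.vm.box = "base"
end
```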
The line VAGRANTFILE_API_VERSION = "2" sets a variable (to a string of 2). The line Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| passes the variable that was defined in the first line into the Vagrant.configure() function. It also sets us up to ‘do‘ stuff with the config property of the Vagrant.configure() function, until we end it with the last line.

There is a line in the middle = "base", and that is where we are going to do our tinkering.

First let's change the line = "base" to:

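Assuming the standard name of the 32-bit Ubuntu 12.04 box of the time, that would be:

```ruby
config.vm.box = "precise32"
```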
This means that we are going to be using the Linux distribution Ubuntu 12.04.3 LTS (Precise Pangolin) (the 32-bit build). But we need to get that from somewhere, so look down your Vagrantfile until you come across this commented out piece of code:

Let's uncomment that line and change the path to:

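At the time, the official box was hosted on the Vagrant servers, so the line would read something like:

```ruby
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
```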
That line of code will go and grab that particular blend of Ubuntu, in a ready configured ‘box’, right from the Vagrant servers for us.

We are not done yet though. Keep hunting down the code until you find the following line, and go ahead and uncomment it:

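In the stock Vagrantfile, that line is the forwarded port example:

```ruby
config.vm.network :forwarded_port, guest: 80, host: 8080
```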
That line of code will forward port 8080 on your host machine to the standard port 80 on the box. So you will be able to go ahead and visit http://localhost:8080 and view your website.

Finally, in the commented Vagrantfile you will notice a bunch of commented out things called provisions. One is for Chef, another for Puppet.

These are scripting languages that will let us install things onto our box after we have got it up and running. They are worth exploring in the future, but for now we are going to add our own ‘shell’ (yup, command line) provision, because I think it is the simplest thing to get up and running with without us having to learn an extra load of syntax on top of what we are already doing.

At the bottom of the code, just before the end add the following line:

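Assuming the script is called provision.sh (the file name here is an illustrative assumption; use whatever you name yours), the line would look like:

```ruby
config.vm.provision :shell, :path => "provision.sh"
```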
This means that we are going to use a shell script to install all the goodies that we need, and its path is in the root of our directory and is called ‘‘.

We haven’t made that file yet, so in a moment we will go ahead and do that.

One more thing though: if you want to follow this tutorial to the bottom and install WordPress, those of you in the know will notice that WordPress has a habit of wanting to create files such as wp-config.php when it is running the installer. To allow WordPress to do this we are going to have to tinker with the permissions for the Vagrant synced folder (the root directory of our project within the virtual machine).

To do this add the following line of code before the end statement:
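A sketch of that synced folder line with the permissions described below (the dmode/fmode mount options are my assumption of how they were set):

```ruby
# loosen permissions so the WordPress installer can write files
config.vm.synced_folder "./", "/vagrant",
  mount_options: ["dmode=777", "fmode=666"]
```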

This sets the default directory permissions (dmode) to 777, which means that everyone has full control of the folder. It also sets the default file permissions (fmode) to 666, which means everyone can read and write, but not execute, files.

Note to security experts: I am sure there is a nicer way to do this, and if so hit me up in the comments. But don't worry too much: we can change these permissions in the Vagrantfile after our initial setup, and the changes will be implemented the next time we run vagrant up.

Our full Vagrantfile (minus the comments) should now look a little like the following:
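A hedged reconstruction of that full file, assembled from the snippets above (box name, box URL and script name are my assumptions):

```ruby
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.provision :shell, path: "provision.sh"
  config.vm.synced_folder "./", "/vagrant",
    mount_options: ["dmode=777", "fmode=666"]
end
```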

Shell script (building the LAMP stack)

In the root of your project make a new shell script file, giving it the same name that you passed to the shell provision in your Vagrantfile.

We are fortunate that PHP, MySQL and Apache are all available from Ubuntu's package repositories, so we don't have to add any extra package sources before we install them. What we do need to do is configure a password for MySQL before we can install it. To do this, add these two lines to the script:
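The two lines in question look something like this (mypass is just the example password used throughout this post):

```shell
# pre-seed the MySQL root password so the installer doesn't prompt for it
echo 'mysql-server-5.5 mysql-server/root_password password mypass' | sudo debconf-set-selections
echo 'mysql-server-5.5 mysql-server/root_password_again password mypass' | sudo debconf-set-selections
```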

These lines of code set the root password of MySQL (twice actually, an initial time and then again for confirmation) to ‘mypass‘ (you can change this to your choice of password).

In the code, ‘sudo‘ means do something as a super user, and debconf-set-selections lets us pre-seed answers into debconf, the Ubuntu package configuration system. The rest of each line simply sets the password and the confirmation for MySQL.

Now we can install our LAMP stack. To do that, add the following lines of code to the script:
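Those lines, exactly as described below:

```shell
# refresh the package source list, then install the LAMP stack unattended
sudo apt-get update
sudo apt-get -y install mysql-server-5.5 php-pear php5-mysql apache2 php5
```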

In the code above the apt part of apt-get stands for ‘Advanced Packaging Tool’. The command apt-get update will make sure that the source list for the apt-get tool is up-to-date.

The next line sudo apt-get -y install mysql-server-5.5 php-pear php5-mysql apache2 php5 means that we are going to install the packages mysql-server-5.5 php-pear php5-mysql apache2 php5 using apt-get. The -y means that we are going to automatically answer yes to any prompts that may come up.

Exposing the Vagrant folder

By default the box will go ahead and serve files from its /var/www folder. That isn't much good to us, because we want to be able to edit our files using our host machine. The files from our project directory (the one with your Vagrantfile at its root) are available at /vagrant on the guest box.

To be able to expose these files when we hit http://localhost:8080 we need to make a symbolic link between the /var/www folder and the folder that you want to be served (in most cases the /vagrant folder).

You don't have to, but I am actually going to create a public directory within the /vagrant folder that Apache will use to serve files. That way the project will be configured so that the script containing your root MySQL password sits outside of that public directory, in the root /vagrant folder.

To do this, we need to enter the following code into the script:
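A sketch of that block, reconstructed from the line-by-line explanation that follows:

```shell
# only do this on first provision, when /var/www is not yet our symlink
if [ ! -h /var/www ]; then
    mkdir /vagrant/public
    sudo rm -rf /var/www
    ln -s /vagrant/public /var/www
    a2enmod rewrite
    sed -i '/AllowOverride None/c AllowOverride All' /etc/apache2/sites-available/default
    service apache2 restart
fi
```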

What on earth does all that mean? Well, I'll tell you. First of all we have an if block of code.

That looks something like this:
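In its bare form (the path here is just a throwaway example):

```shell
# the general shape: if the test in the brackets is true, run the body, until fi
if [ ! -h /tmp/no-such-link ]; then
    echo "condition was true"
fi
```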

That means if whatever is in the square brackets [] is true, then do something until you get to the end of the if statement (which is fi — if backwards — in this language).

So what does that if [ ! -h /var/www ] part of it mean? Well, /var/www is the folder on the virtual machine that serves our web files. The -h checks if the file exists and if it is a symbolic link (or a shortcut to another file, for those more familiar with that term). Finally the ! means not, so reverses a false to a true and vice versa. So basically the statement will be true if the file does not exist or is not a symbolic link (phew!).

Fortunately the next lines are a little bit simpler. The line mkdir /vagrant/public simply means make the directory /public within the /vagrant folder.

We then remove (rm) the directory /var/www with the line sudo rm -rf /var/www. The switches mean force this to happen without confirmation (-f) and recursively remove any sub-directories (-r).

The magic happens with the line ln -s /vagrant/public /var/www. This means make a link (ln) that is symbolic (-s) from the directory /vagrant/public and call it /var/www (essentially makes a shortcut). With this line of code, the files that are in our project folder (specifically the ones in the newly created /public folder) will be served by apache when we hit http://localhost:8080.

The line a2enmod rewrite turns on the rewrite module of apache.

The line sed -i '/AllowOverride None/c AllowOverride All' /etc/apache2/sites-available/default edits (sed is the stream editor command) the file /etc/apache2/sites-available/default in place (with the -i switch). It changes the line AllowOverride None to AllowOverride All. We do this so that Apache rewrites can work.

Finally we restart apache with the line service apache2 restart.

We are putting our statements within the if statement so that if we run the code vagrant provision (to re-provision the Vagrant install) it doesn’t try to do all of the above again.

Installing WordPress

To be honest, unless you want to use WordPress, you can go ahead and enter vagrant up into your terminal window now, and whatever index file you put in your project's new /public folder will be served at http://localhost:8080.

But, because we can, and because I love WordPress, let's get the latest stable build of WordPress and put it in the /public folder. To do this, you need to add the following lines of code to your script:
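A sketch of those lines, pieced together from the explanation below (the wordpress.org latest.tar.gz archive is the well-known location of the latest stable build):

```shell
# only fetch WordPress if it hasn't already been unpacked
if [ ! -d /vagrant/public/wp-admin ]; then
    cd /vagrant/public
    wget https://wordpress.org/latest.tar.gz
    tar xvf latest.tar.gz
    mv wordpress/* ./
    rmdir ./wordpress/
    rm -f latest.tar.gz
fi
```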

This code is also wrapped in an if statement. It checks that the directory /vagrant/public/wp-admin does not exist (by using ! -d before it).

Once we are sure that the directory doesn't exist, we change the directory (hey, that's what cd means, who knew?) to the /vagrant/public folder.

We then grab the latest stable release of WordPress (the latest.tar.gz archive from wordpress.org) with the command wget.

We unzip it using tar. The switch x means extract the archive, v means print the output to the terminal window, and f means read from the file we specified (which in this case is latest.tar.gz).

So far, the script will have put everything in a /wordpress folder within the /public folder. The next line moves (mv) all the files within that folder (wordpress/*) to the root of the current folder (./).

We now do a little bit of tidying up, by removing the WordPress directory using rmdir ./wordpress/, and forcing the removal of the .tar.gz file with the command rm -f latest.tar.gz.

Creating the database

We are almost there. We just need to create our initial WordPress database so that we can install WordPress without much fiddling about on our new box. To do this run the following code:
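A sketch of that block, using the credentials summarised below (swap mypass for your own root password if you changed it):

```shell
# run the initial WordPress database setup exactly once
if [ ! -f /var/log/databasesetup ]; then
    mysql -u root -pmypass -e "CREATE DATABASE wordpress"
    mysql -u root -pmypass -e "CREATE USER 'mywpuser'@'localhost' IDENTIFIED BY 'mywppass'"
    mysql -u root -pmypass -e "GRANT ALL PRIVILEGES ON wordpress.* TO 'mywpuser'@'localhost'"
    mysql -u root -pmypass -e "FLUSH PRIVILEGES"
    touch /var/log/databasesetup
fi
```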

The code above checks to make sure that a file called /var/log/databasesetup does not exist with ! -f.

We then run a few SQL commands with the mysql command. The user is root, which is entered with -u root (-u means user), and the password for root is the one we defined at the start of the script as mypass (you may have entered your own root password). Combine this with -p (for password), making the command in this case -pmypass.

The part that lets us run our SQL commands is the -e switch, which essentially means execute the following SQL string in the mysql console.

I will not explain all the SQL to you in detail, but a quick summary of what we are doing in this part is as follows:

  • Create a database called ‘wordpress‘
  • Create a user called ‘mywpuser‘ with the password ‘mywppass‘
  • Grant all privileges on the database ‘wordpress‘ to ‘mywpuser‘
  • Flush the privileges

The final line touch /var/log/databasesetup will create the (empty) file specified, so that this block is skipped the next time the script runs.

Putting it all together

The final script should look a little something like the following:
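A hedged reconstruction of the complete provision script, assembled from the pieces above (script name, URLs and passwords as assumed earlier):

```shell
#!/usr/bin/env bash

# pre-seed the MySQL root password
echo 'mysql-server-5.5 mysql-server/root_password password mypass' | sudo debconf-set-selections
echo 'mysql-server-5.5 mysql-server/root_password_again password mypass' | sudo debconf-set-selections

# install the LAMP stack
sudo apt-get update
sudo apt-get -y install mysql-server-5.5 php-pear php5-mysql apache2 php5

# point Apache at /vagrant/public (first provision only)
if [ ! -h /var/www ]; then
    mkdir /vagrant/public
    sudo rm -rf /var/www
    ln -s /vagrant/public /var/www
    a2enmod rewrite
    sed -i '/AllowOverride None/c AllowOverride All' /etc/apache2/sites-available/default
    service apache2 restart
fi

# fetch and unpack WordPress (first provision only)
if [ ! -d /vagrant/public/wp-admin ]; then
    cd /vagrant/public
    wget https://wordpress.org/latest.tar.gz
    tar xvf latest.tar.gz
    mv wordpress/* ./
    rmdir ./wordpress/
    rm -f latest.tar.gz
fi

# create the WordPress database and user (once)
if [ ! -f /var/log/databasesetup ]; then
    mysql -u root -pmypass -e "CREATE DATABASE wordpress"
    mysql -u root -pmypass -e "CREATE USER 'mywpuser'@'localhost' IDENTIFIED BY 'mywppass'"
    mysql -u root -pmypass -e "GRANT ALL PRIVILEGES ON wordpress.* TO 'mywpuser'@'localhost'"
    mysql -u root -pmypass -e "FLUSH PRIVILEGES"
    touch /var/log/databasesetup
fi
```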

To get it all working, in your terminal program, you need to cd to your project root directory (the place where you ran vagrant init), and enter the following command:
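That command:

```shell
vagrant up
```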

You may have to wait a while, and there will be a lot of output. But eventually you will end up with an output similar to the following:

'vagrant up' - output

That last bit of output was the unzipping of the WordPress files.

Now go ahead and navigate to http://localhost:8080 with your usual browser. Sure enough, there is our WordPress installer screen waiting for us to configure WordPress with our newly created username and password.

WordPress installer screen

This guide really only scratches the surface of what is possible with Vagrant. I will be doing more Vagrant work in the future, and I am pretty sure I will blog about it.

Stopping and starting

You should of course go read up on Vagrant to learn some more commands, but before you go I will give you a couple of pointers that should keep you busy for a while.

You can stop your VM at any time by going to your project folder and entering the command vagrant halt in your terminal.

If you have really, really had enough, you can get rid of the virtual machine by going to your project folder and typing vagrant destroy. By using the script that I have provided, none of your files will be deleted (but you will lose all of your WordPress SQL data). This means you will still have that /public folder in your project.

Start Vagrant again at any time by going to your project folder and entering vagrant up.

Compile SCSS and JavaScript with Grunt


In this post, I dive into my experience of installing and running Grunt.

In my build of grunt I will do the following:

  • Use compass to compile and compress my SCSS files into CSS
  • Compile and compress my JavaScript files
  • Use JSHint to warn me of JavaScript errors
  • Do all of the above automatically when a file is changed

Grunt relies on you using the command line. If you are not comfortable with this then don’t worry, I’ll try to be gentle.

According to the official ‘read me’ file Grunt is a task based command line build tool for JavaScript projects.

However it is much, much more than this. You can use it for concatenation, file compression and unit testing, for CSS, PHP and other languages too, all in a handy little package that will work cross-platform and across your team.

One last thing before we jump in: this guide is for people using Mac OS X.

Installing grunt

First things first, grunt is built upon node.js so, if you haven’t already, go ahead and install that. Don’t worry terminal newbies, there is a nice installer file available.

Now that you have node.js installed you can run the Node Package Manager (npm) terminal command to install grunt. Open your terminal window and type the following:
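The command in question:

```shell
sudo npm install -g grunt-cli
```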

The command sudo lets you do something as a super user. Typing this in means that you will be prompted for your password to continue (super user do).

The next part is the npm command which runs node package manager.

The part install -g will install the package globally, and finally grunt-cli is the package name for grunt (Grunt Command Line Interface).

When you have run the command you should see something similar to the following happen in your terminal window (note: I'm using iTerm 2 as my terminal program).

Grunt being installed

So all is installed and well with the world right? Well, not quite. Unless you want the dreaded error: ‘Fatal error: Unable to find local grunt‘, you will need to use the terminal to navigate to the root of your project (for example: cd path/to/your/project), and enter the following command:
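That command is a plain local install of grunt (adding --save-dev would also record it in package.json, but that is optional here):

```shell
npm install grunt
```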

This will put the relevant files that are needed to run grunt commands into your project, creating a new ‘node_modules’ folder in the root of your project.

Getting started

To get started we need to add two files to our project in order for grunt to do its business. These are:

  • package.json
  • Gruntfile.js

Grunt uses plugins to perform certain tasks, such as the ones I mentioned at the start of this tutorial.

The first file, package.json, tells grunt what plugins to install, and what version of grunt we are using. Gruntfile.js contains the configuration for each plugin.


Create a file in the root of your solution called package.json. In it, to get you started, add the following code:
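Something like this, using the example name and version discussed below:

```json
{
  "name": "matt-watson-theme",
  "version": "0.0.1",
  "devDependencies": {
    "grunt": "~0.4.2"
  }
}
```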

The name will be the name of the project we are creating. For my example I am building a WordPress theme for my site, so I will call it matt-watson-theme.

The version is the version of the project. As this is my first draft I have set it to 0.0.1.

The devDependencies object is essentially a list of dependencies, the first one being the version of grunt we are using. This tutorial was written using Grunt v0.4.2, so that is the version I have defined in this file.

In this devDependencies object we will also add the following plugin references:

The resulting package.json will look like the following (note: all version numbers written in the file below are the latest at time that this post was published):
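A sketch of that file, with the five plugins this post goes on to use; the exact version numbers below are illustrative assumptions:

```json
{
  "name": "matt-watson-theme",
  "version": "0.0.1",
  "devDependencies": {
    "grunt": "~0.4.2",
    "grunt-contrib-concat": "~0.3.0",
    "grunt-contrib-uglify": "~0.2.7",
    "grunt-contrib-jshint": "~0.8.0",
    "grunt-contrib-compass": "~0.7.0",
    "grunt-contrib-watch": "~0.5.3"
  }
}
```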

Before we move on, and before we can create the Gruntfile itself, we can run the following command in our terminal:
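That command:

```shell
npm install
```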

This will automatically install the dependencies we have defined in our project and put them in the ‘node_modules’ folder at the root of our project.


On its own, the package.json file isn't going to do much. We need to define our Gruntfile. This file references package.json, and goes on to do the hard work.

In the root of your project create a file called Gruntfile.js. As a starter add the following code:
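A minimal starter might look like this:

```javascript
module.exports = function (grunt) {

    grunt.initConfig({
        // read package.json so tasks can use its name and version
        pkg: grunt.file.readJSON('package.json')
    });

};
```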

This simply tells Grunt where the package.json file resides.


Now let's add some configuration to enable the concatenation of JavaScript files into a main.js file.

The following code will configure the grunt-contrib-concat plugin so that the two JavaScript files ‘assets/js/modules/module1.js‘ and ‘assets/js/modules/modules.js‘ compile into ‘assets/js/main.js‘. When they are concatenated, the separator will be a new line ('\r\n'):
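A sketch of that configuration object (to be placed inside grunt.initConfig):

```javascript
concat: {
    options: {
        // join files with a new line between them
        separator: '\r\n'
    },
    dist: {
        src: ['assets/js/modules/module1.js', 'assets/js/modules/modules.js'],
        dest: 'assets/js/main.js'
    }
}
```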

We are almost there, but before we can put it all together we need to tell our Gruntfile to load the relevant plugin, and register a task to run by default. We can do that with this code:
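Something like the following, placed before the close of module.exports:

```javascript
// load the concat plugin and run it when `grunt` is called with no arguments
grunt.loadNpmTasks('grunt-contrib-concat');
grunt.registerTask('default', 'concat');
```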

Now let’s put our three blocks of code together:
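A hedged reconstruction of the combined file:

```javascript
module.exports = function (grunt) {

    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        concat: {
            options: {
                separator: '\r\n'
            },
            dist: {
                src: ['assets/js/modules/module1.js', 'assets/js/modules/modules.js'],
                dest: 'assets/js/main.js'
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.registerTask('default', 'concat');

};
```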

As you can see the configuration object ‘concat‘ is placed into the grunt.initConfig object, and the task registration code goes at the bottom before the module.exports function ends.

To run the code we have added so far, open your terminal window, navigate to your project folder and run the grunt command.

If all is well, you should get the following output in your terminal window:

Using Grunt to Concatenate JavaScript

The code that was in ‘assets/js/modules/module1.js‘ and ‘assets/js/modules/modules.js‘ can now be found in ‘assets/js/main.js‘, separated with a new line.


So we have concatenated our code; let's compress (uglify) it too, to save on precious bandwidth.

To do this, we need to add the following code into our grunt file:
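A sketch of the uglify object (again placed inside grunt.initConfig); the banner template is my reading of the description below:

```javascript
uglify: {
    options: {
        // package name from package.json plus today's date
        banner: '/*! <%= %> <%= grunt.template.today("dd-mm-yyyy") %> */\n'
    },
    dist: {
        files: {
            // minify the concatenated output into main.min.js
            'assets/js/main.min.js': ['<%= concat.dist.dest %>']
        }
    }
}
```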

The code above outputs a banner at the top of your destination file using the package name (taken from the package.json file) and today's date. It then takes the output of the concat object (concat.dist.dest), minifies it and places it in the destination file, which here has been defined as assets/js/main.min.js.

We also need to change the part of the script that loads our plugins and sets the tasks. Change this to the following:
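For example:

```javascript
grunt.loadNpmTasks('grunt-contrib-concat');
grunt.loadNpmTasks('grunt-contrib-uglify');

// run concat first, then uglify, when `grunt` is called
grunt.registerTask('default', ['concat', 'uglify']);
```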

You will note that the registerTask element now contains an array as its second argument, instead of a string. This now contains the names of the tasks ‘concat‘ and ‘uglify‘.

To test this, you can now add the ‘uglify‘ object into the grunt.initConfig object, and modify the tasks at the bottom of the Gruntfile.js, as mentioned previously.

Run the script with the grunt command and your previous two concatenated files should now be both concatenated and compressed within the file you defined (in this case it should be assets/js/main.min.js).


Now let's check our JavaScript files for errors. We can do this with JSHint, which can be found in the handy plugin grunt-contrib-jshint.

We can do this by adding in the following object into our code:
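A sketch of that object:

```javascript
jshint: {
    // lint the Gruntfile itself plus all project JavaScript
    files: ['Gruntfile.js', 'assets/js/*.js', 'assets/js/modules/*.js']
}
```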

The code above will run a JS linter (JSHint) against the file Gruntfile.js, any file ending with .js within the folder assets/js/, and any JavaScript file within the folder assets/js/modules/.

Along with that add the following lines to the tasks area of the script:
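Something like:

```javascript
grunt.loadNpmTasks('grunt-contrib-jshint');
grunt.registerTask('default', ['concat', 'uglify', 'jshint']);
```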

To test the script run the grunt command. If you have followed everything so far you should get at least one error in your terminal, which will be: Use the function form of “use strict”.

Because of the error that we have found, the grunt script will abort (no more tasks will be run after that point).

JSHint Error

Lets fix the error we found by replacing the 'use strict'; with the following piece of code:
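The function form looks like this:

```javascript
(function () {
    'use strict';

    // your module code goes here

}());
```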

Replacing the ‘use strict‘ directive with this function variant will prevent it being repeated multiple times in our concatenated files.

Once you have fixed any errors that are in your code, you should end up with an output similar to the following:

JSHint Pass


As you can see, Grunt is pretty powerful when it comes to working with JavaScript, but how about we use it to compile and compress our SCSS files into CSS?

To do that, we need to configure our Grunt compass plugin by adding the following object into our grunt.initConfig:
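A sketch of the compass object:

```javascript
compass: {
    dist: {
        options: {
            // compile everything in assets/scss/ into compressed CSS
            sassDir: 'assets/scss',
            cssDir: 'assets/css',
            outputStyle: 'compressed'
        }
    }
}
```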

The code takes anything in the assets/scss/ folder (that doesn't begin with an underscore), converts it to CSS, compresses it, and puts it in the assets/css/ folder.

At the bottom of the script file, add the following along with the other tasks:
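For example:

```javascript
grunt.loadNpmTasks('grunt-contrib-compass');
grunt.registerTask('default', ['concat', 'uglify', 'jshint', 'compass']);
```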

Test the file by running the grunt command. If all is well, you should get the following output:

Compass compiled successfully


The final piece of the puzzle is to automate it all. We can do this with the grunt-contrib-watch plugin.

Add the following object to the script:
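A sketch of the watch object:

```javascript
watch: {
    // watch the linted JS files plus all SCSS files
    files: ['<%= jshint.files %>', 'assets/scss/*.scss'],
    tasks: ['concat', 'uglify', 'jshint', 'compass']
}
```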

This tells the script that whenever any of the files configured in the jshint object are changed, or any of the files within assets/scss/ (ending in .scss) are changed, the tasks ‘concat‘, ‘uglify‘, ‘jshint‘ and ‘compass‘ are run.

Finally at the bottom of the script file, add the following along with the other tasks:
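For example:

```javascript
grunt.loadNpmTasks('grunt-contrib-watch');
grunt.registerTask('default', ['concat', 'uglify', 'jshint', 'compass', 'watch']);
```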

Run the grunt command to start grunt listening for changes. This should look something like the following:

Grunt file watching files

Putting it all together

The final Gruntfile.js file should look a little something like this:
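A hedged reconstruction, assembled from the snippets in this post:

```javascript
module.exports = function (grunt) {

    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        concat: {
            options: {
                separator: '\r\n'
            },
            dist: {
                src: ['assets/js/modules/module1.js', 'assets/js/modules/modules.js'],
                dest: 'assets/js/main.js'
            }
        },
        uglify: {
            options: {
                banner: '/*! <%= %> <%= grunt.template.today("dd-mm-yyyy") %> */\n'
            },
            dist: {
                files: {
                    'assets/js/main.min.js': ['<%= concat.dist.dest %>']
                }
            }
        },
        jshint: {
            files: ['Gruntfile.js', 'assets/js/*.js', 'assets/js/modules/*.js']
        },
        compass: {
            dist: {
                options: {
                    sassDir: 'assets/scss',
                    cssDir: 'assets/css',
                    outputStyle: 'compressed'
                }
            }
        },
        watch: {
            files: ['<%= jshint.files %>', 'assets/scss/*.scss'],
            tasks: ['concat', 'uglify', 'jshint', 'compass']
        }
    });

    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-contrib-compass');
    grunt.loadNpmTasks('grunt-contrib-watch');

    grunt.registerTask('default', ['concat', 'uglify', 'jshint', 'compass', 'watch']);

};
```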

A hackathon in Barnsley


On Saturday 6th July 2013 we held our (Make Do's) first ever hackathon at the Barnsley DMC, to great acclaim. Incidentally, it was the first ever event of its kind to be hosted in Barnsley.

The ‘Layershift’ hack day brought to the borough a mixture of entrepreneurs, technology enthusiasts, designers, developers, geeks, professionals and novices alike, for a 12 hour session of idea generating, designing and coding.

To help with the day experts were on hand offering leadership and skills to the various teams that were formed on the day.

A huge variety of projects were worked on, from an app that lets you order milk from your iPhone, to a website that converts tweets to the Yorkshire dialect.

The idea board
Photo credit: Shuan Bellis

Here are some of the highlights:

Got milk?

Developed by James Sheriff, Richard Keys (Genius Division), Lee Redhead (Lounge Hopper), Matt Brailsford and Lucy Brailsford (The Outfield), Got Milk? was an app that let you order groceries from local providers, such as your local milkman.

Team ‘Got Milk?’ hard at work
Photo credit: Shuan Bellis

The app let you choose items to be delivered to your door the very next day (along with your milk), giving people an alternative to the hustle and bustle of large supermarket chains.

More about the Got Milk app can be read in the following blog posts:

Cheer the duck up

Developed by Craig Burgess (Genius Division), Lawrence Goldstien, Tim Harbour and Edward Bennigsen. This was my personal favourite app, and the concept is simple: an algorithm mines Twitter for tweets that are negative or depressing, and you are given the option to cheer up the person that sent the tweet with a happy tweet, or a picture of a happy duck!

The more people you cheer up, the happier the duck on screen becomes, until you have cheered the duck up!

One happy duck!
Photo credit: Shuan Bellis

Although it sounds a bit silly, the algorithm has potential for fantastic real world applications, from finding people who are depressed or suicidal, to finding negative tweets about a particular company.

More about Cheer the Duck Up is available in this Genius Division blog post: Make Do Hack Day

Tyke translate

Many of the apps and ideas developed on the day got to the prototyping stage (and I promise I will update this post with links to them as and when they go live), but this app, created by John Crowley, Russ Brown, Andrew Deniszczyc and Maksims Mironovs, is actually live and working.

The Tyke Translate app is available in the wild: simply send a tweet with the hashtag ‘#tykeme’ and watch as your tweet is translated into the Yorkshire dialect.

Our winners

We gave out several awards on the day to the people who we thought made an outstanding contribution. These were:

  • Best overall hack – Team Got Milk?
  • Best Design – Jaz Kaur (for designing all the things)
  • Best Idea – Tracy Johnson (Ten Minute Tourist)
  • Best N00b – Team BOB (Katie Baugh, for developing a Ten Minute Tourist prototype in a technology that she had previously not used)

Thank you

Thank you to all of our sponsors, our leaders and of course thank you to all of our Hack Day attendees, we wouldn’t have had a day without you.

Final word

If you attended the hack day, and you have blogged, have got your app online, or you simply would like a mention, please drop us a line, or contribute in the comments below.

Stop wrapping websites


Mobile apps are very popular right now, and some companies want to have their own app in the store just for the sake of having their own app in the store. This, in my opinion, is wrong.

The reason it is wrong is that many companies that demand mobile apps for the sake of it end up with what is essentially their entire HTML5 responsive website in a mobile wrapper. You know wrappers; PhoneGap by Adobe is one of them (not that there is anything wrong with PhoneGap, when used correctly).

Wrapping a standard HTML5 website with a mobile wrapper gives absolutely no benefit to customers, other than them being able to find it in an app store. However, it throws up many issues, such as:

  • Links to third party sites open up in the same app (not the mobile web browser as they should, causing a lot of customer confusion)
  • Information is not task specific (customers are bombarded with all of your company information)
  • In some cases, an app will be slow and sluggish when compared to a native app (a reason for this is that the entire HTML interface has to be downloaded instead of just the content)
  • The data in a website is usually managed by a CMS, meaning that it cannot be interrogated in as much detail as a native app that solely pulls down the information it needs (not without major consideration as to how the information is architected within the CMS, anyway)

So my plea is, if you are going to offer your customers something mobile, you should offer them something native and offer them something useful. Also consider the justifications for the app, and the benefits it can provide.

Take the Barnsley Council website for instance (note: I am linking to the version that I am currently working on). It has a working mobile view, which you can use to browse the entire catalogue of services that Barnsley Council has to offer. But to deliver this entire website as a standalone app would not make any sense, and would prove to be a very bad user experience for our customers.

I do, however, think apps have a place in local government, and apps that could be developed for Barnsley Council are ones that meet the following criteria:

  • The app should perform tasks beyond that of the website
  • The app should be able to take advantage of mobile functionality (eg GPS, Camera)
  • The app should be specific to a particular audience

Using the previously listed principles, example applications that I can think of which would be useful for a local government to provide, and which meet the criteria are as follows:

  • Report a pot-hole, or a street-light, or an abandoned vehicle, or anything really… (use GPS location data and photos to enable customers to highlight the problem)
  • An app which has information targeted specifically to a particular demographic (eg older people, or people in care), the app would have information and use language specifically aimed at that demographic, and have native functions, such as buttons for quick contact with an appropriate service
  • An app that is a booking facility to see a customer service representative; that would be all the app does: let you book an appointment with any department

I am not saying that apps built in tools such as PhoneGap cannot provide this; in fact they could probably do a pretty good job. I am just re-iterating my point about how you should not wrap a full website with one.

In short, if all you are creating an app for is to get an app in the app store, you are doing it wrong. I think having a section on your website that shows you how to add the site to your mobile’s home screen would be a far better use of resources. The purpose of an app is to deliver information for a specific need, targeted to a specific audience.

One last example, take the Barclays Pingit app for instance. It allows you to make transactions, but doesn’t allow you to apply for a mortgage. That is what their website is for.

Blog inspiration


After reading the amazing ‘Purposeful Paycheques‘ by Cory Miller, I have been inspired to finally get my own blog up and running.

I am a little too busy to give you all an amazing and lengthy blog post about anything interesting right at this minute, but for now, this will be my little outlet on life.

I hope you find my musings entertaining and/or educational. If you do, why not be generous and buy me a beer?