Measuring Our Impact

There are a number of difficulties in providing a straightforward account of the impact and value of the work we deliver.  However, it feels important for us to continue to try – not least to open up greater understanding of what we do, how and why, and to provoke ongoing exchanges with clients about how results might be improved.

Here’s an example from a recent piece of work with an energy company, which involved our delivering approximately 9 days per month, over 10 months.

The initial objectives we agreed with the MD of the business were as follows:-

  1. to improve the effectiveness of the management team (MT) and the extended management team (EMT);
  2. to provide individual leadership development for EMT members;
  3. to improve MT communications with the rest of the business e.g. via management cascades and engagement events;
  4. to prepare the business for a transfer of ownership e.g. by readying the business for a culture shift, demanding greater focus on financial results and accountability for performance.


Time was short, given the imminent possible sale – and there was pressure to keep costs down, while getting traction quickly.   Such a context ruled out any comprehensive measuring of our ‘start point’ with regard to the four objectives, let alone spending precious time on devising an appropriate methodology and scale for monitoring results with great rigour.

There was another, deeper problem: our model of change consulting rests on our skill in helping clients to ‘own the change’ themselves and to participate fully in making it happen.  So we had to be careful not to construct a measurement model which could imply that success would be due to us alone or, worse, could actually incentivise us to ‘drive’ change in a way that would alienate our clients.  Instead, the measurement model needed to motivate their engagement and ownership of success.

A ‘good enough’ solution?

Our solution was to draft a simple change agenda from the language used by individuals during the confidential, 1:1 interviews that we held with each of the EMT during the initial ‘diagnosis’ phase of our work.  (See the table below.)

We then asked the MD for a slot at the next EMT meeting – not only to get whole-team buy-in to this agenda (via dialogue and collective editing etc.), but also to obtain subjective ‘scores out of 10’ against each of the improvement headings from everyone there.  When averaged out, these gave us benchmark scores which everyone had an evident interest in improving over time.

Periodically, we asked for updated scores.  These not only gave everyone a good sense of how we were all doing at any one point, but also triggered rich conversations about what needed to happen next in order to make further progress.


The table below shows progress over 9 months:-


Effectiveness Improvement Objective | Benchmark score (Month 1) | End score (Month 9)
1. More efficient decision-making – resulting in clear, aligned plan & accountabilities | 3.3 | 7.0
2. People more ready to accept accountability and proactively take the initiative – and also to take a bollocking when it’s deserved | 3.8 | 6.8
3. We’re better at getting things done – feeling more empowered to deliver | 5.0 | 7.3
4. Silo behaviours are addressed; we’re all on the same page | 5.6 | 7.2
5. MT is working together without ‘drag’ e.g. grumbling behind backs. We accept we can improve some things and go for it. | 3.3 | 7.0


The MD was impressed with progress, summarising his experience of our work as follows:

“It is your approach which sets you apart. Iterative, subtle changes ultimately building up momentum to create a surprisingly large delta of performance from individuals and teams…”

We certainly believe that this work demonstrates the value of our ‘minimal intervention’ approach, and illustrates how it encourages clients to own the change, rather than feeling ‘done to’.


However, this experiment still brought some interesting dilemmas into sharper focus for us, and we’d be interested to hear others’ views about these.

For example:-

a) What about the other objectives of the work agreed with the MD (see above)?  By measuring progress against some of these but not others, we risked skewing our work in unhelpful ways…

b) What are the limits to how much responsibility we can and should take for progress?  And how much should we leave with our clients, however agonising?

c) Where the client is not taking sufficient responsibility for progress, how do we strike the right balance between pushing for greater effort on their part, and showing sensitivity towards the capacity limitations of the client – especially where there are multiple other critical agendas requiring attention…?
