Prompted by a great presentation from Jon Hugget, The Social Innovation Partnership hosted a lively discussion on Evaluation and Social Innovation this morning, organised in partnership with Impact Hub Westminster and °directional thinking.
Starting from the premise that too often evidence just gets in the way of innovation, Jon posited the view that the focus of social innovation should be on ‘Improving’ what we do, not trying to ‘Prove’ the validity of what we are doing.
There was broad agreement that the desire to demonstrate robust evidence can sometimes be a barrier to responsive and engaged social action and innovation. A range of examples were cited of programmes which have succeeded through a commitment to agile, responsive evaluation and learning while the project is in progress – from driving and primary healthcare to safe sex counselling and early years intervention.
However, all participants acknowledged the value and importance of evidence, and a strong case was made for investment in data-gathering and in understanding impact over the longer term. Indeed, the point was made that, although funders and commissioners often rely too heavily on evidence and data which appear detached from practice, that very detachment can help them to be more adventurous and risk-tolerant than practitioners, who may be too close to the action to take risks and countenance failure.
But there was remarkable agreement on the need for a more flexible approach, with a call for a broader, more diverse approach to evaluation and measuring impact. There was a particular emphasis on the importance of drawing in users and practitioners to be part of a more interactive evaluation process, based on live, ongoing experience rather than just longer-term research. A more radical approach was also suggested – one which might invite users and practitioners (rather than funders and commissioners) to drive the process and shape the measures against which success is determined.
Ultimately, of course, everyone wants to contribute towards social programmes which are ‘proven’ to make things better for the people they’re trying to support, and no one wants to waste resources on programmes which fail. But for TSIP, and for other partners, social innovation will always require a more agile and dynamic approach, in which getting things wrong can actually lead to improvement. Placing evaluation at the heart of that ‘improving’ mindset will ensure that those failures drive future change.
Perhaps characteristically, the group called for a ‘learning’ approach – one in which evaluators share ideas and are open about what works and what doesn’t. As the first of a series of breakfast sessions, this meeting provided the starting point for such a dialogue, with further sessions to come.