How to measure impact – learning from practice
Two and a half years ago we launched Nesta Impact Investments. We boldly stated that we would scale innovative solutions to social issues while measuring impact in a robust way according to our Standards of Evidence framework. We envisioned a world of investments where growth, commercial success and evidence of effect were all positively aligned – something that is not always easy to find in this space.

Today we launch our paper, Impact measurement in impact investing – learning from practice, which discusses how we have built evidence and impact measurement into our investment process and our support for investees. Our portfolio companies are now at a point where they are integrating impact measurement into their day-to-day activities and balancing impact with the commercial aspects of running a company. Our report presents five key lessons that we have learnt from this process:
- The importance of flexibility: In a start-up environment, ventures will naturally develop and innovate as they grow. Impact measurement needs to be flexible enough to allow for innovation and robust enough to measure meaningful results.
- The importance of an impact lead and expert input: Ventures need to invest in a point person to lead on impact, and in partnerships with academic researchers, to make more progress on impact measurement.
- Avoiding mission drift: Impact measurement can ensure that a company considers its mission when deciding which markets and revenue streams to pursue.
- Taking a staged approach: Ventures most often run an evaluation, learn about what works and repeat this process before investing in a higher rigour trial.
- Balancing the commercial and social: Some ventures have seen commercial success before focusing on evaluation, and vice versa. This means we focus our support wherever a company is falling behind, so that in the long run we see a balance between commercial success and impact success.
One of the key things we have learnt as an investor is the importance of flexibility and iteration when it comes to impact measurement. Our investees are all developing new approaches to social issues and will refine their products and services as they learn what works best and for whom. The evaluations they run need to be similarly iterative. For example, Oomph!, a social enterprise running innovative exercise classes in care homes, has trialled a number of pre-post evaluations to learn what works and what kind of results it is achieving. It will then feed this learning into the design of a more rigorous trial when it is ready.
Recent years have seen an incredible level of interest and activity in the impact investment space. If the ambitions of impact investing are to be realised, the sector needs to focus on measuring and articulating its value in a robust way. We hope this paper adds to this conversation and offers a starting point for ventures and funds that are interested in evidence and evaluation.
Eibhlín Ní Ógáin