
What can we learn from impact studies?

– An examination of some of Vinnova's impact studies

A learning innovation policy requires procedures for evaluation, as well as processes for how evaluation results influence the policy’s direction and content. On behalf of Growth Analysis, Sweco Eurofutures conducted a meta-analysis of a number of impact evaluations of research programmes targeting specific sectors and industries. For this purpose, a selection was made of impact evaluations of sectoral programmes run by Vinnova and its predecessors. The sectoral programmes focused on the manufacturing industry in general, programmes specifically targeting the automotive industry, programmes for medical technology and food aiming to generate health-economic impacts, and programmes stimulating the development of renewable raw materials. The evaluations were carried out during the period 2008 to 2010.

The meta-analysis discusses the results of the evaluations from three perspectives. The first concerns how the term ‘impact’ is treated in the evaluations. The second gives an account of the information and data used in the impact evaluations. The third provides a description of Vinnova’s view of impacts, its work with impact assessments over the years, and the lessons learned from these: What has been learned from the impact assessments? Have they changed over time, and how does Vinnova today ensure access to the right information for carrying out evaluations and follow-ups?

An important starting point in the modern evaluation literature is that an impact ought to be viewed as the result of an intervention compared against a norm identified as relevant in the context. The comparison can be made with alternative interventions, or with the scenario in which no intervention is carried out. An impact achieved by a public sector actor is thus a result that could not have been realised without that actor’s action. This implies the need for a counterfactual methodological approach, which is often associated with methodological challenges and high demands on data quality.
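As an illustrative sketch (using standard potential-outcomes notation, which is not drawn from the report itself), the counterfactual definition of impact can be written as

$\Delta = Y_1 - Y_0$

where $Y_1$ is the outcome observed with the intervention and $Y_0$ is the outcome under the chosen norm, whether an alternative intervention or no intervention at all. Since $Y_0$ can never be observed for the same actors at the same time, it must be estimated, which is precisely what gives rise to the methodological challenges and data demands referred to above.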

The impact evaluations reviewed are comprehensive and ambitious in their approach, but their operationalisation is generally weaker in terms of the empirical methods used and the requirements placed on the trustworthiness and reliability of the information. The system perspective, under which a number of programmes and interventions were analysed together, combined with the long time span between an intervention and the impact one attempts to measure, makes it difficult to establish credible causal relationships between interventions and impacts. The appraisal is that, despite this, many of the impact assessments draw far-reaching conclusions regarding the impacts of the programmes. The impact assessments studied thus tend to look for positive impacts rather than scrutinising neutrally with a view to drawing lessons from the interventions. As presented, the studied programmes are therefore advanced more as “good examples” of results from implemented programmes.

A shortcoming is thus that the norms needed to speak of impacts in strict counterfactual terms are often weakly defined, which has made it difficult to empirically evaluate the direct and indirect impacts. This is partly because the programmes evaluated were very complex and extensive, and ran over a long period of time. These conditions make it more difficult to describe a clear causal chain from interventions to results and impacts, which is not unusual in R&I programmes of this kind. In recent years, Vinnova’s perspective has changed somewhat as a greater awareness of the substance of the term ‘impact’ has developed. At the same time, Vinnova has worked to develop and consolidate the programme theory and intervention logic from the start of new programmes, with more stringent requirements for trackable impact targets that make it easier to follow anticipated impacts at later stages. Vinnova has also worked to improve information management in the programmes by means of a more extensive system for project accounting and follow-up, including various types of surveys sent to the actors involved in a project. These changes aim to facilitate future studies. Based on Vinnova’s own experience of the difficulties and shortcomings found in many previous impact assessments, later impact assessments have therefore focused to a greater extent on more limited programmes, with shorter time spans between a programme’s conclusion and the impact assessment. They also focus more on impacts that the programmes have a real opportunity to influence.

Even if awareness of how impact evaluations should be carried out has grown within government agencies, a broad debate on appropriate approaches, in terms of both methods and what data should be gathered, is needed in order to furnish the Government and its agencies with impact evaluations that support learning. Awareness that establishing impacts requires a counterfactual comparison is still assessed as relatively weak, not only within government ministries and agencies but also among many evaluators. Carrying out impact evaluations based on a consciously counterfactual approach will therefore place higher demands on the design of programme theory, the use of various methods, and the empirical evidence than has previously been the case.


Serial number: PM 2014:01

Reference number: 2012/008

Download Swedish report (PDF, 1.1 MB).

