Direct support to research and development is a central component of Swedish business policy. Depending on what is classified as direct support, such aid amounts to between SEK 2.8 and 11.3 billion per year. Unlike general support such as tax credits, direct support requires the responsible authorities to actively choose, and thus also reject, projects to invest in. A substantial body of research suggests that the effects of direct support depend to a large extent on whether rigorous selection processes have been designed and implemented.
In this report we describe and compare the selection processes within three important direct support schemes: IPF at Vinnova, AFFU at the Swedish Energy Agency, and the part of the European Regional Development Fund managed by the Swedish Agency for Economic and Regional Growth. The purpose of the report is to contribute to an understanding of the choices implementing authorities face when designing and carrying out selection processes. In other words, the purpose is to answer the question: how do you choose how to choose?
Based on a previous literature study and discussions with implementing authorities, we have identified four common challenges that the authorities and programs, despite their different circumstances, must all handle.
Getting the target group, and not others, to apply for support requires an iterative approach; no program succeeds on the first attempt. The ability and freedom to adjust call texts, selection criteria and other aspects of the selection process is central to getting this right, and not all programs have had that freedom. Within these programs, the authorities have developed very different relationships with the target group, ranging from close cooperation to arm's-length distance, as they attempt to strike a balance between encouraging and instructing the target group on the one hand and ensuring equal treatment on the other.
Operationalizing goals and the target group into concrete selection criteria is a central part of the craft of designing selection processes. Deciding what should become a formal requirement and what should become a criterion for the comparative ranking of projects is particularly difficult. We highlight two goals that the authorities have chosen to define and implement in different ways: additionality, or the added value of the support, and sustainability goals that partly run counter to purely economic objectives. There are no given answers as to how such goals should be operationalized, but we can point to different approaches.
Enabling assessment by ensuring sufficient expertise and information is a particular challenge, one which the authorities handle differently. One authority relies on external assessors, another on purely internal expertise, while the third program's selection is carried out in cooperation between different authorities. The choice can largely be explained by the technical breadth and degree of innovation of the projects to be assessed, in combination with purely legal requirements. All the authorities, however, use internal competence for the legal-examination part of the assessment. The assessors' access to information is largely determined by the programs' application templates, and here we observe a balance between precision in assessment and openness to heterogeneity in the applications.
Finally, the fact that selection processes require collaboration between different competencies and perspectives means that there is a risk of group-dynamic and delegation problems. Delegation issues appear to be the more serious of the two, and we observe several problems when the assessment is divided between different organizations. Training and instructions for the assessors are central tools for overcoming these problems.
An important starting point for the project has been that the implementing authorities can learn from each other. Three learning seminars have therefore been held, which have helped pinpoint challenges and potential measures. For program owners designing or updating a program, we offer ten recommendations, based on our cases, on issues and trade-offs worth reflecting on. For most of the recommendations, at least one of the programs and authorities has already come a long way, although the work can often be developed further.
For the agencies, we recommend that the dialogue on selection questions that has taken place through the project's seminars continue and be broadened to include discussion of the state aid rules. In order to determine the cost-effectiveness of different selection processes, and thus facilitate their design, we recommend that the authorities distributing direct support collaborate to find a common way of calculating the resources spent on selection.
The documentation of the selection processes and their outcomes that we have received is uneven in content and partly unsystematic; its focus is on other matters. We recommend that the authorities create clear routines for how the selection process and its results should be documented, especially when the selection process is updated. We also recommend that the authorities retain structured information on the projects that do not receive support, in order to facilitate the construction of control groups and evaluation, and to systematically follow up on whether the assessments made in the selection process develop as expected.
Selection in direct support schemes requires detailed planning of both complicated processes and resource consumption, as well as iterative updating to find the target group. We therefore recommend that new government assignments that include a selection process be preceded by pilot projects in which the selection process can be tested, evaluated and adjusted.
Selecting for innovation: choosing how to choose