OECD Framework
for the Evaluation of SME and Entrepreneurship
Policies and Programmes
ISBN 978-92-64-04008-3 85 2007 04 1 P
OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes
This Framework provides policy makers with a concrete, explicit, practical and accessible guide to best practice evaluation methods for SME and entrepreneurship policies and programmes, drawing upon examples from a wide range of OECD countries.
It examines the benefits of evaluation and how to address common issues that arise when commissioning and undertaking SME and entrepreneurship evaluations. Key evaluation principles are set out, including the “Six Steps to Heaven” approach, and illustrated with examples of evaluations of national, regional and local programmes that can be explored further by the reader. The publication focuses not only on the evaluation of individual policies and programmes but also on bigger picture peer review evaluations and assessment of the impact on SMEs and entrepreneurship of mainstream programmes that do not have business development as their principal aim.
OECD Framework
for the Evaluation of SME
and Entrepreneurship
Policies and Programmes
ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT
The OECD is a unique forum where the governments of 30 democracies work together to address the economic, social and environmental challenges of globalisation.
The OECD is also at the forefront of efforts to understand and to help governments respond to new developments and concerns, such as corporate governance, the information economy and the challenges of an ageing population. The Organisation provides a setting where governments can compare policy experiences, seek answers to common problems, identify good practice and work to co-ordinate domestic and international policies.
The OECD member countries are: Australia, Austria, Belgium, Canada, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Japan, Korea, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, Turkey, the United Kingdom and the United States. The Commission of the European Communities takes part in the work of the OECD.
OECD Publishing disseminates widely the results of the Organisation’s statistics gathering and research on economic, social and environmental issues, as well as the conventions, guidelines and standards agreed by its members.
Also available in French under the title:
Cadre de l’OCDE sur l’évaluation des politiques et des programmes à l’égard des PME et de l’entrepreneuriat
Corrigenda to OECD publications may be found on line at: www.oecd.org/publishing/corrigenda.
© OECD 2007
No reproduction, copy, transmission or translation of this publication may be made without written permission.
Applications should be sent to OECD Publishing [email protected] or by fax 33 1 45 24 99 30. Permission to photocopy a portion of this work should be addressed to the Centre français d’exploitation du droit de copie (CFC), 20, rue des Grands-Augustins, 75006 Paris, France, fax 33 1 46 34 67 19, [email protected] or (for US only) to Copyright Clearance Center (CCC), 222 Rosewood Drive, Danvers, MA 01923, USA, fax 1 978 646 8600, [email protected].
This work is published on the responsibility of the Secretary-General of the OECD. The opinions expressed and arguments employed herein do not necessarily reflect the official views of the Organisation or of the governments of its member countries.
Foreword
The OECD Working Party on Small and Medium-sized Enterprises and Entrepreneurship (WPSMEE), in line with a recommendation of the 2004 Istanbul Ministerial Declaration on Fostering the Growth of Innovative and Internationally Competitive SMEs, has prepared this report aimed at strengthening the conceptual framework for SME policy evaluation. This report seeks to be of direct practical assistance to public administrators and politicians concerned with evidence on the effectiveness of SME and entrepreneurship policies and programmes at national and local levels.

The Framework was written by Dr. Jonathan Potter, Principal Administrator, OECD Centre for Entrepreneurship, SMEs and Local Development (CFE), and Prof. David Storey, Warwick Business School, UK, and prepared under the supervision of Mme Marie-Florence Estimé, Deputy Director of the CFE.
A Steering Group, co-chaired by Dr. Roger Wigglesworth, New Zealand, and Mr. George Bramley, United Kingdom, guided the preparation of the Framework. The Co-Chairs, along with the members of the Steering Group, offered many valuable comments during the drafting, revisions and review of the Framework: Mrs. Sue Weston and Ms. Vicki Brown, Australia; Mrs. Laura Morin and Ms. Kaili Levesque, Canada; Ms. Annukka Lehtonen and Mr. Pertti Valtonen, Finland; Mr. Serge Boscher and Mr. Jean-Hugues Pierson, France; Mr. Tamas Lesko and Dr. Ágnes Jánszky, Hungary; Mr. Young-Tae Kim and Dr. Sung Cheon Kang, Korea; and Ms. Ana María Lagares Pérez, Spain.
Sincere appreciation is extended to the Delegates of the OECD WPSMEE for their numerous comments and inputs during the compilation of the Framework.
Thanks also go to Mr. Kevin Williams, Principal Administrator, OECD Council and Executive Committee Secretariat, Mr. Hans Lundgren, Head of Section, Evaluation, Development Co-operation Directorate, and Mrs. Mariarosa Lunati, Administrator, CFE/SME and Entrepreneurship Division, for their drafting suggestions, and to Ms. Brynn Deprey, Mr. Jorge Gálvez Mérdez, Mr. Damian Garnys, and Ms. Elsie Lotthe for their operational support.
Table of Contents
Summary and Route Map. . . 9
Section 1. Evaluation Issues. . . 15
Defining evaluation . . . 16
Why do an evaluation? . . . 17
Typical objections to evaluation and responses . . . 19
Key evaluation debates . . . 22
Doing evaluations. . . 27
Some key principles for evaluation practice . . . 32
Notes . . . 34
Section 2. Evaluation of Individual National Programmes. . . 37
Introduction . . . 38
Evaluations of financial assistance . . . 39
Enterprise culture. . . 42
Advice and assistance . . . 43
Technology . . . 47
Conclusion. . . 48
Section 3. Evaluation of Regional and Local Programmes . . . 53
Introduction . . . 54
Advice, consultancy and financial assistance . . . 55
Clusters and local innovation systems . . . 57
Support to areas of geographical disadvantage . . . 59
Conclusion. . . 64
Section 4. The Role of Peer Review in Evaluation . . . 67
Introduction . . . 68
The peer review methodology . . . 68
OECD national SME reviews . . . 70
OECD regional and local entrepreneurship reviews . . . 71
OECD evaluation guidance . . . 72
Section 5. Reviewing the Aggregate Impact of Public Policies . . . 75
Introduction . . . 76
Impact of mainstream policies on SMEs. . . 77
Capturing the total policy package . . . 91
Conclusion. . . 94
Notes . . . 95
References . . . 97
Appendix A. The OECD Istanbul Position . . . 103
Appendix B. Six Steps to Heaven: Methods for Assessing the Impact of SME Policy . . . 106
Appendix C. Examples of Evaluation Guidance . . . 109
Appendix D. Assessing the Quality of an Evaluation . . . 111
Appendix E. Framework Condition Indicators: Entrepreneurship Conditions in Denmark in 2005 . . . 113
Appendix F. Summary of the Evaluation of State Aid to SMEs in the Member States, European Economic Area and the Candidate Countries . . . 120
List of tables
1.1. Qualitative compared with quantitative evaluation . . . 23
1.2. The choice of internal and external evaluators . . . 25
2.1. SME and entrepreneurship policy areas covered . . . 38
2.2. Loan guarantee scheme, Japan . . . 39
2.3. Loan guarantee scheme, Canada . . . 40
2.4. Assistance to new enterprises started by young people, Italy . . . . 40
2.5. Grant assistance and small firm performance, Ireland . . . 40
2.6. Public subsidies to business angels: EIS and VCT, UK . . . 41
2.7. Public subsidies to business angels: EIS, UK . . . 41
2.8. Assisting young disadvantaged people to start up businesses, UK . . 42
2.9. Graduates into business, UK . . . 43
2.10. Investment readiness, New Zealand . . . 43
2.11. Impact of marketing advice, UK . . . 44
2.12. Impact of business advice, Belgium . . . 44
2.13. Impact of advisory support, Bangladesh . . . 44
2.14. Bank customers receiving business advice, UK . . . 45
2.15. Assistance and advice for mature SMEs, UK . . . 45
2.16. Use and impact of business advice, UK . . . 46
2.17. Evaluating entrepreneurial assistance programs, US . . . 46
2.18. Encouraging partnerships amongst SMEs, Sweden . . . 47
2.19. Technology assistance to small firms, US . . . 48
2.20. The SBIR program, US . . . 48
2.21. The UK SMART programme . . . 49
2.22. Impact of science parks, Greece . . . 49
2.23. Impact of science parks, Sweden . . . 50
2.24. University/SME links, New Zealand . . . 50
2.25. Impact of management training on SMEs, UK . . . 51
2.26. Small firms training loans, UK . . . 51
3.1. Regional/local policy areas covered . . . 55
3.2. Subsidised consulting, Belgium, Wallonia. . . 56
3.3. Business advisory services, UK, South West England . . . 56
3.4. Enhancing the capability of the SME owner through use of consultants, UK, Scotland . . . 57
3.5. Export information and advice, Canada, Quebec . . . 57
3.6. Enterprise partnerships for exporting, Sweden, Örebro . . . 58
3.7. Small business grants, UK, North East England . . . 58
3.8. Regional development agency grants, Ireland, Shannon . . . 59
3.9. Local innovation system policy, EU regions . . . 60
3.10. Business networking, UK, North East England . . . 61
3.11. Enterprise Zone evaluation, US, Indiana . . . 61
3.12. Enterprise Zone evaluation, US, Five States . . . 62
3.13. Enterprise Zone evaluation, UK. . . 62
3.14. Evaluation of enterprise support in disadvantaged areas, UK . . . . 63
3.15. Regional policy evaluation, UK . . . 63
3.16. Regional policy evaluation, Italy . . . 64
3.17. Rural policy evaluation, Canada, Quebec. . . 64
3.18. Rural enterprise support, United Kingdom, Northumberland . . . . 65
5.1. The indicators . . . 83
5.2. Ease of Doing Business ranking. . . 84
5.3. Starting a business in 1999, 2004 and 2006 . . . 85
5.4. Average conversion rates young businesses/nascent entrepreneurs, 2000-2004 . . . 92
5.5. Selecting policy areas . . . 92
B.1. Six Steps to Heaven: Methods for assessing the impact of SME policy . . . 106
D.1. Grid for a synthetic assessment of the quality of evaluation work. . . 112
List of figures
1.1. New Zealand Trade and Enterprise (NZTE) Growth Range Programme Logic Model . . . 31
Summary and Route Map
This Framework document provides a forum for the international exchange of knowledge on best practice evaluation of Small and Medium-sized Enterprise (SME) and entrepreneurship policy. Its target readership is public administrators and policy-makers concerned with the formulation, development and implementation of SME policy, together with professionals concerned with the evaluation of such policies. It seeks to be concrete, explicit, practical and accessible, drawing upon examples from a wide range of OECD countries. Almost all the evaluations documented are publicly available online. It is also intended that the text will assist SME policy makers in non-member countries.

In line with the OECD Istanbul Position, which underlines the need to strengthen the culture of evaluation of SME and entrepreneurship policies (Appendix A), this document has four objectives:
● To increase the awareness of politicians and public officials of the benefits from having an evaluation culture.
● To disseminate examples of good micro evaluation practice at national and sub national levels.
● To highlight key evaluation debates: Who does evaluations? What procedures and methods should be used? When should evaluations be done? What about the dissemination of findings? Should all policies be evaluated in the same way?
● To make a clear distinction between policies that operate at the micro level, i.e. SME and entrepreneurship specific policies, and those that operate at the macro level, i.e. mainstream policies that nonetheless influence SMEs and entrepreneurship.
To achieve this end, the Framework is divided into three main parts. The first deals with evaluations of micro entrepreneurship and SME policies formulated and delivered at the national level. The second deals with entrepreneurship and SME policies delivered at the local/regional level. The third section is rather different. It reviews approaches to establishing the aggregate impact of a range of public policies that strongly influence entrepreneurship and SME performance, yet are rarely the responsibility of
the main department of government responsible for SMEs. Prior to that, the Framework reviews good practice in evaluation more generally.
It should be noted that the Framework does not seek to be a handbook or manual that sets out the steps that need to be taken to complete an evaluation. A substantial body of such handbooks and manuals exist and selected examples are provided in Appendix C. Rather the focus of the Framework is on discussing the difficult issues that arise in evaluating SME and entrepreneurship policies and programmes, particularly with respect to quantitative impact evaluation, and providing examples of evaluation approaches that have been used to address these issues. The Framework should therefore be read in conjunction with, rather than in place of, other evaluation guidance in this field.
This summary provides a route map for the reader. It first addresses why, when, how and by whom evaluation should be done, then sets out the key conclusions from each of the three parts. Finally, it proposes a process of continuous improvement in the evaluation of SME policy.
So, why do evaluation?
● To establish the impact of policies and programmes.
● To make informed decisions about the allocation of funds.
● To show the taxpayer and business community whether the programme is a cost-effective use of public funds.
● To stimulate informed debate.
● To achieve continued improvements in the design and administration of programmes.
When and how should programme evaluation be done?
● Evaluation has to be integral to the policy process. Hence there is merit in undertaking prospective evaluations – as policy options are being formulated; formative evaluations as the policy is in operation; and summative evaluations once a clear policy impact can be judged. The summative evaluation findings have to feed back into current policy making.
● For summative evaluations we favour a dual approach. The first is to establish the impact of established large scale programmes with quantitative, statistical methods employing “control groups”, which score highly on the “Six Steps to Heaven” metric.
● These can be valuably complemented with qualitative approaches such as case studies and peer reviews for more detail on how policy works and how
it may be adjusted. Qualitative approaches are also useful for smaller scale programmes for which the costs of quantitative evaluation may be too high.
And by whom?
● Evaluation undertaken by specialists is essential for reliable impact evaluation.
Sometimes the necessary independence can only be delivered by “outsiders”
but independent evaluation units within government can also perform this role.
The bedrock of good evaluation comprises:
● The programme has to have clearly specified objectives from which it is possible to determine whether or not it succeeded.
● The evaluation has to be set in progress and data collection begun as, or even before, the programme is implemented.
● The evaluation has to be able to lead to policy change.
The evaluation of national programmes
● This section of the Framework provides examples of evaluations that have been undertaken on the following policy areas: Financial Assistance;
Enterprise Culture; Advice and Assistance; Technology; and Management Training.
● It concludes that, whilst there are examples of high quality evaluations, this is not the norm.
● Broadly, lower quality evaluations seem to produce more “favourable”
outcomes for the project because they attribute observed change to the policy when this may not be justified.
The evaluation of local and regional programmes
● At the regional and local level less costly and less sophisticated approaches are often adopted because the programmes are often smaller and because evaluation structures in terms of information bases, professional evaluation capabilities and understanding of evaluation methods by users may be weaker.
● The work of the OECD Local Economic and Employment Development (LEED) Programme with city and regional governments and development agencies has shown that a critical issue for policy development is increasing understanding of the real policy needs of the region or locality and assessing the alternative options for intervention given the specific local context.
Peer review: a tool for evaluation
● Whilst evaluation of programme impacts is still required, broader “peer reviews” are also useful in providing “big picture” assessments of the full range of entrepreneurship and SME policies including in selected regions.
Reviewing the aggregate impact on SMEs and entrepreneurship of public policies
● Although explicit and targeted SME and entrepreneurship policies influence the creation of new firms and the development of SMEs, so also do other government policies which do not have such a focus and which are rarely the responsibility of the main SME department of government. These include interest rate and tax policies, social policies such as the setting of unemployment benefits, regulations governing the cost and time of starting a new business, and immigration and emigration policy.
● These policies represent substantial expenditures in many countries, and our review shows they impact powerfully on entrepreneurship and SME development. However, control, or influence, over that total expenditure is rarely exercised by the department of government responsible for SME policy. Instead, other departments or organisations of government often have considerably larger budgets, but may have different priorities from those of the main SME department.
● The challenge for SME and entrepreneurship policy makers is to identify these macro policies and their links to enterprise. It is then to seek to ensure that they work in a way which is congruent with the objectives of enterprise support.
● Evaluation approaches need to be developed that permit policy makers with SME and entrepreneurship responsibilities to be able to engage more fully in cross-government discussions on priority setting.
Future work
The current document is not to be regarded as the definitive or “final”
statement on how SME policy and its constituent parts should be evaluated.
In our judgement there remain too few examples of top quality evaluations. We also know too little about the impact these evaluations have had upon the formulation of policy, and about the impact policy changes have had upon SMEs and the wider economy, for this to be “the last word”. We therefore propose that this document should evolve over time to reflect the direct interests
of policy makers, SMEs and the taxpayer. A future text could benefit from the following:
● More examples of high quality evaluations that can be shared between countries; and
● More evidence, probably in a case study format, of the links between evaluations undertaken and policy changes. An example here might be the review by OECD of SME policy in Mexico and the changes that subsequently occurred in that country.
In short, what we are able to provide in this current text are some generic approaches to evaluation and some examples of evaluations undertaken, some of which are better than others in terms of their technical merit.
Section 1
Evaluation Issues
Defining evaluation
In their review of policy evaluation in innovation and technology, Papaconstantinou and Polt (1997) provide a very helpful definition of evaluation. They say: “Evaluation refers to a process that seeks to determine as systematically and objectively as possible the relevance, efficiency and effectiveness of an activity in terms of its objectives, including the analysis of the implementation and administrative management of such activity”.
Several words or phrases in this definition merit strong emphasis. The first key-word is “process”. This emphasises that evaluation is not a “once-off”
activity, undertaken once a particular programme has been completed.
Instead it is an integral element of a process of improved policy or service delivery.
A second key phrase in the definition of evaluation is “as systematically and objectively as possible”. Given that evaluation traditionally takes place “at the end of the line”1 there are likely to be strong entrenched interests in place once a programme has been in existence for a number of years. These entrenched interests include the direct beneficiaries of the programme, such as the businesses receiving funds, but also those responsible for initiating and administering these programmes. All else held equal, it is to be expected that all these groups will wish the programme to continue or expand. The task of the evaluator, however, is to “systematically and objectively” assess the merits of the programme. In this task, the evaluator may well come into conflict with those committed to the programme. Only through the use of objective techniques, discussed later in this Framework, can the evaluator demonstrate their independence to those delivering programmes.
The third key phrase in the definition is “the relevance, efficiency and effectiveness of an activity in terms of its objectives”. The implicit assumption in this statement is that the policy has clear objectives and that these are stated in sufficiently clear terms for them to be used by the evaluator. In practice, this is by no means always the case. As will be shown later, a key role for evaluators is often to formalise for the first time the objectives of programmes, often after such programmes have been in operation for many years.
This definition, and the OECD Istanbul paper,2 emphasised that evaluation has an integral role to play in the policy process. Evaluation cannot be left “at the end of the line”. Instead, it has to be a key element of initial
policy formulation. Once the policy is operational, all organisations and individuals responsible for delivery have to be aware that evaluation is to take place. Once the evaluation has been undertaken, and sometimes as it is taking place, it should be used as the basis for dialogue with policy makers, with the objective of delivering better policy. The outcome of the evaluation can then become an input into a debate on the appropriate ways for governments and SMEs to interact.
Why do an evaluation?
Whilst some countries have a long established tradition of undertaking evaluation, others do not. For those seeking to champion a culture of evaluation, the following arguments summarise the case in favour. We then also take the arguments that are often used against evaluation and address them.
To establish the impact of policies and programmes against their objectives
The principal reason for doing evaluation is to establish whether or not policy has contributed to correcting or ameliorating the problem it set out to resolve. This is often thought of in terms of tackling market failures that reduce economic efficiency, such as inadequate availability of finance, skills, advice and technologies, but may also encompass a desire to improve equity among groups of people or places, for example by supporting entrepreneurship among unemployed youth or entrepreneurship in poor localities. Evaluation of these impacts is facilitated by a clear statement of measurable outcomes right at the start of the policy/programme design and the collection of relevant data throughout its life.
To make informed decisions about the allocation of funds
Governments manage a portfolio of policies and programmes each with its own rationale and justification. Evaluation assists managers to assess the relative effectiveness of these policies and programmes and to make judgements about where to place their efforts in order to obtain the greatest benefits for given costs. Evaluation evidence can help to identify where government can make the biggest difference to its objectives and targets.
To show the tax payer and business community whether the programme is a cost effective use of public funds
The scale of tax-payers’ funding for entrepreneurship and small business policies clearly varies from one country to another. It also varies according to precisely what is incorporated into the definition. Nevertheless the amounts
are usually substantial. For example, EIM (2004) reports that approximately six billion euros was spent annually by EU Member States on state aid to small and medium-sized enterprises.3 However, even this may be a considerable underestimate. One EU Member State – the United Kingdom – in a comprehensive review of tax-payers’ funding directed towards SMEs, reported that GBP 2.5 billion of public money was spent on direct support to SMEs in England alone (PACEC, 2005, quoted by National Audit Office, 2006). A third example is a programme in the United States – the Small Business Innovation Research Program. Cooper (2003) reports that this programme made annual awards of USD 1.1 billion in the 1997-1999 calendar years.4
These examples illustrate that, probably for most developed countries, public funding of SMEs is substantial, even if it is extremely difficult to quantify in aggregate and may still be relatively modest in terms of tax-payers’
support to large enterprises. Given these substantial sums of public money, it is reasonable for tax-payers to be reassured that their funding is being spent in an appropriate manner. It is reasonable for tax-payers to demand evidence that public programmes are spending funds in accordance with their stated objectives. This role is normally played by public auditors. A second role, but one not normally played by auditors, is to assess whether the public funds are achieving the objectives set out by politicians. This is the function of evaluators.
To stimulate democratic debate
In democracies, it is reasonable for the electorate to question the decisions made by governments. In order to facilitate that debate, it is appropriate for organisations to be able to have access to evidence on the impact of policies. In this regard, SME and entrepreneurship policies are no different from other areas of government expenditure. For this reason, the results of evaluations enhance and inform public debate.
This debate only takes place when the results of evaluations enter the public domain. This emphasises not only the importance of undertaking evaluations, but also of their findings being disseminated.
To achieve continued improvement in the design and administration of programmes
Politicians and public servants administering SME and entrepreneurship programmes should be seeking continuous improvements and there is of course a need to ensure adaptation to changing conditions. Evaluation is a key tool for learning about how well policies and programmes are delivering, what problems may be emerging, what elements work well and less well and what could be done better in the future. For example, policy makers may seek to
deliver policies to different groups, for example by directing more resources towards enterprises established by the socially disadvantaged or by those likely to employ others, or those in high technology. They may seek to deliver policies using different organisational forms, to stimulate the take-up of policies or to deliver them in a more cost effective manner. All these changes of focus can emerge from undertaking appropriate evaluations. Alternatively, existing policies can be delivered more effectively as a result of accumulated evaluation experience.
Typical objections to evaluation and responses
The discussion above focussed on the positive aspects of evaluation.
However, one of the barriers to spreading an evaluation practice is a resistance to evaluation amongst a range of politicians, policy makers and practitioners.
Here we discuss some of the most common objections to evaluation and the degree to which they stand up to critical assessment. Our judgement is that although the objections have some weight, on balance, they do not amount to a solid case for rejecting evaluation and hence sacrificing the benefits cited above.
But evaluation is expensive and bureaucratic
Evaluation is not costless. Costs include the payment of consultants/
evaluators, the collection of data and the time taken from those delivering programmes to inform the evaluation. The United Kingdom statistical office, for example, requires the time spent by programme recipients in providing their opinions and information about the programme to be costed (i.e. the cost of respondents’ time must be explicitly included in the cost of the evaluation). Data may also have to be collected from both clients of the programme and a “control group” of non-clients.
However, the resources committed to evaluation are normally very modest in comparison with the total size of the programme. For example, the review by Sheikh and Steiber (2002), “Evaluating Actions and Measures Promoting Female Entrepreneurship”, identified an appropriate evaluation budget of between 2% and 5% of programme expenditure. This may be appropriate for small programmes, but for programmes in larger countries a figure of between 0.5% and 1% of annual expenditure would be more usual.
Given the opportunity which evaluation provides for using resources more efficiently, and for the design of new programmes, these seem to be very modest costs indeed.
But evaluation does not always lead to policy improvements
Evaluations of programmes can fail to lead to policy change for several reasons. It may be because those responsible for programme management are hostile to the concept of evaluation. It can also happen where evaluators fail to engage programme managers, or where they fail to understand the details of the programme. Evaluators themselves may fail to express their findings in a language that is easily understandable to policy makers and those responsible for policy delivery.
Although there are instances where evaluations have not led to improvement, this is not a sufficient justification for being reluctant to undertake any form of evaluation. To minimise the potential problems, programme managers have to be persuaded that the quality of programme delivery can be enhanced through evaluations and the consultants have to
“reach out” to programme managers to engage them wherever possible.
But ultimately evaluation takes place for the benefit of the tax-payer, and not for the providers of the programme. Those programmes that are shown to be demonstrably ineffective have to be closed, and this has to be recognised by programme managers.
In practice, if evaluation is to lead to change, a balance must be struck between, on the one hand, ensuring the independence of the evaluator whilst, on the other, engaging support of those involved with programme delivery.
And risks diverting attention away from programme delivery
It is the case that there are cultural differences between evaluators and deliverers of programmes. The former are often analytical individuals, often with an academic background, whereas the latter consider themselves practical individuals focused upon delivering services to their clients. Because they are so close to their clients they view themselves as the best judge of the effectiveness of the programme. They have difficulty seeing what value a
“detached” consultant can provide in terms of programme improvement. For this reason, programme deliverers often resent the time taken in completing forms and collecting data which are, however, vital to the success of an evaluation. Programme managers and deliverers understandably can also feel threatened by an evaluation, especially when they know they do not fully understand the techniques used by the evaluators, but fear the evaluators do not fully understand the programme.
For an evaluation to be a success however, these cultural differences have to be managed. The most effective way of achieving this, as identified above, is to demonstrate that the interests of both the evaluators and the programme managers/deliverers can be more closely aligned by both parties focussing on areas for programme improvement. This can be most effectively achieved by
engaging those delivering the policy, through ensuring the issues of concern to them are addressed in the evaluation and by giving them adequate opportunity to comment upon, and offer their interpretation of, provisional findings.

It is, of course, a simplification to imply that the programme managers most hostile to evaluation are those fearing negative feedback. Nevertheless, senior policy-makers need to be aware that evaluation, whilst it is in the taxpayer’s interest, may provoke considerable hostility from programme deliverers. The latter have to be engaged, but theirs cannot be the ultimate voice.
But evaluation is only for advanced countries
It is the case that programme evaluation is more frequently undertaken in advanced, rather than in developing economies. In part this may be because it is more difficult to find sufficient numbers of individuals with the type of analytical skills necessary to conduct good quality evaluations in developing economies. Major donor organisations, such as the World Bank, can therefore play a role in both undertaking evaluations themselves and in training others to perform these tasks.
Nevertheless, it is not only the most developed countries that undertake evaluation. In its review of state aid to SMEs, EIM (2004) surveyed EU Member States, European Economic Area countries and candidate countries: a total of 29 countries. Only Ireland, the Netherlands and Slovakia performed state aid evaluations on all schemes, implying that evaluation is not simply characteristic of the wealthier countries. EIM specifically noted that the State Aid Act obliges the Slovak Government to evaluate all state aid using statistical analysis of aid recipients and control groups, and that the analyses are performed at both macro and micro levels. Full details of this important survey are provided in Appendix F.
Mexico has also recently committed to undertaking SME evaluation. It believes this will “improve support systems” and identify areas of opportunity, thus granting certainty to the population on the efficient use of resources.
These examples illustrate that it is not necessarily the most economically developed countries which are committed to undertaking evaluation.
But there is no history of undertaking evaluation
In countries without a tradition of evaluation it can be difficult to make this transition. Nevertheless, it is clear that the electorates in many countries are becoming more sophisticated, in part because of access to the media and the internet. Countries where evaluations do not take place are likely, in the future, to be asked why it is that such policy assessments take place elsewhere. The, perhaps unjustified, inference is that evaluations do not take place because there is something to hide. It is not sufficient to imply that
policies are being delivered efficiently because there is no information to the contrary.
Key evaluation debates
This section reviews four key evaluation debates. The first is the appropriate technique for evaluating SME and entrepreneurship policies. The second is the appropriate level of sophistication of the quantitative evaluation approaches. The third is whether evaluation should be undertaken by
“insiders or outsiders”. The fourth is whether the same evaluation techniques should be used for all programmes.
The choice of technique
There are two basic options in undertaking summative evaluations5 – the quantitative and qualitative approaches. Quantitative evaluation involves assessment of the impact of programmes through a comparison of outcomes between the group in receipt of aid and some form of “control group”, for example a similar group of enterprises that have not benefited from policy or the same enterprises before and after receipt of policy support. Such data may be collected either directly from the firms themselves or from official data.
Qualitative evaluation approaches are much more likely to rely upon the opinions of programme stakeholders, including managers and beneficiaries, about the functioning and impact of the programme, gathered through techniques including surveys, case studies and peer reviews. Both approaches will rely upon a careful scrutiny of programme documentation. A simple sketch of the quantitative logic is given below.
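To make the quantitative option concrete, the following is a minimal sketch of the simplest control-group design described above: comparing the change in outcomes for assisted firms with the change for a control group over the same period (a difference-in-differences estimate). All firm records and figures are invented for illustration.

```python
# Minimal difference-in-differences sketch with invented turnover data.
# The policy effect is the growth of assisted firms minus the growth of
# control firms, so background trends affecting both groups net out.

assisted = [  # (turnover before support, turnover after support)
    (200_000, 260_000),
    (150_000, 195_000),
    (300_000, 366_000),
]
control = [  # comparable non-assisted firms over the same period
    (210_000, 231_000),
    (140_000, 154_000),
    (310_000, 341_000),
]

def mean_growth(firms):
    """Average proportional change in turnover across a group of firms."""
    return sum((after - before) / before for before, after in firms) / len(firms)

effect = mean_growth(assisted) - mean_growth(control)
print(f"Assisted firms' growth: {mean_growth(assisted):.1%}")  # 27.3%
print(f"Control firms' growth:  {mean_growth(control):.1%}")   # 10.0%
print(f"Estimated policy effect on growth: {effect:.1%}")      # 17.3%
```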
Table 1.1 reviews the advantages and disadvantages of the quantitative and qualitative approaches.
The principal advantage of qualitative evaluation is the additional information that it can provide beyond that associated with quantitative evaluations. Qualitative evaluation normally involves face-to-face discussions with those in receipt of aid, those responsible for delivering programmes and other stakeholders. These conversations help not only to obtain information from stakeholders that can lead to a deeper understanding of the mechanisms by which policy impact is achieved and how policy might be adjusted but also to engage stakeholders in policy learning processes. The approach can also pick up a wide range of other information of interest to policy makers, going beyond impact to issues such as client satisfaction, policy appropriateness, sustainability and conflict with other policies.
However, qualitative evaluation has the major disadvantage that it is not good at providing reliable estimates of policy impact for a number of reasons.
First, surveys of a sample of stakeholders run the risk of being unrepresentative of programme participants; increasing the numbers, however, either adds considerably to budgets or reduces the quality or depth of the interviews. Second, despite the best efforts of interviewers, there remains a strong risk of interviewer bias. Third, the outcome of qualitative evaluation is more often to describe a process rather than to evaluate an outcome. Fourth, there is no opportunity for independent verification. Finally, programme participants may be asked questions that are virtually impossible to answer. The classic example is “What impact do you think this programme had on your business?” Implicitly the respondent is required to hold every other influence on their business constant and estimate how a programme which probably took place some years previously has influenced their business in the intervening period. Even if some programme participants were able to undertake such mental gymnastics, others clearly are not, and there is no way of distinguishing between the answers of the two groups.
The principal disadvantages of the quantitative approach concern its technical difficulties and the relatively narrow nature of the results it offers, which focus primarily on issues of effectiveness and efficiency. In terms of the technical issues, effective quantitative evaluation requires extensive data collection on the performance of policy-targeted and control group firms.
More importantly, however, in SME and entrepreneurship policy evaluation situations, there may sometimes be no natural, uncontaminated control group. Whilst good quantitative analysis seeks to match as closely as
possible policy-influenced and non-policy-influenced firms and seeks to account for possible selection bias between the two groups, there are always some differences between the “treatment” and “control” groups that cannot be taken into account. To address this, evaluators in several OECD countries have collaborated with their own statistical agencies to derive samples of SMEs with prescribed characteristics so as to act as a “control group”. Even so, some evaluators may be tempted to give a false impression of precision in reporting their results. In terms of the nature of the results, the main drawback is the problem of the “black box”, i.e. that little information is provided on the nature of the policy problem and how it is addressed by policy, and hence on how policy might be adjusted to increase impact. This can be reflected in an unduly narrow focus of quantitative approaches on two evaluation criteria, namely efficiency (impacts against expenditure) and effectiveness (impacts against targets), that can leave other evaluation questions unanswered.

Table 1.1. Qualitative compared with quantitative evaluation

Qualitative evaluation methodologies
Advantages:
● Engages participants in policy learning
● Can vary the scale and hence cost
● Deeper understanding of processes leading to impacts
● Should be easy to interpret
● Can assess against a wide range of evaluation criteria
● Picks up unintended consequences
● Better understanding of policy options and alternatives
Disadvantages:
● Respondents and interviewers may be biased or poorly informed
● Rarely provides a clear answer
● Tends to “describe” rather than “evaluate”
● Risks including “unrepresentative” groups
● No opportunity for independent verification
● Hard to judge efficiency and effectiveness
● Hard to establish cause and effect

Quantitative evaluation methodologies
Advantages:
● Clear answers on impact
● If well done, will get close to true impact
● Can be independently verified
Disadvantages:
● Cost of data collection and technical demands
● Lacks information on context and mechanisms behind policy impacts
● Absence of pure control groups
● Possible false impression of precision
● Narrow focus on effectiveness and efficiency
● Difficult to use on indirect interventions that seek to influence the business environment
On the other hand, the fundamental advantage of quantitative evaluation is that it should provide clear answers. If it is well done it will get as close as possible to a value-free assessment of impact. Of course, no evaluation is wholly value-free.
Given the advantages and disadvantages of both approaches, this Framework argues for the use of a plurality of approaches that are able to gain from the complementarities in the information they can provide. The role of the qualitative approach to evaluation is recognised and the role of survey, case study and peer review approaches is outlined in this respect.
However, the Framework focuses in particular on setting out the issues involved in undertaking good quantitative evaluations, reflecting the original concern of the OECD Working Party on SMEs and Entrepreneurship (WPSMEE) to share information on best practices in impact evaluation. This reflects both the perception that quantitative impact evaluations are not sufficiently used in SME and entrepreneurship policy evaluation and the presence of some difficult issues that are not sufficiently well understood by policy makers, particularly in accurately establishing the counterfactual.
Assessing quantitative evaluations: The “Six Steps to Heaven”
A useful guide to developing robust quantitative evaluations, and to assessing the quality of such evaluation evidence, is the so-called “Six Steps to Heaven” approach (Storey, 2000), reviewed and operationalised recently by Lenihan et al. (2007), Bonner and McGuiness (2007), and Ramsey and Bond (2007).
The Six Steps methodology is a categorisation in which Step 1 is the least, and Step 6 the most, sophisticated approach. The six steps are:
● Step 1. Take up of schemes.
● Step 2. Recipients’ opinions.
● Step 3. Recipients’ views of the difference made by the assistance.
The above three steps tend to be associated with qualitative approaches, but the following three steps typify quantitative evaluations:
● Step 4. Comparison of the performance of assisted firms with typical firms.
● Step 5. Comparison with matched firms.
● Step 6. Taking account of selection bias.
This approach is mainly relevant to quantitative and ex post evaluations rather than to qualitative and ex ante evaluation. It is nonetheless a very helpful framework for assessing the former type of evaluation, and it is referred to a number of times in this Framework, notably in indicating where the evaluation examples provided later in the report stand in relation to the different levels of sophistication in establishing impact.
Fuller details of the approach are to be found in Appendix B, which is taken from OECD (2004a).
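To illustrate what the higher steps involve, the sketch below pairs each assisted firm with its most similar non-assisted firm on observable characteristics before computing impact, in the spirit of Step 5. The data, the crude distance measure and the choice of matching variables are all invented; a Step 6 evaluation would additionally correct for selection bias, for example through propensity scores or selection models.

```python
# Step 5 sketch: nearest-neighbour matching on observables (invented data).

assisted = [
    # (employees, firm age in years, employment growth over the period)
    (12, 5, 0.25),
    (40, 9, 0.15),
    (7, 3, 0.30),
]
non_assisted = [
    (11, 6, 0.10),
    (38, 10, 0.08),
    (8, 2, 0.18),
    (100, 20, 0.02),
]

def distance(a, b):
    """Crude similarity on size and age; a real study would scale carefully."""
    return abs(a[0] - b[0]) / 10 + abs(a[1] - b[1])

gaps = []
for firm in assisted:
    match = min(non_assisted, key=lambda c: distance(firm, c))
    gaps.append(firm[2] - match[2])  # growth gap versus the closest peer

print(f"Average growth advantage of assisted firms: {sum(gaps) / len(gaps):.1%}")
# Prints 11.3% here -- but if faster-growing firms self-select into the
# programme, part of this gap is selection bias, which Step 6 addresses.
```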
Evaluation by insiders or outsiders?
A second key evaluation debate is who should undertake the evaluation – should it be insiders or outsiders? The arguments for and against are set out in Table 1.2 below.
The key argument in favour of using external evaluators is that they are not only less likely to be influenced by the political regime, but they are also more likely to be seen, by others, to be independent.6 This independence is likely to provide more objectivity to the evaluation. A second argument for the
use of external evaluators is that they can bring new ideas and fresh approaches not only to the evaluation but also to subsequent policy development.

Table 1.2. The choice of internal and external evaluators

External evaluator
Advantages:
● Less likely to be influenced by the political regime
● Seen by others to be independent
● Brings new ideas and fresh approaches
Disadvantages:
● Less well informed of the “real” situation
● Less able to drive through change as a result of the recommendations

Internal evaluator
Advantages:
● More insights through “understanding the realities on the ground”
● More chance of “buy-in” from those delivering the programme
● More chance of really changing policy
Disadvantages:
● Lack of independence
● Less likely to be able to “think outside the box”
In contrast, the key advantage of using internal evaluators is that they frequently have a much better knowledge both of the policy itself and of the political context in which it is undertaken. Internal evaluators therefore have to spend less time in acquainting themselves with the detailed workings of policy and can focus much more upon producing targeted recommendations.
Internal evaluators are also more likely to engage the support of the managers delivering the programmes, because of their greater knowledge and because they are perceived to be less threatening. Finally, internal evaluators are more likely to be careful about their policy recommendations, since they may have to live with, and possibly implement, any changes they recommend. Unlike external evaluators, they cannot simply “walk away from the issue”.
The OECD (2004a) recognised that the choice of internal or external evaluators was a close call. Much might depend upon whether, in commissioning the evaluation, the purpose was to undertake a “root and branch” approach in which case external evaluators might be preferred. In contrast, evaluation designed to ensure programmes were “on track” might favour the use of internal evaluators.
Ultimately, therefore, there is a broad choice between selecting evaluators who are more independent but with perhaps less policy insight and evaluators who may be less radical in their recommendations but who perhaps are more likely to induce changes in programmes. The Istanbul Ministerial Declaration, however, made it clear that it favoured “independent but informed evaluators.”
It is also possible to develop alternative models that are neither fully internal nor fully external. For example, some government departments and agencies create independent evaluation units that are not directly attached or responsible to the particular units responsible for the programmes that they evaluate. Another option is to create teams of evaluators, with some coming from inside and some from outside the organisation. This latter approach is typical of the peer review method described in Section 4.
Should the same evaluation techniques be used for evaluating all programmes?
A third debate is whether the same approaches should be used to review all programmes. The central argument favouring a uniform approach to evaluation is that, if the tax-payer is to obtain value for money from SME and entrepreneurship policies, all programmes should have the same effectiveness at the margin. In simple terms, it should not be possible to transfer funds from one programme to another and increase the benefits to SMEs and/or the wider economy. So, the economic impact of policies to reduce taxes for SMEs should have the same marginal benefit as policies to provide export advice or management training or access to finance. A simple numerical illustration of this principle follows below.
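The following sketch, with invented figures, shows the arithmetic behind this principle: funds should flow towards the programme with the lower cost per unit of net benefit, but only if the underlying impact estimates are comparable.

```python
# Equal effectiveness at the margin (invented figures for illustration).

programmes = {
    "export advice":       {"budget": 4_000_000, "net_jobs": 500},
    "management training": {"budget": 6_000_000, "net_jobs": 400},
}

for name, p in programmes.items():
    cost_per_job = p["budget"] / p["net_jobs"]
    print(f"{name}: cost per net additional job = {cost_per_job:,.0f}")
# 8 000 versus 15 000 suggests reallocating funds at the margin towards
# export advice -- a conclusion that is only safe if both estimates come
# from comparably rigorous evaluations.
```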
There is clear evidence from work on SME evaluation that the methods used for evaluation appear to influence the apparent effectiveness of programmes and policies. Expressed baldly, the less sophisticated the evaluation, the more likely it is to apparently demonstrate benefits. This reflects the failure of simpler evaluations to hold constant the myriad influences on outcomes and, by implication, their attribution of observed change to the programme. In contrast, the more sophisticated approaches strip out the other influences, and so attribute to the programme only its “real” effects.
This finding has major implications because it means that it is not valid to compare the findings from a study which uses a Step 2 or Step 3 approach with that which uses a Step 5 or Step 6 approach. Indeed it may even be invalid to compare findings between Steps 5 and 6. Hence it means that only by using a uniform methodology can governments really ensure that entrepreneurship and SME policy is efficiently delivered.
The opposing argument is that programmes vary considerably in scale and budget, and that if a fixed proportion of programme funds is allocated to evaluation then evaluation budgets will inevitably also vary. More sophisticated evaluations are, of course, generally both more expensive and subject to higher fixed costs than less sophisticated approaches.
This means that if the same approach were used across all programmes then small programmes would have to devote a much higher proportion of their funds to evaluation than is the case for larger programmes. This is unrealistic.
Both arguments, of course, have validity, but some form of compromise is possible. If the desirability of uniform evaluation procedures is accepted, then it still may be possible for individual smaller programmes to be evaluated less frequently, or possibly as part of an evaluation of a package of small programmes.
What is clear is that programmes with small budgets should not either escape from all evaluation or be assessed by radically different – and by implication less challenging – procedures.
Doing evaluations
This section examines the practical issues of how to prepare, manage and disseminate evaluations. Further useful information on preparing, managing and disseminating evaluations is provided in the evaluation guidance documents referred to in Appendix C, including the web resources of the OECD Development Assistance Committee (DAC) Evaluation Resource Centre.
Preparing an evaluation
A number of key issues have to be addressed when preparing an evaluation: the first is to identify precisely what it is that will be evaluated.
This can be a problem when one item in a “package” of assistance is being assessed. For example, some SME programmes combine both financial assistance and business advice. It is therefore important, at the outset, to decide whether the whole programme is to be evaluated or whether the component parts are to be evaluated separately. The advantage of examining the whole programme is that an overall assessment of the use of public funds can be undertaken. But separately examining the component parts – the finance and the advice – may show that it is only one or the other that is really effective. For example, the evaluation might show that the impact on firm performance is primarily due to access to finance. In that case, resources might be moved away from the advice towards activities that improve access to finance.
A second question is when the evaluation should be conducted. Here again there is no simple answer, because some forms of assistance take longer to impact on firm performance than others. For example, a programme designed to network firms with one another at a trade fair might be expected to have an impact in terms of additional sales within 3-6 months. In contrast, a programme to provide management training for SME owners might not be expected to have significant impacts for at least 2-3 years. Finally, the impacts of entrepreneurship policies – such as those designed to influence the attitudes of school children towards enterprise creation – might not be observable for at least 20 years. Given these varying likely outcomes, the period between implementation of the policy and the evaluation is also likely to vary. However, a broad rule of thumb for SME policy initiatives such as loans and grants is to plan for the evaluation as soon as the policy is introduced and to begin the formal evaluation within 2-3 years.
The objectives of the programme have to be clearly specified…
Unless programmes have objectives which are in principle capable of measurement, a quantitative evaluation cannot be undertaken. Very often these objectives are set out in a logic model that provides policy makers and evaluators with a clear understanding of possible programme outcomes and how they are likely to be achieved. This is important for evaluation because it provides a guide to what should be assessed and measured. Logic models can take many forms, all of which are valid as long as they clearly express what policy is seeking to achieve. An example from New Zealand is set out in Figure 1.1. The University of Wisconsin online reference guide on Enhancing Programme Performance with Logic Models provides further useful information for the development of these tools.7 The involvement of skilled evaluators at the outset of programme formulation will help ensure that objectives and programme logic are clearly stated. In practice, however, this is not always the case. This means that a key function of the evaluator is often to infer – perhaps even guess – what the objectives of the policy maker were when the programme was designed. Although this might seem curious, in practice this is often a very valuable service performed by the evaluator for policy makers and programme managers.8 A minimal sketch of what a logic model might record is given below.
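As a minimal sketch, and not a representation of the NZTE model in Figure 1.1, a logic model for a hypothetical loan guarantee programme might be recorded as follows; every entry is invented for illustration. The value of writing the model down is that each stage implies something measurable.

```python
# A hypothetical loan guarantee programme expressed as a simple logic model.

logic_model = {
    "inputs":     ["guarantee fund", "delivery staff", "bank partnerships"],
    "activities": ["appraise applications", "issue guarantees to lenders"],
    "outputs":    ["number of guaranteed loans", "value of lending unlocked"],
    "outcomes":   ["survival and growth of assisted firms"],
    "impacts":    ["net additional employment and turnover in the economy"],
}

# Each stage suggests indicators for the evaluator to track from the outset.
for stage, items in logic_model.items():
    print(f"{stage:>10}: {'; '.join(items)}")
```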
Specifying the content of the evaluation
Those responsible for preparing the evaluation have to be clear, particularly when the evaluation is to be undertaken externally, about what it is expected to achieve and what the role of the evaluation manager will be.
The latter may have to assist consultants in clarifying both the objectives of the programme and the current requirements of politicians. However, evaluation managers should not normally specify in detail the methodology to be used, but merely identify the questions to be addressed, such as additionality, deadweight or displacement.9 A sketch of how these concepts combine arithmetically follows below. To specify the methodology in detail would be to exclude the possibility of evaluators employing new or novel approaches. The only clear exception would be where the purpose is to directly compare policy impacts at two points in time. Here there would be merit in using a similar methodology, provided the chosen method was deemed satisfactory.
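As an illustration of how these questions feed into impact estimates, the sketch below applies a simplification commonly used in additionality guidance, with invented figures throughout: deadweight is activity that would have occurred without the programme, and displacement is activity captured from non-assisted firms.

```python
# Net additional impact after deadweight and displacement (invented figures).

gross_jobs = 1_000   # jobs reported by assisted firms
deadweight = 0.40    # share that would have been created anyway
displacement = 0.25  # share displaced from other, non-assisted firms

net_jobs = gross_jobs * (1 - deadweight) * (1 - displacement)
print(f"Net additional jobs: {net_jobs:.0f}")  # 450, not the gross 1 000
```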
Ensuring good data are available
Whatever level of sophistication of evaluation is used, a minimum requirement is that data are available. For example an evaluation of the impact of business advice or of loans or grants requires, as a minimum, a complete and up to date list of clients to be available. Until such information exists no evaluation should even be contemplated.
Managing an evaluation
Six major issues arise in managing an evaluation.
Should the evaluator be internal or external?
This issue was discussed in-depth above and it was concluded that whilst the external evaluator was more likely to be independent from the organisation responsible for designing and delivering policy, and hence more likely to be critical of it, the internal evaluator was likely to have greater knowledge and political awareness. There would be, therefore, circumstances
in which either was appropriate but, all else equal, the “independence” of the evaluator was critical.
The scale of the budget
As noted earlier, the scale of the budget strongly influences the methodology which can be adopted. It also influences the outcome of the evaluation, since inexpensive evaluations tend to produce more “positive” findings. This can produce considerable pressure to undertake only the simplest of evaluations.
Terms of reference
Terms of reference need to be clearly specified, but not in a way which precludes innovative thinking.
Choosing a consultant
As noted earlier, the first key choice is whether the consultant should be external or internal. If the choice is to use consultants from outside the organisation, the next step is to select individuals or organisations with a track record of delivering timely and appropriate evaluations using their chosen methodology. So, for example, if a statistical study were required, it would be inappropriate to choose consultants whose prime track record was in the use of qualitative methods.
Timetable
A timetable for the delivery of the evaluation has to be specified. Very often this coincides with a wider appraisal of policies within the commissioning department. There therefore have to be agreed milestones, in the form of interim and final reports, to ensure that research is on track; a small steering committee may be needed to oversee this.
In practice, however, the more sophisticated evaluations frequently tend to overrun in terms of time. This is because of the difficulty of contacting appropriate numbers of enterprises – often because the lists given to the consultants are incomplete. Hence, a crucial element of ensuring that evaluations are on time is to ensure that the base data are of high quality before the evaluators begin their work.
Quality assessment
An assessment has to be made of the quality of the evaluation. One clear purpose of this Framework is to enable an accurate assessment of the quality of an evaluation to be made. Our overwhelming focus is upon the technical quality of the evaluation, defined as the extent to which policy impact is reliably established (see the grid in Appendix D).
Figure 1.1. New Zealand Trade and Enterprise (NZTE) Growth Range Programme Logic Model

[Figure: the logic model links NZTE services (client management, the Growth Services Fund, market development services and development plans, with clients segmented by growth potential following an initial appraisal and an iterative assessment of financial, capability and market needs) to intermediate outcomes (improved market engagement and market development; improved strategic, management and business capabilities, including the ability to identify and respond to market opportunities; increased capacity to innovate and to create, absorb and commercialise new ideas; improved likelihood of accessing capital for growth) and to the ultimate outcome of accelerated development of firms with high growth potential: increased revenue, profits, salaries and wages, and employment (FTEs). Not all intermediate outcomes apply to all clients; external factors and non-NZTE government business assistance programmes lie beyond the control of NZTE and outside the scope of the evaluation.]

Source: New Zealand Trade and Enterprise.