Article Information

Authors:
Gemma Paine Cronin1
Mastoera Sadan2

Affiliations:
1Organisation and Strategy Development, Design and Evaluation Services, South Africa

2Programme to Support Pro-Poor Policy Development, National Planning Commission, South Africa

Correspondence to:
Gemma Paine Cronin

Email:
gemma@hixnet.co.za

Postal address:
26 Central Avenue, Pinelands, Cape Town 8000, South Africa

Dates:
Received: 24 Apr. 2015
Accepted: 26 Aug. 2015
Published: 30 Sept. 2015

How to cite this article:
Paine Cronin, G. & Sadan, M., 2015, ‘Use of evidence in policy making in South Africa: An exploratory study of attitudes of senior government officials’, African Evaluation Journal 3(1), Art. #145, 10 pages. http://dx.doi.org/10.4102/aej.v3i1.145

Copyright Notice:
© 2015. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Use of evidence in policy making in South Africa: An exploratory study of attitudes of senior government officials
Abstract

This paper outlines a 2011 study commissioned by the Presidency’s Programme to Support Pro-Poor Policy Development (PSPPD), which promotes evidence-based policy making (EBPM) in South Africa. EBPM refers to norms, initiatives and methods aimed at improving the use of evidence in policy making in countries from which South Africa traditionally borrows public service reforms, particularly the UK and Canada. The study provides a descriptive snapshot of attitudes to evidence-use in policy making. All 54 senior government officials interviewed felt that evidence-use is too limited to ensure relevant, effective policy responses, including for policies on which complex results depend and those with long-term and high-resource implications. Although all respondents regarded EBPM as self-evidently desirable, there were different views on its practical application. The examples provided suggest that, where evidence was used, it was very often related to a borrowed international policy without a prior evidence-driven analysis of its successes and failures or of its relevance and feasibility in terms of local issues and context. Policy makers generally know they should be making optimal use of available evidence, but highlighted systemic barriers that are beyond the influence of individual managers to resolve. The study suggests that improved use of evidence throughout the policy cycle, particularly in analysing problems and needs, is a requirement for learning through evidence-based policy development. It suggests that political and administrative leadership will need to agree on norms, on ways of dealing with the barriers to effective use of evidence, and on the role of each throughout the policy cycle in ensuring that appropriate evidence is available and used.

Introduction and background

This paper is based on a study commissioned by the Programme to Support Pro-Poor Policy Development (PSPPD)1 in the Presidency in October 2011 and presented to its conference in November 2011. The study aimed to provide a descriptive snapshot of current attitudes to the use of evidence in policy making in South Africa and to inform the activities to be carried out in the second phase of PSPPD. It is particularly relevant bearing in mind the creation of a Department of Planning, Monitoring and Evaluation in the Presidency in 2010 with an explicit mandate to strengthen the use of M&E evidence in government, and the challenge of how to do this most effectively.

The phrase ‘evidence-based policy making’ (EBPM) is shorthand for a set of norms, initiatives and methods, specifically systematic reviews, aimed at improving the use of evidence in policy making in countries from which South Africa has traditionally borrowed or adapted a range of public service reforms, particularly the UK and Canada:

‘Evidence-based policy is an approach to policy making that helps provide information by putting the best available evidence from research and evaluation at the heart of policy development and implementation … (it) challenges opinion-based policy making and ad hoc methods of decision making.’ (Davies 2004:3)

This approach stands in contrast to opinion-based policy, which relies heavily on either the selective use of evidence (e.g. on single studies irrespective of quality) or on the untested views of individuals or groups, often inspired by ideological standpoints, prejudices, or speculative conjecture. (Segone 2004:27)

Method and design

The study was largely descriptive and exploratory and suggested insights and possible directions for further study. It was intended to parallel a similar study done in the UK public service (Campbell 2007) and the interview frame was kept as consistent as possible with the core questions used in that study. However, differences in context, regulatory and normative expectations were taken into account. The UK study explored practice in relation to the requirements established by the UK Cabinet Office (UK Cabinet Office 1999a; 1999b; UK National Audit Office 2001; UK Treasury 2000). Unlike the UK, South Africa does not have explicit requirements for the use of evidence in policy making or an explicit set of criteria, norms or guidelines or standardised policy cycle that could be used as a basis for describing and analysing practice.

The study was based on interviews with 54 senior officials in 15 entities: 8 transversal departments at national level responsible for regulating and supporting the public service,2 the national departments of Education and Social Development, the Limpopo and Western Cape Premiers’ Offices and the Western Cape departments of Local Government and Basic Education. Most interviewees occupied senior posts: Director General (7), Deputy Director General (19) and Chief Director (19); 9 were directors or deputy directors. Interviewees were promised anonymity.

Interviews moved from open questions through a group of semi-structured exploratory questions to more structured questions. At the outset, respondents were asked to outline examples of effective and ineffective use of evidence in policy making, enabling an initial open exploration. The final, more structured questions explored attitudes and practice in regard to predetermined aspects of evidence-use and were analysed in terms of the specific normative and interpretative frameworks applied. Responses to the open questions were analysed in terms of whether the respondents regarded them as examples of effective or ineffective evidence-use in policy making; each category was then further analysed to determine whether there were patterns regarding where in the policy cycle evidence was used, how it was used and by whom, and, where possible, how this correlated with policy implementation and outcomes.

This study was done under considerable time pressure in order to ensure an indicative snapshot was available to inform the second phase of PSPPD. The open-ended nature of the majority of the questions and the fact that not all interviewees answered all questions presented some challenges for synthesis and interpretation.

Findings

The findings are presented in a number of areas: the sources of evidence people use; how they use EBPM concepts; the use of evidence in policy making; the use of evidence in the policy cycle; levels of influence; and factors impacting on the use of evidence.

Sources

Figure 1 shows the main sources of evidence that respondents identified. Thirty-eight of the 54 officials identified ‘my networks’ (‘my seven or eight trusted sources’; ‘my academic circle of friends’; ‘Sources are people I know’; ‘I contact the experts – a network of people we know’). Some of the same names came up repeatedly, suggesting that a limited set of voices is accessed or has access. Whilst administrative data was routinely used, problems with the reliability, coherence and consistency of data and systems were noted. Only six of the 54 officials used research papers, syntheses or literature reviews. Use of monitoring and evaluation was limited because current approaches to policy development and planning do not enable reliable diagnostic and adaptive approaches, and evidence that is available does not, in any case, get used in policy review decisions. Another source of evidence identified was international study tours or ‘benchmarking reports’ or, less often, literature reviews to identify ‘best practice’.

FIGURE 1: Main sources of evidence (numbers of respondents).

Attitudes and use of concepts: Evidence and EBPM

Shared perspectives

Officials across departments were unanimous about the urgent need to improve the use of evidence in policy making. However, significant variation in the understanding and use of key concepts, the norms applied and assumptions regarding the reasons for current problems was evident in the responses, which has implications for reform efforts.

Those interviewed drew little distinction between policy and planning. There was almost unanimous agreement with the UK and Canadian view (Davies 2004; 2011; Lomas et al. 2005; Mulgan 2008; Segone 2005; 2008) that EBPM should represent a move from opinion as the basis for policy to a more rigorous use of the available body of evidence. It was generally agreed that EBPM should replace the use of power derived from position as the basis for policy decision-making, although there were differences about what it should be replaced with.

Although officials recognised the need for improved methodology, there was no agreement among officials that the more linear and rigorously scientific approach to evidence-use associated with systematic enquiry is uniformly desirable. The majority of officials favoured a more heuristic, iterative approach, with some emphasising that other methods are better suited to policy issues in complex fields of knowledge and dynamic contexts requiring a more emergent, inclusive, context-specific approach to evidence and learning.

Divergent orientations and norms

Different opinions on what constitutes reliable evidence correlated with differences on what the key current problems are with evidence-use, their underlying causes and, therefore, differences regarding what should be improved, how this could be achieved and the institutional mechanisms and capacities that would be required.

There was a consistent pattern in the underlying normative assumptions about what constitutes adequate evidence that officials made in characterising ‘good’ and ‘bad’ examples of evidence-use. Although many officials said that what constitutes evidence spans a relatively wide spectrum of types of information, sources and methods, attitudes on what evidence was most desirable for EBPM fell into two main groups. These could be characterised as an orientation towards the use of evidence for primarily ‘predictive’ or for ‘formative’ purposes, which echoes the longstanding debate in the philosophy of science regarding the nature of knowledge and links to recent thinking on complexity and programme theory (Freiberg & Carson 2010; Mintzberg 1994; Mulgan 2008; Pawson 2006; Rogers 2008). Although this generalised distinction masks nuances within the two groups, and members may not identify fully with the type depicted, the groupings emerge sufficiently strongly and consistently to justify the distinction. Table 1 provides a selection of quotations from each group.

TABLE 1: Examples from interviews indicative of differences in orientation.

The following broadly represents the core of the two orientations and the numbers of officials who relatively consistently emphasised each view:

Predictive, scientific and objectively verifiable: Independent experts derive unambiguous facts through replicable, valid scientific methods providing objectively verifiable proof (emphasised by 15); or

Formative, emergent, probabilistic and contested: an iterative, heuristic search for better explanations and understanding of how to achieve politically derived values in which the choice of facts and sources is influenced by existing ideas, ideology, mind-set, values and interests and is subject to specific and changing contextual factors (emphasised by 32).

A small third group (7) did not explicitly articulate either view and/or straddled the two orientations and/or relatively consistently emphasised that the choice should be contingent on the type of policy and its context.

Although both the ‘predictive’ and ‘formative’ groups accepted that political leadership should have a role in policy making, they held different attitudes to the role of politics and political leadership in that process. Each group used one of two relatively common, but contrasting, images to describe how evidence should be used in policy making. Whilst they acknowledged that policy making is often messy in practice, the ‘predictive’ group tended to emphasise that good policy involves a direct, linear relationship between evidence and consequent policy decisions, and between these and the achievement of intended results. As far as possible, the desirable situation would be for policy decisions to be based on reliable, objective technical knowledge that would enable policy makers to predict the results of a particular policy with a high degree of certainty; reliable evidence is based on objective and replicable proof. In this view the role of political leadership is largely confined to setting the policy agenda by identifying the priority problems or issues that policy should address. From this point onwards, objective facts should be the basis for decision-making, which would involve adopting the ‘right’ policy response based on the available evidence. This group stressed the need to ‘get it right’ from the outset so as to avoid failure and a waste of resources. The ‘right’ policies enable effective centralised co-ordination and control through performance management-based checks to ensure that the policy is correctly implemented. It is assumed that, if reliable evidence is used, correct implementation will more or less automatically achieve the desired policy outcomes. The emphasis is, therefore, less on evaluation and more on performance management through monitoring implementation. Political leadership has a further role in oversight of implementation. Good examples of policy based on evidence for this group often entailed identifying (international) ‘best practice’.

The ‘formative’ group stressed an iterative formative cycle as the desirable approach. The key image for this group is a cyclical process through which evidence, used at the outset to determine what option would have the best probability of success, would be progressively improved based on experience. Evidence would translate into improved knowledge about what works, what does not and why, over time, in specific contexts. Evidence seldom points to one clear course of action and interpretations will differ, as will applicability to different contexts. They emphasise that values, interests and ideology shape the collection and interpretation of evidence and that it is, therefore, important to include those whose perspectives, interests, values, experience and understanding are important if the resulting decision is to be relevant, adequately informed, understood and actively supported.

This group also tended to stress the potential of the process of decision-making to enhance relevance and to build institutional capacity, understanding, momentum for change and commitment to implementation and to the application of future learning. Evidence is important throughout the policy cycle and in informing each new cycle, particularly through monitoring and formative evaluation. This group suggested that building understanding and agreement on the needs and context at the outset is crucial, both because locally appropriate responses to specific, locally assessed needs are required and in order to provide baselines. Technical knowledge has to be adapted if it is to be situationally appropriate, and there are always choices to be made in response to particular circumstances that affect different interests and needs. This group sees a different role for political leadership: playing a significant part in setting the agenda in the public interest and in making decisions at key points in the policy cycle where choices that influence ‘who benefits, how, and who pays’ need to be informed by the best available knowledge.

Attitudes to EBPM and systematic reviews

All those interviewed regarded EBPM as self-evidently a good thing, ‘Evidence-based policy making? It is too obvious – can you do policy any other way?’ (Interview, Director General), but acknowledged that what this means in practice and how it is to be achieved are far from obvious. However, there was a wary response to ‘evidence-based policy’ and, from the formative group, to systematic reviews, as ‘the solution’. The issues and challenges facing policy making and policy makers need to be analysed first in order to enable an appropriate solution to be found. Almost all senior officials are aware that they should be using evidence; building increased awareness of the need for evidence, or capacity to assess and use evidence, will therefore not be adequate on its own. An enabling environment for the application of such awareness and capacity should first be created. Those with a ‘formative’ orientation stressed that, in borrowing this reform initiative because of its prominence in other countries, we may fail to understand and address our own needs.

Many in the ‘formative’ group noted that high levels of complexity and instability often typify policy contexts in South Africa and make rapid cycles of evidence-driven learning more appropriate than trying to use systematic reviews to identify ‘best practice’ that can be successfully transplanted or adapted.

Over half of the officials expressed some concern that EBPM was just another piecemeal initiative rather than part of a coherent institutional capacity development programme designed to address local problems in contextually relevant ways. They felt that national transversal departments tend to create multiple initiatives that are not adequately coherent and aligned or developed and improved through monitoring and evaluation. The following quotations indicate typical concerns emerging from the interviews:

‘We need to improve the policy methodology, including participation processes, not just evidence.’ (Director General)

‘Is EBPM different from just making good use of the policy cycle?’ (Director General)

‘EBPM is an attempt to remove politics from the policy process – we can’t and shouldn’t.’ (Deputy Director General)

‘Randomized controlled trials, not relevant to most social policy issues – can’t control all the variables, and will never be able to prove attribution. Social systems are not static enough and are too complex to make it worth it.’ (Deputy Director General)

Current practice: The use of evidence in policy making

Of the 39 examples of evidence-use in policy making that were given, only eight were examples of ‘good’ use. Interestingly, a further eight were offered as both good and bad examples by different interviewees. This appears to be further evidence of conceptual and normative differences as discussed above. Specifically, a number of different people gave examples of policy responses borrowed from other countries but judged differently as either good or bad based on either approval of borrowing from ‘international best practice’ or on disapproval of borrowing without ensuring relevance to context and needs.

Many of the challenges related to the effective use of evidence parallel those in the UK study (Campbell 2007). Some, however, appear to be specific to the South African policy context. Officials were very frank about problems with policy development, giving as examples processes for which they took responsibility, providing insight into factors promoting or inhibiting effective use of evidence.

Respondents almost unanimously noted that, although some evidence is used, and this is improving, it is seldom adequate to ensure reliable and informed policy decision-making, even when evidence is available and the level of risk, long time frames and/or resource implications indicate that every effort should be made to identify the policy choice with the highest potential for success. The majority of respondents gave examples of important policy decisions made with limited reference to evidence or in the face of evidence supporting alternative approaches.

Whilst the expected tension between the challenges of real-world policy making and normative models for how policy should be developed was acknowledged, all officials except one noted widespread failure to ensure adequately informed decision-making and stressed that improvement was urgent. They noted that, in key policy areas, poor policy outcomes and weak implementation could have been avoided by improved use of existing evidence in a systematic policy cycle.

Where evidence is used, it is generally used to motivate, persuade or defend decisions already made, often to secure funding from Treasury. This use of evidence has been described by Beyer as ‘symbolic’, and ‘involves using research results to legitimate and sustain predetermined positions’ (Amara, Ouimet & Landry 2004:17). Officials note that this is extremely common, as ‘ad hoc’ policy decisions are the norm and have to be justified and ‘made to work’ after the fact.

In many of the negative examples there was no time for consideration of even minimal evidence, even when the decisions had very significant implications. Over a third of those interviewed could not think of an example of an effective use of evidence, and those that did usually indicated that there was still significant room for improvement. This may not be a very abnormal situation in the bumpy terrain of real-world policy making, but the extent to which this is reportedly the case for major foundational policies is probably unusual. Whilst officials gave some outstanding examples of efforts to improve the use of evidence, the overwhelming picture was of significant systemic weaknesses in the policy development process that go beyond the use of evidence. A systematic policy process that lays the ground for evidence-based policy development is reportedly rare to non-existent.

Use of evidence in the policy cycle

‘If I know the best option won’t fly, I present the second best option.’ (Interview, Director General)

Officials were asked where in the version of the policy cycle below (a composite representation drawing on Booth 2011; Cable 2003; Carden 2009; Hayes 2002; Lomas et al. 2005; Segone 2005; Sutcliffe & Court 2006; Sutton 1999; Unicef 2005) they would relatively routinely use evidence.

Policy agenda setting: Evidence of important need for intervention.

Analysis of needs, problem, causes, options and operational feasibility: Evidence needed to identify the most relevant and feasible policy option and develop a testable causal hypothesis for the intervention (a theory of change).

Design: Evidence on how best to implement, operationalise, monitor and evaluate the policy decision.

Implement and monitor: Evidence on progress and whether operational assumptions are working out as expected.

Evaluate: Evidence on whether the expected changes and results (outcomes and impact) were achieved, what did and did not work, why, and how to improve results. See Figure 2.

FIGURE 2: The stages of the policy cycle.

Again, almost all those responding indicated that some evidence was used but that it was often not adequate to enable robust decision-making at any stage in the cycle, with significant, and sometimes disastrous, effects on policy outcomes. Poor use of evidence in the policy cycle, and in establishing a basis for effective M&E, was regarded as a key causal factor: it weakens the capacity of government to use evidence heuristically for evidence-driven learning and policy improvement, leading to high levels of policy change that are not driven by reliable learning.

The following is a simplified depiction of the predominant trends and gaps in the use of evidence in the policy cycle (Figure 3).

FIGURE 3: Pattern of problems with use of evidence emerging from the interviews.

Key observations on problems related to the policy cycle were:

Typically, the policy agenda is not set by evidence-informed proactive political decisions about priorities but often by incident or anecdote. The formative group felt that failure to use evidence effectively in the agenda-setting phase, particularly monitoring and evaluation evidence, was a key reason for policy weakness and failure. In only six of the 39 examples was the decision on the need for policy development based on information derived from M&E. The predictive group often emphasised the identification of international ‘best practice’ as the starting point for policy development, jumping directly from there to intervention design, or even to implementation based on the borrowed design.

In 27 of the 31 examples of ‘bad’ use of evidence, the absence of effective problem, needs or options analysis was explicitly highlighted. In the majority of policy processes there had been a jump from the perceived need for a policy response (agenda setting) directly to a policy ‘solution’ without adequate evidence on the nature and extent of the problem, who it affects, how they see their needs, the possible causes of the problem and possible policy options for tackling the root causes (rather than the symptoms). Available information from research, as well as administrative data, was seldom effectively used. Recent improvements in the inclusion of beneficiaries through a participative analysis of needs were noted but, although inclusiveness and participation are often essential for policy relevance and success, they are generally given little time and attention. Twelve of 39 policies were reportedly based on a borrowed policy concept and design with little or no reference to evaluative evidence of effectiveness, analysis of relevance to local needs and context, or potential alternatives.

Several noted that some policy options are ‘taboo’, even if all the evidence points to their being the best option in the circumstances. Officials screen evidence, often based on political assumptions, thus limiting political principals’ capacity to make informed choices. Policy makers did not generally request or assess options before deciding on a policy approach. Implementation requirements and feasibility seldom inform choices about the best and most sustainable policy option.

Even where some analysis is done, a clear ‘theory of change’ or testable hypothesis spelling out how it is assumed the intervention will work is seldom made explicit, making evaluation and ongoing evidence-driven learning and policy improvement difficult.

The intervention design was felt by half of those responding to be usually better informed by evidence than many of the other phases of the policy cycle. However, this often takes the form of adopting ‘best’ or good practice models. Whilst the ‘outcomes-based approach’3 is regarded as having improved the extent to which evidence is used to design, monitor and evaluate the intervention logic and implementation plan, this was often focused on outputs rather than on whether these will or do address the needs of citizens. The example of counting hectares of land transferred rather than assessing change in sustainable rural livelihoods was given by a number of people.

Managers reported that monitoring information is extensively collected and used to pick up implementation problems or blockages in some policy areas, but not all. However, approximately half noted that multiple frameworks and formats for reporting and oversight create a confusing mass of information that is difficult to use adequately for the improvement of policy. Systems for the collection and analysis of implementation information are often designed to pump information up the system for control purposes, and the information is seldom analysed and used by front-line units for learning purposes.

Good evaluation evidence is reportedly rare or non-existent but improving. Planned formative evaluations of different kinds, randomised controlled trials, various kinds of counterfactual studies and impact evaluation were mentioned. Officials highlighted little appetite in government for transparent and effective evaluation and failure to establish an effective basis for evaluation in earlier phases of the cycle as key problems. When anticipated policy results are not achieved as expected, it is seldom possible to determine whether this resulted from poor policy formulation, design or implementation. Policies cannot, therefore, be improved using reliable evidence about what does and does not work, and so are abandoned, amended or continued based largely on opinion. Some noted that, when a policy appears not to be working, it may be left to simply continue, whilst a different policy, based on similarly little evidence, is added. The result is multiple uncoordinated and ineffective initiatives.

Although nearly all officials noted high degrees of complexity in their policy areas, implying that the knowledge base is emergent and highly reliant on learning,4 the ‘predictive’ and ‘formative’ split remained consistent. The predictive group did not emphasise formative evaluation, as there was often an assumption that ‘best practice’ would ‘work’ if only implemented effectively and/or compliance was achieved. Thirty-two of the 54 officials believed that, given the complexity, variation and dynamism of the policy issues in South Africa, a formative approach to policy was needed – using the cycle to produce evidence-based learning to drive policy improvement, with a general reliance on interpreting patterns and trends rather than trying to prove strict causality and attribution. Many felt that planning, monitoring and evaluation frameworks in government do not effectively support a cycle of evidence-driven policy improvement. Extensive dependence on consultants to develop policy limits the development of internal capacity to test and further develop policy.

Officials believe the lack of consistent and effective use of a policy cycle is responsible for the tendency to zigzag from one policy to another when expected results do not materialise, rather than adaptively improving policy based on experience. This may explain the high levels of policy instability noted as a significant problem by the National Planning Commission (2011).

Although there were differences on how the cycle should be used and introduced, there was overwhelming support for applying a standardised policy process, allocated adequate time and integrated with coherent planning, information, M&E and reporting frameworks and systems designed to enable improved cooperation and co-ordination of action and information across government and spheres. The following is a composite of the suggested phases (Figure 4).

FIGURE 4: Stages of policy cycle suggested by officials.

Levels of influence and inclusion5

Of 33 respondents on this issue, 23 indicated that the perspectives of beneficiaries have very little influence whilst the pattern of involvement of (mainly government) implementers and those that are the ‘targets’ of policy is more varied.

As Figure 5 indicates, beneficiaries are treated as ‘consumers’ of policy. Implementers might be drawn into cooperative relationships through involvement or simply expected to comply. The direct target group of the policy, who are usually expected to change as a result of the policy, are reportedly far less included in decision-making. Nineteen of 33 respondents believe that these key role-players are mostly expected to passively ‘consume’ or comply with policy decisions that they have not had any significant role in shaping, which do not draw on their understanding and experience or their perceived needs. In comments on the policy cycle, most officials felt that improved input of beneficiaries, implementers and ‘targets’ was a necessary condition of improved relevance and policy success. Inclusion directly impacts on the effectiveness, commitment and understanding of implementers.

FIGURE 5: Current levels of involvement of role players (numbers of respondents indicating which groups are generally involved in policy processes in their policy area and the level of involvement).

For some officials, differences of opinion on a policy signalled that participation should be minimised and tightly controlled, whilst for others, potential controversy was the signal for opening discussion to deepen understanding and coherence.

Although it was generally recognised that, in some cases, compliance is all that is needed, members of the ‘formative’ group suggested that the ‘default’ response of many policy makers to a problem is to impose regulations even where it is necessary to win more informed and active support. This difference of approach was also evident in regard to views on how evidence-use could be improved. Those using the predictive image tended to stress increasing regulation and control as the means to improve evidence-use, whilst the ‘formative’ group argued that evidence shows that this approach is seldom successful.

Factors impacting on effective use of evidence

‘[We are] running to stand still.’ (Director General)

As noted above, a key issue arising from the interviews is the lack of common norms and standards to guide and assess policy development practice. Of the 39 examples given, only eight were of ‘good’ use, and a further eight were offered as both good and bad examples by different interviewees, indicating significant conceptual and normative differences. In particular, a number of different people gave examples based on policies borrowed from other countries that were judged as good or bad depending on whether the respondent approved of borrowing ‘international best practice’ or disapproved of borrowing without ensuring relevance to context and needs.

The key barrier identified to the effective use of evidence in policy making was the limited time allocated to the policy process and to any one phase of the process, often as a result of political pressure. This was linked to, and closely followed by, ‘culture’, specifically the lack of value placed on learning, research, expertise and open debate, as well as mistrust between political leadership, officials and experts. Other factors most often noted were fixed attitudes and preconceptions, vested interests, weak capacity and planning systems, the politicisation of senior officials and restricted access to experts.

The availability of reliable evidence was raised as a constraint, but not a decisive one. Policy processes very seldom include an assessment of what information is required for effective decisions and what information is available. Evidence used is ‘not based on what information is needed but what is to hand, lots of unstructured information [is] used at senior level’ (Director General). Basic data sets such as personnel, unemployment, school enrolment and learner retention levels are regarded as unreliable (‘we may be out by 10 to 20%’). Baselines drawn from these data sets are, therefore, compromised, which will limit evaluation and learning.

Positive examples given included exceptional initiatives to improve the information base for policy, which were used to initiate a policy process or to identify the need for policy improvement. Evidence collected through evaluation was used at the start of the policy process in six of the 39 examples, but this was reportedly hard and often unappreciated work. Difficulties were mostly unrelated to the technical challenges of ensuring reliable research; rather, they lay in the arduous processes of convincing principals of the value of research and of overcoming procurement barriers:

‘It is a tough job to get policy research on the agenda – there are so many role players and factors. People look for quick ways of pushing it through. It matters more who is pushing, not the evidence. We take the route of least resistance because producing hard evidence is a tough job and nobody thanks you for it.’ (Deputy Director General)

Even if there is investment in research, it is not always used to inform the ultimate decision made. Of the eight positive policy examples given, at least six were of extensive (and costly) research that was done but largely disregarded in the final policy decision. Two examples were given of over R15m spent on research to inform a key policy decision, only for the research to be ultimately ignored. A number of examples reported high levels of stakeholder involvement, including action research processes, which built effective momentum and commitment for change, only to be bypassed in the decisions made.

Summary of what must change, why and how

‘Why are our results not improving? I don’t have time for why.’ (Deputy Director General)

Just as views of optimal use of evidence fell into the two broad groups, responses on what must change and how it can be achieved correlated with the predictive and formative orientations. At times officials noted the need for a situation-dependent or contingency approach (Mintzberg 1993; Rogers 2008; Snowden & Boone 2007), but this was generally embedded in predispositions towards one or other orientation, as evident in the patterning of suggestions on what should change, how this change can be achieved and the distribution of roles, responsibilities and capacities. The discussion above has touched on many of the issues; the following summarises the key points emphasised by each group.

Predictive emphasis

‘Demand from Cabinet would change how policy is done. If politicians are willing to do things on hunches, it makes it difficult for DGs to invest the time and resources for evidence.’ (Director General)

The following recommendations emerged from the group with a predictive emphasis:

  • Use a standardised policy cycle to ensure that best practice is identified and implemented through effective monitoring and management.
  • Specify uniform requirements and standards for the use of evidence and for policy making in general using best practice internationally.
  • Centralise accountability and management mechanisms for ensuring and enforcing compliance with standards.
  • Build capacity at the centre to detail and manage compliance but actual policy development should be done by external experts.
  • Build capacity of political leadership to oversee compliance and use expert advice to set the policy agenda effectively.
  • Achieve intergovernmental co-ordination of policy through centralised management systems.

Formative emphasis

‘This [EBPD] is more likely to lead to change if the methodology is right – participation will get learning throughout the system. You can’t ignore how the problem looks from different perspectives without a big risk of resistance or defiance. We exclude others’ views on the basis that they are self-interested but then let powerful lobbies determine policy …’ (Deputy Director General)

Recommendations emerging from the formative group were:

  • Build improved responsibility and learning throughout the system through a more decentralised approach appropriate to the high levels of complexity and dynamism of the policy context.
  • Build commitment to change through an inclusive analysis of reasons for current policy development practice, the problem and needs.
  • Enable culture and behaviour change by involving role players to build partnership and active, informed support.
  • Build oversight by involving beneficiaries and their representatives in the legislatures.
  • Agree a standardised policy cycle enabling transparency and ongoing improvement of policy outcomes through reliable evidence-based learning.
  • EBPM should not be used to pre-empt analysis of local needs and identification of a relevant policy response.
  • Build strong capacity for evidence-informed policy development, monitoring and evaluation inside the public service, including to effectively manage consultation and partnerships and draw in, and on, available expertise and bodies of knowledge.
  • Political leadership requires capacity to ensure that evidence is available and used throughout the policy cycle to inform effective political decision-making.
  • Make reputations count – peer review is an important tool for improving policy.

Conclusion

‘The question is “why don’t we use evidence?” We know we should.’ (Director General)

This was an exploratory study based on interviews with 54 senior managers. The data are necessarily limited but present an indicative picture of how evidence and EBPM are viewed by South African policy makers. The results have proved very useful, for example in running training in EBPM for senior managers in government: three cycles of a course for the top two levels of managers in the public service, drawing on these results, have now been run.

There was overwhelming agreement on the seriousness of the impact of current practice on policy outcomes and about the urgency and importance of change. It is also significant that the majority of respondents were officials from transversal departments responsible for creating an enabling environment for the effective functioning of the public service: the need for change is already recognised by these crucial role players.

This study points to a number of institutional issues that should be addressed, particularly role clarification between political and administrative leadership, the development of a standardised policy cycle and increasing capacity within government to develop and implement policy. Although limited, the snapshot points to the need to provide a reliable understanding of why, as officials noted, even available evidence is not used adequately to inform policy decisions. The range of barriers and issues identified suggest that simply transplanting EBPM reforms applied by other countries is not likely to have the kind of impact required to improve policy outcomes.

The interconnectedness of the range of issues and role players suggests that further study is necessary to understand the problems of policy development in South Africa, and in the context of different disciplines, sectors and spheres. This is likely to require a systemic, whole-of-government approach that addresses all the interconnected factors (OECD 2004).

However, it is also clear that there are significant and relatively deep-seated differences about what constitutes optimal use of evidence, even within this limited sample of officials. These differences inevitably affect perceptions of the problems, what needs to change and what enabling conditions would be needed to support such change. This study itself suggests that norms and assumptions about what is important directly influence the identification and assessment of ‘evidence’. A wider diversity of views is likely when the experiences, attitudes and understanding of a range of other key role players from different spheres, sectors and disciplines, as well as role categories such as executive, administrative and political leadership and oversight bodies, are added.

The public sector reform literature, general literature on organisational change and the literature on complexity and programme theory (Freiberg & Carson 2010; Mintzberg 1994; Mulgan 2008; Pawson 2006; Rogers 2008) suggest that reforms that require culture and behaviour change involving relatively high levels of understanding and cooperation from a wide range of actors cannot be secured simply through detailed instruction and control. This study suggests that an initiative to improve policy would need to optimally include all the relevant role players, including the political leadership, in examining available evidence in order to build understanding and agreement on the current problems, what to aim for, what would need to change, how this might vary from context to context and how to achieve it. This process in itself could contribute to building institutional and individual capacity, commitment and energy for change.

The Planning Commission provides a relevant warning on quick fixes and borrowed ‘solutions’:

Many of the problems with public sector performance have to do with deeply rooted systemic issues, and there is no ‘quick fix’ substitute for a long-term and strategic approach to enhancing institutional capacity. (National Planning Commission Diagnostic Review 2010:22)

Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Authors’ contributions

The paper was written by Gemma Paine Cronin, based on a study she undertook in 2011. Mastoera Sadan, Director of the PSPPD, who commissioned the study, and Ian Goldman, Department of Performance Monitoring and Evaluation in the Presidency, provided comment and support in the conceptualisation, conduct and completion of the study. Philip Davies, who conducted the UK research, also contributed advice and support to the original study.

References

Amara, N., Ouimet, M. & Landry, R., 2004, ‘New evidence on instrumental, conceptual and symbolic utilization of university research in government agencies’, Science Communication 26(1), 75–106. http://dx.doi.org/10.1177/1075547004267491

Booth, D., 2011, ‘Working with the grain and swimming against the tide: Barriers to uptake of research findings on governance and public services in low-income Africa’, Working Paper 18 published on behalf of the Africa Power and Politics Programme (APPP) by the Overseas Development Institute, London.

Cable, V., 2003, ‘Does evidence matter?’, Overseas Development Institute, May 2003, London.

Campbell, S., 2007, ‘Analysis for policy: Evidence-based policy in practice’, Government Research Unit, HM Treasury.

Carden, F., 2009, Knowledge to policy: Making the most of development research, Sage, New Delhi.

Datta, A., Jones, H., Febriany, V., Harris, D., Kumala Dewi, R., Wild, L. et al., 2011, ‘The political economy of policy-making in Indonesia: Opportunities for improving the demand and use of knowledge’, Overseas Development Institute, July 2011, London.

Davies, P., 2004, ‘Is evidence-based government possible?’, Jerry Lee Lecture presented at the 4th Annual Campbell Collaboration Colloquium, Washington, DC.

Davies, P., 2011, ‘The state of evidence-based policy evaluation and its role in policy formation’, paper prepared for the National Institute Economic Review’s Special Issue: ‘Still Evidence-Based? The Role of Policy Evaluation in Recession’, Sage, London.

Freiberg, A. & Carson, W.G., 2010, ‘The limits to evidence-based policy: Evidence, emotion and criminal justice’, Australian Journal of Public Administration 69(2), 152–164, Wiley-Blackwell on behalf of the Institute of Public Administration Australia.

Hayes, W., 2002, The public policy cycle, School of Social Science and Human Service, Ramapo College, NJ.

Lomas, J., Culyer, T., McCutcheon, C., McAuley, L. & Law, S., 2005, Final report: Conceptualizing and combining evidence for health system guidance, Canadian Health Services Research Foundation, Ottawa.

Mintzberg, H., 1993, Structure in fives: Designing effective organisations, Prentice Hall, Englewood Cliffs, NJ.

Mintzberg, H., 1994, The rise and fall of strategic planning: Reconceiving roles for planning, plans and planners, The Free Press, New York, NY.

Mulgan, G., 2003, Facing the Future Conference, Canberra, Australia.

Mulgan, G., 2008, The art of public strategy: Mobilising power and knowledge for the common good, Oxford University Press, Oxford.

National Planning Commission, 2010, ‘Diagnostic review’, Pretoria.

National Planning Commission, 2011, ‘National Development Plan: A vision for 2030’, 11 November 2011.

OECD, 2004, ‘Public sector modernisation: Changing organisational structures’, OECD Policy Brief, September 2004, Paris.

Ouimet, M., Landry, R., Ziam, S. & Bédard, P., 2009, ‘The absorption of research knowledge by public civil servants’, Evidence & Policy 5(4), 331–350.

Pawson, R., 2006, ‘Assessing the quality of evidence in evidence-based policy: Why, how and when?’, ESRC Research Methods Programme, Working Paper No. 1, University of Southampton, Southampton.

Rogers, P.J., 2008, ‘Using programme theory to evaluate complicated and complex aspects of interventions’, Evaluation 14(1), 29–48. http://dx.doi.org/10.1177/1356389007084674

Segone, M. (ed.), 2004, Bridging the gap: The role of monitoring and evaluation in evidence-based policy making, Evaluation Office, New York.

Segone, M., 2005, ‘Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems’, in M. Segone (ed.), From policies to results: Developing capacities for country monitoring and evaluation systems, UNICEF Evaluation Office, New York, NY.

Segone, M. (ed.), 2008, Country-led monitoring and evaluation systems: Better evidence, better policies, better development results, UNICEF Evaluation Office, New York.

Snowden, D.J. & Boone, M.E., 2007, ‘A leader’s framework for decision making’, Harvard Business Review 85(11), 68–76. PMID: 18159787.

Sutcliffe, S. & Court, J., 2006, A toolkit for progressive policy makers in developing countries, Research in Policy Development, ODI Toolkit.

Sutton, R., 1999, ‘The policy process: An overview’, ODI Working Paper 118.

UK Cabinet Office, 1999a, ‘Modernising government’, White Paper, Cabinet Office, London.

UK Cabinet Office, 1999b, Professional policy making for the 21st Century, Cabinet Office, London.

UK National Audit Office, n.d., Magenta Book, Guidance Notes on Policy Evaluation, London.

UK National Audit Office, 2001, Modern policy-making: Ensuring policies deliver value for money, National Audit Office, London.

UK Treasury, 2000, ‘Adding it up’, London.

UNICEF, 2005, ‘From policies to results: Developing capacities for country monitoring and evaluation systems’, UNICEF, Paris.

Footnotes

1. The PSPPD is a partnership between the Presidency of South Africa and the European Union. The main objective of the Programme is to increase the use of research and other evidence in the policy making and implementation process.

2. National Planning Commission, Public Service Commission, Deputy President’s Office, Departments of Monitoring and Evaluation, Public Service and Administration, Cooperative Governance, Economic Development and Finance.

3. Established in 2010 by the Presidency.

4. Officials were asked to indicate how they would characterise the knowledge base in their policy area and how this influences policy making using descriptors for simple, complicated and complex (Glouberman & Zimmerman, in Rogers 2008; Mulgan 2003; Mintzberg 1994).

5. Officials were asked to identify the current levels of involvement of three groups in providing, collecting or interpreting evidence using a continuum of levels of participation adapted from DFID.


 
