About the Author(s)


Ian Goldman
Department of Planning, Monitoring and Evaluation (DPME), Pretoria, South Africa

Centre for Learning on Evaluation and Results, Anglophone Africa, Johannesburg, South Africa

Carol N. Deliwe
Department of Basic Education, Pretoria, South Africa

Stephen Taylor
Department of Basic Education, Pretoria, South Africa

Zeenat Ishmail
Strategic Management Information, Western Cape Provincial Government, Cape Town, South Africa

Laila Smith
Centre for Learning on Evaluation and Results, Anglophone Africa, Johannesburg, South Africa

Thokozile Masangu
Evaluation and Research, Department of Rural Development and Land Reform, Pretoria, South Africa

Christopher Adams
Provincial Budget Analysis, Intergovernmental Relations, National Treasury, Pretoria, South Africa

Gillian Wilson
National Treasury, Pretoria, South Africa

Dugan Fraser
Centre for Learning on Evaluation and Results, Anglophone Africa, Johannesburg, South Africa

Annette Griessel
Policy, Stakeholder Coordination and Knowledge Management, Department of Women, Pretoria, South Africa

Cara Waller
Twende Mbele – an African M&E government partnership, Johannesburg, South Africa

Siphesihle Dumisa
Department of Planning, Monitoring and Evaluation (DPME), Pretoria, South Africa

Alyna Wyatt
Evaluation for Development, Genesis Analytics, Johannesburg, South Africa

Jamie Robertsen
Evaluation for Development, Genesis Analytics, Johannesburg, South Africa

Citation


Goldman, I., Deliwe, C.N., Taylor, S., Ishmail, Z., Smith, L., Masangu, T. et al., 2019, ‘Evaluation2 – Evaluating the national evaluation system in South Africa: What has been achieved in the first 5 years?’, African Evaluation Journal 7(1), a400. https://doi.org/10.4102/aej.v7i1.400

Original Research

Evaluation2 – Evaluating the national evaluation system in South Africa: What has been achieved in the first 5 years?

Ian Goldman, Carol N. Deliwe, Stephen Taylor, Zeenat Ishmail, Laila Smith, Thokozile Masangu, Christopher Adams, Gillian Wilson, Dugan Fraser, Annette Griessel, Cara Waller, Siphesihle Dumisa, Alyna Wyatt, Jamie Robertsen

Received: 03 Apr. 2019; Accepted: 15 May 2019; Published: 28 Aug. 2019

Copyright: © 2019. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: South Africa has pioneered national evaluation systems (NESs) along with Canada, Mexico, Colombia, Chile, Uganda and Benin. South Africa’s National Evaluation Policy Framework (NEPF) was approved by Cabinet in November 2011. An evaluation of the NES started in September 2016.

Objectives: The purpose of the evaluation was to assess whether the NES had had an impact on the programmes and policies evaluated, the departments involved and other key stakeholders; and to determine how the system needs to be strengthened.

Method: The evaluation used a theory-based approach, including international benchmarking, five national and four provincial case studies, 112 key informant interviews, a survey with 86 responses and a cost-benefit analysis of a sample of evaluations.

Results: Since 2011, 67 national evaluations have been completed or are underway within the NES, covering over $10 billion of government expenditure. Seven of South Africa’s nine provinces have provincial evaluation plans and 68 of 155 national and provincial departments have departmental evaluation plans. The system has therefore spread widely, but there are issues with the quality of evaluations and with the time they take. Use was difficult to assess, but the case studies suggest that instrumental and process use were widespread. Evaluations appear to offer a high return, of between R7 and R10 per rand invested.

Conclusion: The NES evaluation’s recommendations for strengthening the system ranged from legislation to strengthen the mandate and greater resources for the NES, to strengthening capacity development, communication and the tracking of use.

Keywords: evaluation; national evaluation system; evaluation system; South Africa; cost-effectiveness; M&E; evaluation capacity development; institutionalisation; evaluation use; evidence use.

Introduction

The South African Cabinet approved a Government-Wide Monitoring and Evaluation System in November 2007. The three domains included were programme performance information, socio-economic and demographic statistics and evaluation. The first two frameworks were produced in 2007 by National Treasury and Statistics South Africa, respectively. The Department of Performance (later Planning), Monitoring & Evaluation (DPME) was established in the Presidency in South Africa in January 2010. In November 2011, Cabinet adopted a National Evaluation Policy Framework (NEPF) (DPME 2011) providing the last of these three domains. An Evaluation and Research Unit (ERU) was established in DPME in September 2011 to operationalise the National Evaluation System (NES).

The DPME was envisioned as the ‘champion’ of government-wide monitoring and evaluation (M&E) in South Africa, with its primary goal being to improve government’s performance and impact on society through a strategic M&E approach to managing government’s priority outcomes (Phillips et al. 2014). Since 2011, the NEPF has guided government efforts towards building a formal and integrated NES, and evaluation has taken root across the national and provincial spheres. In a 2015 special edition of the African Evaluation Journal on the South African NES, Goldman et al. (2015) described the establishment of the NES, while other papers in that edition addressed specific components of the NES or the first evaluations. By the time of this evaluation, DPME had over 300 staff and the ERU had 15 to 16 staff.

Five years into implementing the NES, DPME started an evaluation of the entire system. The purpose of the evaluation was to assess whether implementation of the NES since 2011/12 was having an impact on the programmes and policies evaluated, the departments involved and other key stakeholders, and to determine how the system needs to be strengthened.

South Africa has become an important international policy experiment in the establishment of an NES, so the results of this evaluation are important to share. This article briefly summarises the methodology for the NES evaluation and the lessons learned from establishing the NES, and presents key findings and recommendations from the evaluation. It gives an overview of the achievements, analyses the system component by component using Holvoet and Renard’s characteristics (see below), and then draws this together by looking at the institutionalisation of the system, the use of evaluations and, finally, recommendations for changes. The article draws from the full evaluation report (DPME 2018c) as well as the summary report (DPME 2018b).1

Research methods and design

The evaluation questions included:

Relevance, effectiveness and efficiency: (1) How is the evaluation system working, as a whole and in its specific components, and how can these be strengthened? (2) What is the value for money in establishing the NES? (3) Are there other evaluation mechanisms that need to be included to maximise the benefits accrued to the government?

Impact: (1) Is there initial evidence of symbolic,2 conceptual3 or instrumental4 outcomes from evaluations? If not, why? (2) What is the evidence available in relation to evaluations contributing to planning, budgeting, improved accountability, decision-making and knowledge?

Sustainability and upscaling: (1) How should the balance between internal and external evaluations be managed going forward? (2) What changes should be made to the NEPF and the evaluation support system to improve the quality of evaluations and expand the system?

The evaluation was undertaken by an independent service provider, Genesis Analytics. It was guided by a theory-based and case study approach informed by a literature review and international benchmarking. A survey was conducted with 86 responses and there were 112 key informant interviews. Respondents included stakeholders of the NES from national and provincial departments; international partners and stakeholders that are not directly involved in the NES, such as parliament and the South African M&E Association (SAMEA); and service providers that have undertaken evaluations within the NES. The theory-based approach was used to effectively unpack the strategies, causal mechanisms and dependencies for the establishment and implementation of the NES. The benefit of a theory-based approach is that theories of change systematically depict key objectives and the steps required to achieve these and any inherent assumptions (Brousselle & Buregeya 2018). The initial version of the theory of change was developed based on the ERU operational log frame produced in 2010 whilst two stakeholder workshops refined the theory of change and allowed the Steering Committee to come to a consensus on their conception and understanding of the NES (DPME 2018b:8).

The literature review identified a useful theoretical framework for characterising NESs in developing countries: Holvoet and Renard’s six characteristics of emerging NESs, namely policy, methodology, organisation, capacity, participation and use (Holvoet & Renard 2007). In this framework, use is the defining purpose of an NES: a system that uses information collection, analysis and feedback for results-based budgeting and management, iterative learning, and evidence-based priority setting and policymaking. The analytical framework and the results of the evaluation were framed using the evaluation questions and the theory of change. The international benchmarking against Uganda and Benin has been shared in Goldman et al. (2018). Table 1 summarises Holvoet and Renard’s framework as adapted for the evaluation (DPME 2018c:4–5) and shows where the different elements are covered in the article.

TABLE 1: Six descriptive characteristics of a National Evaluation System linked to the evaluation questions.

Case studies of sector departments and provinces were included to assess in more depth how different institutional structures within the NES fit together in specific cases and how these different elements contribute towards the desired objectives captured in the NES theory of change. The5 national departments6 and provinces7 selected were representative of different levels of engagement with the NES, both early and later adopters of evaluation (DPME 2018b:9–11).

In order to assess the value for money of the NES, three indicative evaluations were analysed: two National Evaluation Plan (NEP) evaluations undertaken with national departments and one provincial evaluation. These were selected because detailed cost and benefit data were available.8 A cost-benefit ratio was derived in each case to convey the value of conducting evaluations in these cases (DPME 2018b:17).
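As an illustration of the underlying arithmetic, such a ratio compares the monetised benefits attributed to acting on an evaluation with the cost of commissioning and managing it. The minimal Python sketch below uses hypothetical figures (the function name and the amounts are assumptions for illustration, not data from the NES evaluation):

    # Minimal sketch of a cost-benefit ratio for a single evaluation.
    # All figures are hypothetical placeholders, not NES evaluation data.
    def cost_benefit_ratio(evaluation_cost: float, monetised_benefits: float) -> float:
        """Return the rand of benefit obtained per rand spent on the evaluation."""
        return monetised_benefits / evaluation_cost

    # e.g. an evaluation costing R2.5 million whose recommendations are estimated
    # to have yielded R25 million in savings or added value
    ratio = cost_benefit_ratio(2_500_000, 25_000_000)
    print(f"Cost-benefit ratio: 1:{ratio:.0f}")  # prints 'Cost-benefit ratio: 1:10'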

Findings on how the National Evaluation System is working

How the National Evaluation System is working overall

This section is structured essentially according to Holvoet and Renard’s characteristics (see Table 1). In terms of the policy characteristic, the NEPF proposes the strategic selection of priority evaluations, recognising that there is not the capacity or resources to evaluate all programmes or policies, and the selection is expressed through national, provincial and departmental evaluation plans. There have been annual National Evaluation Plans since 2012. The Western Cape and Gauteng have had provincial evaluation plans since 2013, and early adopter departments have had departmental plans from a similar period.

In total, 73 evaluations have been selected in NEPs, covering around US$10 billion of government expenditure, with 67 taken forward (DPME 2018b). Seven provinces have had provincial evaluation plans, with 102 evaluations identified in these plans (Minister of DPME’s budget speech 2018). As of October 2017, 68 of 155 national and provincial departments had departmental evaluation plans, with over 300 evaluations in those plans (Minister of DPME’s budget speech 2018). Sets of evaluations in sectors such as Human Settlements, Social Development, and Agriculture and Rural Development have been completed and offer the opportunity to synthesise evaluation findings into a higher-level view at policy or sector level. Great strides have therefore been made in terms of breadth. Early adopters such as the Western Cape Province (see Box 1), the Department of Trade and Industry (DTI) and the Department of Basic Education (DBE) have internalised systems and ensure that evaluations are aligned to departmental priorities. Box 1 presents one of the case studies, illustrating how the evaluation system has evolved at organisational level.

BOX 1: Case study of the Western Cape province.

The NES has thus widened from national to provincial and later to departmental evaluations, and it is operating at significant scale. As departments have undertaken their own evaluations, those selected in NEPs have become more strategic.

The evaluation of the NES found that the bulk of DPME’s budget for evaluation (77% in 2016/17 and 83% in 2015/16) is spent on funding evaluations, which typically cost R2–3 million, whilst proportionally less is spent on institutionalisation activities such as capacity building (0% in 2016/17 and 8% in 2015/16) and communication (1% in 2016/17 and 0.3% in 2015/16). The British Government’s Department for International Development (DFID) supported capacity building between 2012 and 2015 (DPME 2018b). To support institutionalisation, the evaluation recommended that the budget be spread more evenly. Respondents on both the supply side (evaluators) and the demand side (departments and provinces) noted that the evaluation process is lengthy and requires a considerable investment of time (DPME 2018b:16). The length of time taken is creating pressure to do evaluations internally and to do more rapid evaluations. A considerable amount of time is spent on pre-design and design, developing improvement plans and communicating results. Increasing bureaucracy in the supply chain process used to procure evaluations is further lengthening the pre-evaluation phase.

Organisation

Holvoet and Renard’s framework proposes a centrally located unit to manage evaluations. In South Africa’s case, at national level this is the ERU in DPME, which supports all NEP evaluations and coordinates the NES across government. Provincial offices of the premier (OTPs) play a similar role in provinces. The OTPs provide support and technical advice to departments as they progress through the evaluation ‘journey’, although in some provinces, such as KwaZulu-Natal, they have struggled to do this. There are also decentralised M&E units in departments and agencies, but few of these have significant capacity to support evaluations; only a few national departments, such as DBE, Social Development, and Rural Development and Land Reform, have specialist evaluation staff. At national level an Evaluation Technical Working Group (ETWG) of national and provincial evaluation champions was established to support the NES. Some offices of the premier, such as in the Western Cape, have established a provincial Evaluation Technical Working Group to support the provincial system (DPME 2018b). The OTP also coordinates progress reporting to DPME, assistance with improvement plans and provincial evaluation capacity building (see Box 1).9

Methodology

Evaluations focus on policies, programmes or systems. Historically, at national level and in some provinces, a call for evaluation proposals has been issued to departments, resulting in a mix of bottom-up proposals from departments and strategic proposals from DPME and National Treasury at national level, and from OTPs at provincial level (DPME 2018b:ix). These then go through a prioritisation and selection process.

The NES has characterised evaluations as design (analysing the design of the programme), diagnostic (analysing the problem and its root causes, i.e. ex ante), implementation (looking at how activities are translating into outputs and outcomes), impact (at outcome or impact level), economic or synthesis (DPME 2011:9). Most evaluations undertaken are implementation evaluations, as they provide more rapid feedback into policy (see Figure 1, drawn from the evaluation report). The evaluation found that the guidelines are utilised for standardisation of processes.10

FIGURE 1: National Evaluation Plan Evaluations by Type, 2012/13 to 2017/18.10

Capacity building

South Africa has defined the set of competencies required for evaluators and for government staff who manage evaluations (Podems, Goldman & Jacob 2013; DPME 2014). However, the capacity to undertake and to manage evaluations is weak.

Capacity building within the NES has included learning-by-doing (e.g. support by DPME evaluation staff or OTPs), the development of guidelines and templates by DPME, promotion of learning networks and forums, short courses and the development of an evaluation management standard to drive the establishment of evaluation capacity in departments. Evaluation courses have been developed and rolled out in collaboration with the Centre for Learning on Evaluation and Results for Anglophone Africa (CLEAR-AA) at the University of the Witwatersrand and, recently, with the National School of Government (NSG).

The evaluation found that templates and guidelines are considered helpful by both departments and provinces. Departments and provinces with less experience in evaluations found some of the guidelines difficult to implement without further support, whilst those that had been in the NES for longer recommended adding more flexibility to existing templates and guidelines to better suit their varied contexts. Respondents also expressed a need for additional guidelines for more complex evaluations (DPME 2018b:18).

Between 2012/13 and 2016/17, 1989 government staff members (mostly programme managers or M&E staff) undertook training on courses ranging from theory of change to introduction to evaluation. The investment in capacity building has been limited because DFID funding ended in 2015 and because of the constraints of moving the training function to the NSG. However, a set of trainings was rolled out with the NSG at the end of 2017 and in early 2018 (DPME training records quoted in DPME 2018b:19).

The courses have been well received, with respondents acknowledging the importance of ‘on-the-job’ training. Some wished to deepen their training with more advanced courses (DPME 2018b:19). An innovation has been a partnership with the University of Cape Town to train over 330 members of the top three levels of the public service in the importance of evidence (Goldman et al. 2018). This training for senior officials was seen by interviewees as helpful in advocating for evidence and evaluation, but it was suggested that training of programme managers is also needed. Such a course was piloted in 2018.

Some provinces and departments highlighted the need for additional staff to cover the evaluation function (DPME 2018b:19). However, fiscal constraints have led to a cap on appointing new staff, and the limited funding is affecting capacity building overall, not just human capacity.

Quality assurance

The quality assurance mechanisms of the NES are important for the credibility of the evaluations it produces.

The wide range of tools used to promote quality assurance includes: design clinics, where the initial theories of change for the programme and outline terms of reference (TORs) for NEP evaluations for the following year are developed; steering committees; support by DPME evaluation directors; guidelines; peer reviews; and quality assessment and scoring of completed evaluations through a contract with an independent service provider. There was general consensus from respondents that the system was useful, contributing to the production of better quality evaluations. Suggestions for improving its working in practice included using peer reviewers from the beginning of an evaluation and strengthening communication between reviewers, programme managers and the steering committee (DPME 2018b:20).

The quality assessment scores are reported when evaluations are tabled in Cabinet,11 giving ministers an idea of the validity and reliability of evaluation findings. The NES evaluation noted some evidence of questionable assessment scores. It is important that the quality assessment is credible, and there is a view that government officials should conduct the assessments.

With regard to evaluation standards, it is recognised that when conducting evaluations there have to be shared norms and standards, and a common understanding of exactly what is to be improved and how to measure it. This is crucial to ensure that there are good standards to benchmark against. The NES evaluation recommends that provincial departments work closely with DPME to strengthen alignment with national policy frameworks and M&E guidelines and standards (DPME 2018c:163).

Communications

Stakeholders such as academics, think tanks and political parties need access to the guidelines, evaluation reports and improvement plans. The evaluation found that there had been a considerable investment in communication. DPME has an evaluation communication strategy and has communicated results through the media, its electronic newsletter (Evaluation Update12) and publications such as policy briefs and annual reports.13 It has also presented widely on the NES at conferences and other forums. DPME sends completed evaluations to parliamentary portfolio committees and in most years presents on the NES to the chairs of portfolio committees. There have also been occasional presentations to parliamentary researchers (DPME 2018b:21–22; Goldman et al. 2015).

There is an evaluation section on the DPME website, including a publicly accessible national evaluation repository.14 In the Western Cape, evaluations can be accessed through the provincial project management system (Biz Brain), which also houses related documents such as the evaluation update, dictionary and guidelines. M&E officials in the Western Cape reported that having access to the evaluations conducted by other departments was very useful, as it showcased the work completed, its benefits, successes and lessons learnt.

Whilst a lot of work has been done on communication, respondents noted a number of areas that should be strengthened. These include more use of the media (e.g. radio) and more formal and informal sharing of learnings within the public sector (DPME 2018b:21–22).

Participation of other stakeholders – The broader ecosystem

One of the key elements of Holvoet and Renard’s (2007) framework is the participation of actors outside of government as part of the broader ecosystem. The architects of the NES have worked to build an ecosystem to support the NES. It has been challenging to determine the boundary of the ecosystem and to identify who the actors within the system are and how they relate to one another. Boundary partners include related government structures such as the Public Service Commission, whilst non-government actors include universities, SAMEA, CLEAR-AA, donors, civil society organisations, parliamentarians, private evaluators and so forth (Goldman et al. 2015).15

SAMEA and CLEAR-AA have played key roles in the development of the NES, from the development of the original NEPF to their participation in the Steering Committee for the evaluation of the NES. Universities deliver capacity development work and may also bid to undertake evaluations. Academic institutions also play an important learning function in the development of evaluation culture and practices, and some offer M&E courses (Goldman et al. 2015). In provinces, universities are important in providing peer reviewers and in supplementing evaluation budgets by co-funding evaluations or, in some cases, using student theses to undertake evaluations. The evaluation found that more work is needed to bring evidence brokers such as think tanks on board (DPME 2018b:30).

A view that came out strongly during the interviews is that the roles of DPME and other actors in the evaluation space are not always clear and there is not always a shared vision for the NES across centre of government institutions. The evaluation found that the role of DPME needs to be clarified whilst the roles of the Department of Public Service and Administration (DPSA), National Treasury and the NSG need to be strengthened (DPME 2018b:15).

The NES evaluation also offers important lessons for stakeholders beyond the principal audience of government institutions. Most important amongst these lessons is that the NES has an opportunity to play a role in strengthening accountability and deepening democracy in South Africa. South Africa is characterised by deep divisions and profound exclusion. In a noisy and tumultuous market place, approaches such as Deliberative Democratic Evaluation seek to make evaluation ‘an institution that stands apart, reliable in the accuracy and integrity of its claims’ (House & Howe 2000:4) but inclusive to enable dialogue and facilitate deliberation. Such dialogue could be promoted by drawing on the evidence base that is being constructed through the NES and which can inform ongoing and sustained engagements between the state and non-state actors in both formal, institutionalised civil society and in less structured formations. The evaluation of the NES found that ‘civil society is under-utilised and under-engaged in the NES’ (DPME 2018b:15).

Some departments also have good connections to private sector partners such as farmer organisations, who participate in evaluations by forming part of steering committees and commenting on the TORs for the evaluations (DPME 2018c:177).

Institutionalisation of the National Evaluation System

The reporting requirements from the Auditor General and National Treasury meant that the 2007 Government-Wide Monitoring and Evaluation System was largely compliance driven. After the establishment of DPME in 2010 the NEPF retained some of this accountability emphasis whilst seeking to promote a performance-oriented approach (Goldman et al. 2015). DPME avoided a legislative route that would make the M&E system mandatory. In terms of the NES, this meant a voluntary approach aimed at creating an enabling environment to encourage departments to engage proactively in the system. The broader intent was to shift behaviour of senior management away from a compliance orientation towards recognising the value of learning from evaluation findings to improve programme performance.

We have referred earlier to the extension of evaluations across the state, to provinces and departments, and the establishment of a range of systems to support evaluations. Lazaro (2015:110) in reviewing institutionalisation in Europe and Latin America mentions that ‘a certain degree of formal institutionalisation may help also to foster a predictable and regular evaluation practice that adheres to suitable quality standards that promote its effective use’.

Growing institutionalisation of evaluation can be seen in the complex system of plans, guidelines, standards and so on, and in the fact that all evaluations in the NEP are debated in Cabinet. However, the system does not carry the weight of legislation which, for example, underlies Treasury processes. To address this, a draft Planning Bill that includes some elements of M&E has been presented to Cabinet. However, extensive revisions are required and the Bill will be delayed beyond the May 2019 elections.

As mentioned earlier, over 330 officials from the top three levels of the public service have been trained in the use of evidence. The intent was to create a more conducive environment for using M&E evidence. There are examples of where this has had an effect, with National Treasury, the Department of Justice and Constitutional Development, and Home Affairs undertaking evaluations as a result of participation in the training.

Findings on use, impact and cost-effectiveness of the National Evaluation System

Use of evaluations

The NES has a focus on ensuring the use of evaluations. Findings are discussed with stakeholders and senior management. There is a process of dissemination through policy briefs and thematic workshops. Results of evaluations are presented to Cabinet, which gives weight to implementation. There is a formal follow-up process through management responses and improvement plans, and there are meant to be six-monthly progress reports for two years, all available on the repository (link provided earlier) (Goldman et al. 2015).

Key in evaluating the NES is the extent to which evaluations are being used (the wider outcome in the theory of change). Enablers of use were found to be the improvement plan system and the work of DPME. Respondents noted that DPME enhances the credibility of the evaluation and pushes the improvement plan forward. By late 2017, when the evaluation was underway, there were 25 NEP evaluations with an improvement plan in place. In some cases it has been problematic to obtain the six-monthly progress reports; as a result, use is not reflected adequately in DPME’s tracking system (DPME 2018b:22).

Table 2 summarises the instances of use16 observed in the departmental case studies and Table 3 those in the provincial case studies. The preliminary evidence for use of evaluations is encouraging. Instrumental use can be seen in seven out of nine cases. The second most frequently recorded use is process use, at five out of nine, implying that respondents felt they gained value from being part of an evaluation process. The evaluation found that departments and provinces appear to understand the value of evaluations and attempt to use them to inform decisions. However, this use has not yet been captured accurately through the system for monitoring improvement plans, because of erratic progress reports, and it is therefore not possible to reliably assert the extent of use. To support greater value, the improvement plan process requires greater adherence and a stronger system to track evaluation improvement plans. M&E legislation would assist.

TABLE 2: Impact and cost-effectiveness of the National Evaluation System within five case study national departments.
TABLE 3: Summary of types of use within four case study provinces.

Some respondents did signal the challenge of resourcing recommendations in evaluation improvement plans. For example, the very first evaluation on Early Childhood Development (ECD) recommended expansion of focus to include the first 1000 days from conception and a broader basket of services, both of which have significant financial implications (Davids et al. 2015). As a result, a costing exercise had to be performed to identify the costs of services and these had to be prioritised.

To respond to this issue, the evaluation categorised recommendations from a selection of improvement plans according to whether or not they had financial implications. For example, recommendations that relate to setting up a new unit, developing a new system, hiring additional staff or increasing funding have financial implications. Of the 400 recommendations in the 24 evaluations selected, 22% had financial implications. The bulk (70%) were non-financial recommendations, largely relating to strategy, policy and operations. These data suggest that financial constraints should only affect the implementation of the 22% of assessed recommendations with financial implications (DPME 2018b:22). Capacity constraints, on the other hand, would affect a far larger proportion of recommendations, which do have capacity implications in terms of staff and time. These are greatest in evaluations dealing with the economy.
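As a simple illustration of this categorisation exercise, the Python sketch below tallies the share of recommendations flagged as having financial implications. The records are invented for illustration only; the actual exercise covered 400 recommendations from 24 improvement plans.

    # Hypothetical illustration of categorising improvement-plan recommendations;
    # the entries below are invented, not drawn from actual improvement plans.
    recommendations = [
        {"text": "Establish a dedicated evaluation unit", "financial": True},
        {"text": "Revise the programme theory of change", "financial": False},
        {"text": "Increase funding for the conditional grant", "financial": True},
        {"text": "Clarify roles between national and provincial offices", "financial": False},
    ]

    with_financial = sum(1 for r in recommendations if r["financial"])
    share = with_financial / len(recommendations)
    print(f"{share:.0%} of recommendations have financial implications")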

The institutionalisation of evaluation should be a key factor in driving budgets and the efficiency and effectiveness of service delivery. At national level, evaluations have influenced budgets; for example, the ECD evaluation resulted in a new ECD policy, which was approved by Cabinet, and an ECD conditional grant (DPME 2017:19). However, a formal link with the planning and budget process is only starting. In 2016, evaluation results were used for the first time in the national budget process. Whilst a number of evaluations are being conducted at provincial level, evaluation evidence has not surfaced in budget conversations, nor has it been mentioned in pre-budget conversations when the fiscal framework for the provincial sphere is being discussed,17 with the exception of the Western Cape. A key recommendation18 of the evaluation is that evaluation should inform the budget process, so that departments have to engage vigorously with performance outcomes and cost-benefit analyses.

Unintended benefits of the National Evaluation System

The key unintended benefits of the NES came from process use. The broader unintended benefits reported by the departments and provinces that participated in this study were: (1) an improved strategic vision as a result of using theories of change; (2) the use of ‘good practice’ in internal research after having been exposed to external evaluations; (3) an enhanced use of evaluative thinking; and (4) the need to harmonise learning across structures (DPME 2018b:23).

Cost-effectiveness

The evaluators calculated cost-benefit ratios for three sample evaluations, selected on the basis of the completeness of the data available on their costs and benefits. The ratios were 1:719, 1:1020 and 1:13.21 In these instances the costs of the evaluations were heavily outweighed by the benefits, implying that investing in evaluation is very beneficial for government. The evaluators concluded that whilst there is certainly value in the system, tracking the costs and benefits of the system as a whole and of individual evaluations needs to be done more systematically so that the value of the system can be accurately assessed (DPME 2018b:18).

Conclusions and recommendations

Recommendations and improvement plan

The evaluation found that considerable progress has been made in terms of establishing the NES particularly through evaluation plans, capacity building, quality assurance mechanisms and communication. However, there is room for improvement. The recommendations from the evaluation cover five main areas – strengthening the evaluation mandate, making sure budgets are available for evaluations, capacity development, managing and tracking evaluations, and strengthening use of evaluation results through communication and improvement plans.

Evaluation mandate (DPME 2018b:xi)

The evaluation suggests that evaluation should be embedded in legislation as a mandatory component of public management and improvement. It is recommended that planning and budgeting should draw systematically from the results of monitoring and evaluation, and some key government systems are suggested for embedding this, for example the performance agreements of senior managers.

Another recommendation is to ensure that evaluations fit into policy and programme lifecycles, with new phases of programmes not funded until an evaluation of the previous phase is completed, and with impact evaluations designed into a policy or programme from the start.

In addition, there is some confusion about mandates, and a recommendation was made that the roles of key stakeholders in the evaluation ecosystem be clarified.

Budgeting for evaluative processes (DPME 2018b:xii)

From a cost perspective, the bulk of DPME’s evaluation budget has been allocated to conducting evaluations. Budgeting for evaluations is a challenge. Treasury has indicated that there will be no separate budget for evaluations and that they should be funded from programme budgets. Hence, it is recommended that programmes allocate a percentage of programme budgets for evaluation or M&E, typically in the range of 0.5%–5% depending on the size of the programme (for example, 0.5% of a R500 million programme would be R2.5 million, in line with the typical R2–3 million cost of an evaluation).

One way to make evaluations less costly is to conduct rapid evaluative exercises internally or to share the costs. A recommendation is for DPME to develop guidelines for rapid exercises and for DPME/national departments to share evaluation plans across spheres of government so that evaluation resources can be pooled across government departments for evaluations that examine similar programmes.

Another challenge is having staff dedicated to evaluation. M&E units should have at least one evaluation specialist. The recommendation is for the DPSA, with technical input from DPME, to develop competences and job descriptions for specific evaluation posts in standard M&E units.

Challenges have been experienced with the supply chain process when trying to procure service providers; this is often beyond the control of the commissioning departments and can cause major delays.

Capacity development (DPME 2018b:xii)

People competency and capacity also need to be strengthened, with particular reference to how to manage and commission external evaluations. The role of DPME in institutionalising evaluation across government is clear in the NEPF. However, in general DPME has not been clear on its role in the institutionalisation of M&E, which is why it proved difficult to resource capacity development with internal funds. The draft Planning Bill does require institutionalisation. As a result, it is recommended that DPME strengthen its investment in capacity development, including working with Treasury and the Public Sector Education and Training Authority (PSETA) to ensure that a budget is available for courses and learnerships, with additional dedicated staff time in DPME to focus on capacity development.

For the public service to have evaluation specialists, courses in evaluation, and not just generic M&E courses, must be available at universities. The recommendation in the NES evaluation is for DPME to work with the National School of Government, DPSA, SAMEA and universities to ensure that suitable post-graduate courses and continuous professional development opportunities are available for evaluation professionals within the public sector, and to work with stakeholders to establish a community of practice for learning and sharing around evaluation for government. There is also potential to use rapid, internally conducted evaluations to build evaluation capacity in government, whilst bearing in mind the need for independence in major evaluations.

A key challenge is to build a more diverse (black) group of evaluators, and DPME needs to use both capacity development and procurement tools to ensure that black evaluators are brought into the system and to encourage a broader variety of universities to participate in the system.

Managing and tracking evaluations (DPME 2018b:xii)

One challenge raised in the evaluation is that the quality of foundational documents and TORs needs to be strengthened. This requires expanding the training, refining DPME’s TOR guideline and applying the guideline more consistently.

The management information system is the ‘backbone’ of the NES, and a recommendation is that it should be used across all evaluations in government, not only for the NEP. This will allow transparent monitoring of the state of the system as well as the extraction of status reports. It can also help DPME to use the tracking system to ensure that departments are following up on improvement plans and reporting to Cabinet, and to name and shame departments that are not doing so. However, this requires that all national and provincial departments do follow up on improvement plans, which will take a lot of work to ensure compliance.

Strengthening use through communication and improvement plans (DPME 2018b:xii–xiii)

Ultimately, the test of the system is that evaluation findings and recommendations are used. The reporting on improvement plans should enable this, but some departments are reluctant to report and, as mentioned previously, it is suggested that DPME name and shame these departments. The evaluation also recommended that tracking should extend beyond the two years of reporting specified in the guideline.

Apart from the formal route to Cabinet, evaluations need to inform wider societal processes. This depends on stakeholders knowing about and being able to access evaluation reports. The recommendation is for DPME, provinces and departments to allocate significant resources, both financial and human, to evaluation communication, to ensure that stakeholders are aware of the findings and that full value is obtained from the investment.

The National Evaluation System improvement plan

A workshop with stakeholders was held on 26–27 March 2018 to develop an improvement plan for the NES. This takes a slightly different structure from the recommendations, with the following four improvement objectives:

  • The Planning Bill incorporates evaluations as a mandatory component of the public management system to enable institutionalisation of evaluations in the public sector and state-owned enterprises through streamlining the NES with planning and budgeting processes. The NEPF is revised in line with this.
  • Improve the quality of evaluations through consistent application of strengthened processes, guidelines and tools across spheres of government and state-owned enterprises.
  • Improved capacity in managing and undertaking evaluations through formal training and informal learning opportunities, including empowerment measures within the procurement system.
  • Evaluation improvement plans are implemented and tracked, and the evaluation reports used as reliable sources of evidence and communication to inform planning and decision-making in and outside of government (DPME 2018e).

The evaluation report and improvement plan need to be formally submitted by DPME to Cabinet, and the first six-monthly progress report was due 12 months after the reports were approved, in February 2019.

Implications for wider stakeholders

The evaluations offer tremendous opportunities for interest groups to hold government to account and to push for improvements in specific areas based on the evidence derived from these evaluations. However, stakeholders outside of government are often unaware of this opportunity and are therefore unlikely to use it.

The evaluation of the NES makes it clear that the DPME needs to broaden and deepen its view of the operations of the governance system, how policy is made and how bias and interests impact on the use of evidence. Recent thinking on how evidence is ‘governed’ suggests that DPME should think more deeply about how the evidence being generated by the NES is used in the policymaking and public domains (Parkhurst 2017). In this regard, the recommendation that the role of civil society and of think tanks in particular be clarified is especially pertinent.

A number of the other recommendations made in the evaluation report raise important issues from a system-strengthening perspective. The NES evaluation highlights the need for DPME and SAMEA to clarify their respective roles in building and growing the national evaluation ecosystem. This is an important, long-term endeavour that will need sustained engagement particularly around the twin imperatives of transforming the pool of professional evaluators and strengthening the whole evidence-utilisation pipeline from policy and strategy development through to performance improvement.

Lessons learned about evaluating a National Evaluation System

Implications for the relationship between evidence and policy

The impact of the NES on decision-making is still a work in progress. If we see the institutionalisation of evaluation in terms of the distinct elements outlined by Gaarder and Briceño (2010:17), with trade-offs between independence and policy influence, then the first five years of the NES have accomplished more in terms of institutionalisation from an M&E perspective, which has contributed to making the results of evaluations more policy-influential (Table 4).

TABLE 4: Tracking performance of government based M&E systems (Gaarder & Briceño 2010).

Whilst progress has been made, the predominant culture in the public service is still very compliance driven, with fear of the Auditor General holding sway, and there is some way to go before the generation and use of evidence are seen as a high priority (Paine Cronin & Sadan 2015; Umlaw & Chitepo 2015). However, without a strong evaluation culture that recognises how evaluation systems are a function of values, practices and institutions, and without the involvement of wider society, the level of institutionalisation will be limited (Lazaro 2015). Without legislation to safeguard the system, it remains subject to changes of ministers and of management, and its sustainability is potentially at risk.

Acknowledgements

Since the beginning of the national evaluation system, the Department of Planning, Monitoring and Evaluation has had support from the UK’s Department for International Development, GIZ, the Programme to Support Pro-Poor Policy Development, Twende Mbele African M&E partnership, CLEAR Anglophone Africa and the International Initiative for Impact Evaluation (3ie).

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

I.G. (formerly Department of Planning, Monitoring and Evaluation, DPME, now CLEAR-AA) led the South African NES from inception to mid-2018, as well as development of this article. S.D. supported the evaluation from DPME. C.N.D. and S.T. have led the evaluation system in DBE, and they have been the founding partners in Twende Mbele. Z.I. has led the development of the provincial evaluation system in the Western Cape and A.G. in Gauteng Province but later joined the Department of Women. L.S. has led CLEAR Anglophone Africa (AA) from 2015 to 2019 and has been involved in the NES from that time. T.M. is responsible for the evaluation system in the Department of Rural Development and Land Reform. All of the above are members of the National Evaluation Technical Working Group and they were nominated to be on the evaluation steering committee. C.A. and G.W. were nominated by National Treasury to be on the steering committee. D.F. is the outgoing chair of SAMEA and now Director of CLEAR-AA. C.W. has been sharing the experience of South Africa with our partners in Benin and Uganda. A.W. and J.R. were the lead staff from Genesis on the evaluation. All of them have contributed to the article with I.G. undertaking overall editing.

Ethical considerations

This article followed all ethical standards for research without direct contact with human or animal subjects. It reports on an evaluation conducted under South Africa’s national evaluation system.

Funding

The evaluation was funded by the Department of Planning, Monitoring and Evaluation.

Data availability statement

The evaluation report will in due course be available on the DPME website. The data themselves are held by DPME.

Disclaimer

The research findings come directly from the evaluation report. The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

References

Brousselle, A. & Buregeya, J.-M., 2018, ‘Theory-based evaluations: Framing the existence of a new theory in evaluation and the rise of the 5th generation’, Evaluation 24, 153–168. https://doi.org/10.1177/1356389018765487

Davids, M., Samuels, M.-L., September, R., Moeng, T.L., Richter, L., Mabogoane, T.W. et al., 2015, ‘The pilot evaluation for the National Evaluation System in South Africa – A diagnostic review of early childhood development’, African Evaluation Journal 3(1), 7, https://doi.org/10.4102/aej.v3i1.141

Department of Performance Monitoring and Evaluation (DPME), 2011, National evaluation policy framework, Department of Performance Monitoring and Evaluation, Pretoria.

Department of Performance Monitoring and Evaluation (DPME), 2013, Implementation evaluation of the business process services incentive scheme programme: Final report, Department of Performance Monitoring and Evaluation, Pretoria, viewed 31 May 2019, from https://evaluations.dpme.gov.za/evaluations/301.

Department of Performance Monitoring and Evaluation (DPME), 2014, Evaluation competency framework for government, version 2, Department of Performance Monitoring and Evaluation, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2016, Funza Lushaka Bursary programme implementation evaluation; Policy summary, executive summary and summary report, Department of Planning, Monitoring and Evaluation, Pretoria, viewed 31 May 2019, from https://evaluations.dpme.gov.za/evaluations/514.

Department of Planning, Monitoring and Evaluation (DPME), 2017, Annual report on national evaluation system 2016–17, Department of Planning, Monitoring and Evaluation, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2018a, Annual Report on the NES 2016–17, Department of Planning, Monitoring and Evaluation, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2018b, Report on the evaluation of the national evaluation system – Summary report, Department of Planning, Monitoring and Evaluation, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2018c, Report on the evaluation of the national evaluation system – Full report, Department of Planning, Monitoring and Evaluation, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2018d, Presentation on the evaluation of the national evaluation system, Department of Planning, Monitoring and Evaluation, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2018e, Improvement plan for national evaluation system, Department of Planning, Monitoring and Evaluation, Pretoria.

Gaarder, M. & Briceño, B., 2010, ‘Institutionalisation of government evaluation: Balancing trade-offs’, 3ie Working Paper Series, pp. 1–22.

Goldman, I., Mathe, J.E., Jacob, C., Hercules, A., Amisi, M., Buthelezi, T. et al., 2015, ‘Developing South Africa’s national evaluation policy and system: First lessons learned’, African Evaluation Journal 3(1), Art. #107, 9. http://doi.org/10.4102/aej.v3i1.107

Goldman, I., Byamugisha, A., Gounou, A., Smith, L.R., Ntakumba, S., Lubanga, T. et al., 2018, ‘The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa’, African Evaluation Journal 6 (1), Art #253, https://doi.org/10.4102/aej.v6i1.253

Holvoet, N. & Renard, R., 2007, ‘Monitoring and evaluation under the PRSP: Solid rock or quick sand?’ Evaluation and Program Planning 30, 66–81. https://doi.org/10.1016/j.evalprogplan.2006.09.002

House, E. & Howe, K., 2000, ‘Deliberative democratic evaluation’, in New directions in evaluation, No. 85 Spring 2000, Jossey-Bass Publishers, San Francisco, CA.

Johnson, K., Greenseid, L.O., Toal, S.A., King, J.A., Lawrenz, F. & Volkov, B., 2009, ‘Research on evaluation use: A review of the empirical literature from 1986 to 2005’, American Journal of Evaluation 30, 377–410. https://doi.org/10.1177/1098214009341660

Lazaro, B., 2015, Comparative study on the institutionalisation of evaluation in Europe and Latin America, Programme for Social Cohesion in Latin America, EuroSocial Programme, Madrid.

Ledermann, S., 2012, ‘Exploring the necessary conditions for evaluation use in program change’, American Journal of Evaluation 33, 159–178. https://doi.org/10.1177/1098214011411573

Leslie, M., Moodley, N., Goldman, I., Jacob, C., Podems, D., Everett, M. et al., 2015, ‘Developing evaluation standards and assessing evaluation quality’, African Evaluation Journal 3(1), Art. #112, 13. https://doi.org/10.4102/aej.v3i1.112

Minister of DPME, 2018, Budget Vote Address By Dr Nkosazana Dlamini-Zuma, MP, Minister In The Presidency for Planning, Monitoring And Evaluation, 11 May 2018, viewed 13 August 2018, from http://www.dpme.gov.za/news/Pages/Budget-Vote-Address-By-Dr-Nkosazana-Dlamini-Zuma,-MP,-Minister-In-The-Presidency-Planning,-Monitoring-And-Evaluation.aspx.

Paine Cronin, G. & Sadan, M., 2015, ‘Use of evidence in policy making in South Africa: An exploratory study of attitudes of senior government officials’, African Evaluation Journal 3(1), a145. https://doi.org/10.4102/aej.v3i1.145

Parkhurst, J., 2017, The politics of evidence: From evidence-based policy to the good governance of evidence, Routledge Studies in Governance and Public Policy, Routledge, London.

Phillips, S., Goldman, I., Gasa, N., Akhalwaya, I. & Leon, B., 2014, ‘A focus on M&E of results: An example from the Presidency, South Africa’, Journal of Development Effectiveness 6(4), 1–21. https://doi.org/10.1080/19439342.2014.966453

Podems, D., Goldman, I. & Jacob, C., 2013, ‘Evaluator competencies: The South African Government experience’, Canadian Journal of Program Evaluation 28, 71–85.

Troskie, D., 2017, ‘Strengthening government through evaluation: The evaluation journey of a provincial agriculture department’, in Democratic evaluation and democracy: Exploring the reality, p. 243, Information Age Publishing Inc, Charlotte, NC.

Umlaw, F. & Chitepo, N., 2015, ‘State and use of monitoring and evaluation systems in national and provincial departments’, African Evaluation Journal 3(1), 1–15. https://doi.org/10.4102/aej.v3i1.134

WCDoA, 2014, Evaluation of the impact of agricultural learnership in the Western Cape, Western Cape Department of Agriculture, Elsenburg, viewed 31 May 2019, from https://evaluations.dpme.gov.za/evaluations/437.

Footnotes

1. These are not yet public but will be available from https://evaluations.dpme.gov.za/evaluations.aspx

2. Symbolic use refers to examples when a person uses the mere existence of an evaluation, rather than any aspect of its results, to persuade or to convince (Johnson et al. 2009).

3. Conceptual use is the type of use where an evaluation results in an improved understanding of the intervention and its context or a change in the conception of the evaluand (Ledermann 2012).

4. When evaluations are used instrumentally, the recommendations and findings generated could inform decision-making and lead to changes in the intervention (Ledermann 2012).

5. Shows to the sections of the article.

6. Department of Basic Education, Department of Human Settlements, Department of Justice and Constitutional Development, Department of Social Development and the Department of Trade and Industry.

7. Provincial: Eastern Cape (early majority); Gauteng (early adopter); Limpopo (early majority); and Western Cape (innovator).

8. The names and references are provided later.

9. For example, see https://www.westerncape.gov.za/sites/www.westerncape.gov.za/files/provincial_evaluation_plan_2017_2018_final.pdf.

10. Note that synthesis evaluations only happened late, where a synthesis has been done of existing evaluations in a sector. Two of these have been done: one on the Human Settlements sector and two on support to smallholder farmers. These are not yet public.

11. Score is out of 5, where 3 is considered a reliable score. Only 3 evaluation reports have scored lower than this. Leslie et al. (2015) discuss the quality assessment system.

12. Some of these are available at https://evaluations.dpme.gov.za/newsletters.aspx. Thirty-one editions of this two–three monthly newsletter have been published (author’s own records).

13. DPME produced annual reports from 2013/14 to 2016/17, with the primary purpose of sharing evaluation findings, as well as updating stakeholders on the development of the system. These are available on DPME’s website.

14. https://evaluations.dpme.gov.za/evaluations.aspx.

15. A stakeholder map was produced in the evaluation with ratings of their degree of involvement (DPME 2018b:14).

16. See definitions of use in Section 2.

17. Christopher Adams, National Treasury, personal communication.

18. Senior member of the Chief Operations Unit at the Department of Social Development.

19. Evaluation of the Impact of Agricultural Learnerships in the Western Cape.

20. Evaluation of the Funza Lushaka Scheme for DBE (DPME 2016).

21. Evaluation of the BPS Programme for the dti (DPME 2013).


 
