About the Author(s)


Ian Goldman
Department of Planning, Monitoring and Evaluation, South Africa

Albert Byamugisha
Office of the Prime Minister, Uganda

Abdoulaye Gounou
Bureau of Public Policies Evaluation and Government Action Analysis, Presidency, Benin

Laila R. Smith
CLEAR Anglophone Africa, South Africa

Stanley Ntakumba
Department of Planning, Monitoring and Evaluation, South Africa

Timothy Lubanga
Office of the Prime Minister, Uganda

Damase Sossou
Public Policies Evaluation, Presidency, Benin

Karen Rot-Munstermann
Independent Development Evaluation, African Development Bank, Côte d’Ivoire


Citation


Goldman, I., Byamugisha, A., Gounou, A., Smith, L.R., Ntakumba, S., Lubanga, T., Sossou, D. & Rot-Munstermann, K., 2018, ‘The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa’, African Evaluation Journal 6(1), a253. https://doi.org/10.4102/aej.v6i1.253

Original Research

The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa

Ian Goldman, Albert Byamugisha, Abdoulaye Gounou, Laila R. Smith, Stanley Ntakumba, Timothy Lubanga, Damase Sossou, Karen Rot-Munstermann

Received: 18 July 2017; Accepted: 23 Oct. 2017; Published: 29 Mar. 2018

Copyright: © 2018. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Evaluation is not widespread in Africa, particularly evaluations instigated by governments rather than donors. However, since 2007 an important policy experiment has been emerging in South Africa, Benin and Uganda, which have all implemented national evaluation systems. These three countries, along with the Centre for Learning on Evaluation and Results (CLEAR) Anglophone Africa and the African Development Bank, are partners in a pioneering African partnership called Twende Mbele, funded by the United Kingdom’s Department for International Development (DFID) and the Hewlett Foundation, which aims to jointly strengthen monitoring and evaluation (M&E) systems and to work with other countries to develop M&E capacity and share experiences.

Objectives: This article documents the experience of these three countries and summarises the progress made in deepening and widening their national evaluation systems and some of the cross-cutting lessons emerging at an early stage of the Twende Mbele partnership.

Method: The article draws from reports from each of the countries, as well as work undertaken for the evaluation of the South African national evaluation system.

Results and conclusions: Initial lessons include the importance of a central unit to drive the evaluation system, of developing a national evaluation policy, of prioritising evaluations through an evaluation agenda or plan and of taking evaluation to subnational levels. The countries are exploring the role of non-state actors, and there are increasing moves to involve Parliament. Key challenges include the difficulty of embedding a learning approach in government, capacity constraints and ensuring follow-up. These lessons are being used to support other countries seeking to establish national evaluation systems, such as Ghana, Kenya and Niger.

Introduction

Monitoring and evaluation has been seen as an important way to improve the performance of the public sector. Evaluation in particular has been seen as a route to strengthening the effectiveness, efficiency and impact of government policies and programmes, as well as accountability for them (Department of Performance [later Planning], Monitoring and Evaluation [DPME] 2011a). We define a national evaluation system (NES) as a national system which guides how evaluations are selected, implemented and used. NESs have developed in advanced economies since the 1980s, in Latin America since the 1990s and in Africa from 2007, starting with Benin, with Uganda and South Africa following in 2011.

In March 2012 South Africa’s DPME organised a workshop with the Centre for Learning on Evaluation and Results for Anglophone Africa (CLEAR AA) that included seven countries to explore their national monitoring and evaluation (M&E) systems (CLEAR/DPME 2012). As a result, Benin, Uganda and South Africa realised they were on common trajectories in developing a NES, and an informal partnership and dialogue emerged where examples of policies, guidelines, etc., were exchanged, and partners attended each other’s events such as evaluation weeks, trainings, etc. Meanwhile, CLEAR AA was establishing itself in the region to strengthen M&E practice, and the African Development Bank (AfDB) was working with Ethiopia and Tanzania to build evaluation systems.

The Department for International Development (DFID) agreed to support these countries through a peer learning programme, Twende Mbele, to promote the use of M&E as a tool for improving government performance and accountability in Africa. The initial Twende Mbele partners include Benin, Uganda, South Africa, CLEAR AA and Independent Development Evaluation at the AfDB. This partnership formally started in January 2016 and involves collaboration, capacity development and sharing of experience with other African countries. In 2017, the Hewlett Foundation also started funding Twende Mbele.

This article seeks to document the state of M&E in these countries at this early stage of the partnership, and the lessons emerging so far.

An emerging analytical framework

The literature on NESs remains predominantly authored by Western scholars. For example, the International Atlas of Evaluation (Furubo, Rist & Sandahl 2002) largely refers to Organisation for Economic Co-operation and Development (OECD) countries. A recent review commissioned by CLEAR AA of evaluations across 12 African countries over the past 10 years confirms this trend (Mouton & Wildschut 2017). This makes it a challenge to find useful frameworks in the existing literature that speak to emerging evaluation trends in an African context. One such framework, discussed by Holvoet and Renard (2007), applies to a developing country context where poverty-reduction efforts are the focus. The framework illustrates the key features of an effective NES and is centred on the dimensions of state construction, as these systems in Africa tend to still be fairly nascent in their development. This framework has been used for the current evaluation of South Africa’s NES (Genesis 2016); Figure 1 shows the adapted version that we will use in this article.

FIGURE 1: Six characteristics of an effective national evaluation system (Genesis 2016).

The first element in Holvoet and Renard’s framework is policy. The elements suggested include the presence of an evaluation plan, clear differentiation between the roles of monitoring and evaluation, a system that seeks to ensure the autonomy and impartiality of evaluators, and one that feeds findings back into use and into planning and budgeting.

The framework is wider than evaluation and so methodology includes selection of indicators, how evaluations are selected and priorities established, whether there is some form of programme theory or causal chain in the programmes, and in the evaluations, which methodologies are used and how data are collected.

In terms of organisation, the factors suggested include coordination of the system, the roles of the national statistical office, line ministries and decentralised levels of government, and how the system links with projects.

In terms of capacity, factors include acknowledgement of the problem and a capacity building plan.

A key differentiator is the extent of participation in the system by actors outside government, including Parliament, civil society and donors. This is important for accountability and, where donors play a significant role, for integrating them into the system.

The quality of the product and process is important, as is how these feed into use internally and externally.

This framework is used to describe the development of the evaluation systems in each country in the sections ‘Development of the Benin national evaluation system’, ‘Development of the Ugandan national evaluation system’ and ‘Development of the South African national evaluation system’, and then the cross-cutting issues drawn out in ‘Emerging findings regarding national evaluation systems’ section. The key elements of the framework in each country are shown in italics.

Development of the Benin national evaluation system

Benin has two levels of government, with 20 national departments (such as the Ministry of Planning and Development) and 77 local governments and/or municipalities. Historically, public policies and programmes in Benin have not been evidence-based and there has been no culture of accountability, except to fulfil conditionalities imposed by development partners. This has resulted in policies, programmes and projects that have either not been successful or whose effectiveness has not been assessed.

To improve government performance, in 2007 Benin initiated a process of evaluation of public programmes under the aegis of the Presidency. In terms of organisational characteristics shown in Figure 1, in 2008, a Bureau of Public Policies Evaluation (BEPP) was established (now the Bureau of Public Policies Evaluation and Government Action Analysis, BEPPAAG), hosted by the General Secretariat of the Presidency. Its role is to establish and lead the NES, ensure evaluation becomes a strategic management tool for development and commission evaluations whether demanded by donors, national government or by the local government. BEPPAAG had only four staff as of February 2017, so its capacity is limited. In addition, M&E units exist in both national departments and municipalities.

Table 1 summarises the timeline of emergence of the system. A ministry was created in 2007 with an evaluation mandate but evaluations only started in 2010. Approximately 15 evaluations have been started since 2010 and 14 have been completed, of which one is an impact evaluation and the others are implementation and/or process evaluations. These include evaluations of sectoral projects, multisectoral programmes and public policies in decentralisation, power, agriculture, health, water and energy.

TABLE 1: Timeline for the development of Benin’s national evaluation system.

In 2011, the Government of Benin adopted a 10-year (2012–2021) national evaluation policy (see policy characteristics in Figure 1) to promote learning towards improving management and decision-making, and to ensure that government is accountable for its actions. It directs the evaluation system in government and stipulates the roles of stakeholders in evaluation at central and local government levels (BEPP 2012).

An institutional framework has been established defining the mechanisms for conducting evaluations, including guidance on selecting evaluations and structures, engagement of stakeholders, dissemination of results and the monitoring of implementation of recommendations. To assist with impartiality, independent service providers, whether universities or consulting companies, undertake the evaluations. A steering committee is established for each evaluation to support it from terms of reference (ToR) to final report. To ensure the quality (characteristic six in Figure 1) of evaluations, a national evaluation guideline was finalised in April 2016.

In terms of capacity (characteristic four in Figure 1), a national evaluation board (NEB) has been created to promote capacity building and evaluation practice. Biennial capacity building events, the Benin Evaluation Days, were held in 2010, 2012, 2014 and 2016, bringing together national and local stakeholders with international evaluation experts from partner countries such as Uganda, South Africa and Canada, and from international bodies such as UNDP. Training was provided during the evaluation days, and over 200 staff in central, regional and local bodies have been trained in evaluation.

The NEB also promotes participation (characteristic five in Figure 1), as it has nine members representing government bodies, universities and civil society, including the Ministries of Planning and Development, and Economy and Finance; the Scientific Council of National Universities; the Benin Network of Monitoring and Evaluation; and an anti-corruption association (Front des Organisations Nationales contre la Corruption, FONAC). It advises the government on the development of evaluation at national, departmental and municipal level. It is instrumental in ensuring the independence and institutionalisation of the evaluation process. Parliament is not represented in the NEB for now, but the board is being restructured to involve Members of Parliament (MPs). Parliament has the power to request and to receive evaluation reports.

A study was undertaken by BEPPAAG on the quality and use (characteristic six in Figure 1) of nine evaluations commissioned from 2010 to 2014. One of the key findings was good ownership of the recommendations by implementing agencies. Approximately 80% of the recommendations (across all nine evaluations) have led to the development of implementation plans. Approximately 82% of the recommendations led to specific changes (49% policy review, 10% institutional change, 10% new projects and 15% other short-term measures). However, it remains an ongoing challenge to ensure the use of evaluation findings for policy improvement and better implementation.

Development of the Ugandan national evaluation system

Uganda has a two-level government structure with 22 central government ministries and 124 agencies (such as the National Planning Authority), 115 district local governments and one city, Kampala.1 Service delivery is largely decentralised, with most frontline services such as agriculture, education or health delivered through lower level local governments. The introduction of a National Integrated Monitoring and Evaluation Strategy (NIMES) in 2005 sought to strengthen performance assessment in the public sector (Office of the Prime Minister [OPM] 2006). After 2007, a key focus became achieving results through the efficient and effective delivery of key public services, which required effective measurement and analysis of their performance. Before 2010, the routine monitoring of spending and results was not well embedded across the public service; management information systems existed in few ministries; annual sector reviews covered less than one-third of the sectors; and the utilisation of data to strengthen performance and accountability was generally weak. There was little regular evaluation of public policies and programmes, with the majority of evaluations commissioned and managed by development partners (OPM 2009) and only 10% of public investments evaluated (OPM 2010). This suggested that lessons were not being learned about which investments were successful and which were not, and hence were not informing policy-making.

The overall M&E policy (characteristic one in Figure 1), developed and approved in 2012/2013, updated the NIMES of 2006 and governs evaluations (Government of Uganda 2013). The main actors include the Office of the President, Office of the Prime Minister, Ministry of Finance, National Planning Authority, Auditor General, Ministry of Local Government, Ministry of Public Service and local governments.

In terms of organisation (characteristic three in Figure 1), a Department of Monitoring and Evaluation was created in the OPM in 1998 with two staff, rising to 15 by 2011, and recently upgraded to a Directorate headed by a Director with 25 staff in three departments and one division. The Directorate of Monitoring and Evaluation promotes efficiency and effectiveness in the wider public service by keeping track of the performance of all government policies, projects and programmes in local governments and in government ministries, departments, agencies and other public institutions that draw resources from the government budget, and by providing information to guide strategic decision-making. It is responsible for coordinating M&E across government. A Government Evaluation Facility (GEF) was established in OPM in 2013 to (1) design, conduct, commission and disseminate evaluations of public policies and major public investments and (2) oversee improvements in the quality and utility of evaluations conducted across government at a decentralised level. The GEF is financed through OPM’s annual budget, with financing for evaluations supported by an increased allocation from the Treasury and matching funds from development partners such as DFID. The GEF (see Table 2) has a 3-year rolling agenda (plan) of topics for evaluation, approved by the Cabinet every 2–3 years; a virtual fund to finance public policy and investment evaluations (government and donor resources); and standards and guidelines for the design, implementation and dissemination of evaluation findings.

TABLE 2: Timeline for development of Uganda’s national evaluation system.

Table 2 shows that the first M&E policy was approved in 2012/2013, with a government evaluation agenda or plan including eight evaluations. An implementation plan was developed in 2013/2014, along with the first evaluations through the facility.

Evaluations can be led by the OPM, a sector ministry or agency. While the lead institution is responsible for the design, management and dissemination of evaluation, the evaluation is undertaken by independent external evaluators, some Ugandan, some international, to ensure the integrity and independence of the evaluation. Evaluation standards were developed in 2012 with the Uganda Evaluation Association to guide the design, conduct, management and dissemination of key national evaluations. These preserve this independence and ensure that the findings and recommendations of all evaluations are unaltered by the lead institution or other members of the Evaluation Sub-Committee (ESC).

To ensure wider participation, an ESC was established to provide management and oversight support in the implementation of the GEF. The Committee includes a range of key state actors, and non-state actors from academia, civil society, development partners and government-financed research institutions. This collaboration has been shown to have contributed to the effectiveness of the system (Rider Smith, Nuwamanya & Nabbumba 2010).

In terms of use (characteristic six in Figure 1), the Prime Minister’s Office is required to provide 6-monthly briefings to the Cabinet or a designated Cabinet Sub-Committee on the status of evaluations underway and on the findings of completed evaluations. The National Policy on Public Sector Monitoring and Evaluation provides a clear framework for strengthening the coverage and timeliness of the assessment of public interventions. It requires government ministries to respond to the findings and recommendations of independent evaluations. A government response and implementation tracking mechanism developed by the OPM has been put in place to establish how many recommendations from evaluations have been implemented. So far, follow-up by OPM has found that about 30% of recommendations are being implemented.

Development of the South African national evaluation system

South Africa has a semi-federal system, with a national government comprising 46 national departments, 9 provincial governments and 276 local governments. It is semi-federal in that there are autonomous provincial and local governments, but they do not have the powers of, for example, provinces in Canada or states in the United States.

In South Africa, dissatisfaction with the delivery of services to poor people led to political tension in the mid-2000s. In 2009, a new administration entered office that saw M&E as a way to improve service delivery. In terms of organisation (characteristic three in Figure 1), a Ministry and Department of Performance (later, Planning) Monitoring and Evaluation (DPME) were established in 2009 and 2010, respectively. DPME has implemented a variety of monitoring and evaluation systems to support implementation, ranging from monitoring of national priority outcomes and the quality of management practices to unannounced visits to frontline facilities. In 2011, an Evaluation and Research Unit (ERU) was established in DPME to develop and run the evaluation system; as of December 2016 it had 16 staff. Phillips et al. (2014) discuss the development of DPME, and Goldman et al. (2015) the emergence of the NES from 2011.

The National Evaluation Policy Framework (NEPF) (see characteristic one in Figure 1) was approved by the Cabinet in November 2011 (DPME 2011a). It foresaw a focus on priority national evaluations through a National Evaluation Plan (NEP), later widening to provinces with provincial evaluation plans and, later still, departmental evaluation plans. Table 3 summarises the timeline of emergence of the system following approval of the NEPF. Goldman et al. (2015) describe this development.

TABLE 3: Timeline from 2012 for the development of South Africa’s national evaluation system.

Table 3 gives the timeline of key elements of South Africa’s NES, following from adoption of the NEPF in 2011. The first pilot evaluation started in parallel with the development of the NEPF (Davids et al. 2015), with between 8 and 15 evaluations conducted per year under NEPs. The first provincial evaluation plans were piloted in 2012/2013 and 2013/2014 and the system has gradually widened.

From the beginning, South Africa has taken a utilisation approach to ensure that evaluation findings and recommendations are used. DPME provides approximately half the funding for NEP evaluations, and the department responsible for the policy or programme being evaluated provides the remainder. An improvement plan must be completed after each evaluation, setting out in some detail the planned improvements.

From a policy perspective (characteristic one in Figure 1), there are evaluation plans at three levels. The ERU supports all NEP evaluations and coordinates the NES across government, with provincial Offices of the Premier playing a similar role in provinces. These offices are similar to DPME in that they work directly under the Premier of the province. Departments have M&E units which take responsibility for departmental evaluations following the NES. In 2016, evaluation results were used for the first time in the national budget process, with a section in the sector budget papers on learnings from evaluations and the implications of these for budgets (e.g. needing more resources for a specific function, or where efficiencies could be made).

Capacity (characteristic four in Figure 1) is a problem. Monitoring has been a role in government for many years, but evaluation is a new function in government, so in practice most so-called M&E officials and M&E units focus on, and have experience in, monitoring rather than evaluation. Evaluation competences have been developed (DPME 2014). Four evaluation courses have been developed, as well as a course in evidence for the top three levels of the public service, and over 1000 government staff have been trained.

In terms of participation of wider stakeholders, the evaluation system is mainly focused on government functions. The structures for government evaluation coordination include an Evaluation Technical Working Group (ETWG) which provides a coalition to support the system comprising national and provincial M&E officials,2 and CLEAR AA, as well as sector experts in DPME who play a key role in proposing and championing evaluations.

Civil society involvement is limited to participation in evaluation steering committees, frequent feedback workshops with stakeholders on evaluations and an active partnership with the South African Monitoring and Evaluation Association (SAMEA).3 There have been regular presentations to parliamentary portfolio committees to encourage use of evaluations as part of their oversight function. There has been some support from donors, including the EU, GIZ and DFID, particularly for the development of the NES in its early stages.

Table 3 shows the number of evaluations started and those with an improvement plan. These improvement plans typically address four to five areas arising from the evaluation, for example, legislation, services, M&E, etc., each with an improvement objective, outputs, activities and targets. Some improvement plans have been largely implemented, and others less so, partly reflecting the degree of ownership by the departments over the evaluation. Beyond these national evaluations, some 102 provincial evaluations are underway, and as of 2016, 68 national and provincial departments had departmental evaluation plans (up from 29 in 2015), indicating evaluations to be implemented that year, mostly independent of the national and provincial plans. An evaluation of the evaluation system, started in November 2016, is analysing the performance of the NES overall as well as its various components, and will recommend how the system can be strengthened.

Emerging findings regarding national evaluation systems

Overview of evaluations undertaken

‘Development of the Benin national evaluation system’, ‘Development of the Ugandan national evaluation system’ and ‘Development of the South African national evaluation system’ sections show that all three countries have developed responses in most of the six elements of an NES. Table 4 shows a summary of the evaluations undertaken in each country. South Africa has a larger number of evaluations, reflecting the greater ability of the government to fund evaluations. Further, the scope of the evaluations differs. Benin’s evaluations in particular are at the policy rather than the programme level, therefore covering a broader scope, but in less depth. In all three countries, the government is playing a strong role in leading the evaluation system. There is capacity in all three countries to conduct evaluations, but a limited pool of evaluation organisations to draw from, and more work is needed to strengthen the number of evaluators and evaluation organisations, as well as their quality.

TABLE 4: Key features of each country’s national evaluation system (January 2017).
Policy

The first element in Holvoet and Renard’s framework is a national evaluation policy, which sets out the approach and principles that justify the need for evaluations. It may be complemented by an evaluation plan or agenda which outlines how evaluations are selected. Key characteristics in policies are (1) how autonomy and impartiality or independence are ensured, (2) whether appropriate structures are in place allowing for feedback, particularly from the line departments whose programmes and projects are being evaluated, and (3) alignment to planning and budgeting.

All three countries have a national evaluation policy (see Table 5). In South Africa, this was developed prior to developing the NES; Benin and Uganda were already in the process of implementing a national system before developing a policy. This shows that both routes are possible, an important lesson for new countries taking forward the process of evaluation. All three have developed an evaluation agenda or plan to prioritise evaluations for each year. The evaluation system is differentiated from monitoring, and in all three cases mechanisms for promoting autonomy and impartiality have been developed, including the important role of the central unit in managing the interface between supply (undertaking quality evaluations) and demand from central policy units. All use independent service providers for reasons of independence and/or impartiality, as well as the lack of capacity in government to actually undertake evaluations. All countries have a system for dissemination, but this is still relatively technocratic and can be enhanced to widen knowledge of evaluation results in government, in Parliament and among the public. There is still a challenge to build the links between evaluation and planning and budgeting, although in 2016 the budget papers in South Africa included a section on the results of evaluations, an important move forward.

TABLE 5: Comparing policy elements across the three countries.
Methodology

Table 6 summarises the methodology of the NESs, that is, the main architecture for the system – how the evaluation plan is operationalised, how programmes are selected for evaluation and which evaluation methods might be appropriate for which moment in the life-cycle of a project or programme. Selection in all cases is based on government priorities. In some cases, this is a top-down decision (e.g. Benin, Uganda), whereas in South Africa there is a mix of bottom-up proposals from departments and strategic proposals from the DPME and National Treasury. Programme theory is not well developed in all cases, but evaluations are bringing more rigour in this regard. In all three countries, most evaluations undertaken are process and/or implementation evaluations, which feed back into policy more rapidly than impact evaluations. In most cases, theories of change and logframes are being developed retrospectively, against which the evaluation is then conducted. A variety of methodologies are used, in some cases with guidelines. In South Africa, this is most elaborated, with six types of evaluations defined and guidelines for observation.

TABLE 6: Comparing methodology elements across the three countries.
Organisation

Holvoet and Renard’s (2007) framework includes an organisational structure in order to lead, advocate for, implement and use evaluations (see Table 7). At the initial stages of institutionalisation, there will typically be a central body that promotes the practice of evaluation and manages the system. If this function is not centralised, the system will be fragmented, without standardised systems (Genesis 2016). International pioneers including Mexico (CONEVAL) and Colombia (Department of National Planning) have centrally located units to manage the evaluation system (DPME/PSPPD 2011b). All three Twende Mbele partner countries also have centrally located units within the Presidency or Prime Minister’s Office. There are decentralised M&E units in departments and agencies in South Africa and Benin, but these are emergent at ministry level in Uganda’s case. Few of these departmental M&E Units have the capacity to support evaluations and ways are being sought to take forward evaluations where there is limited capacity. There are cases in South Africa where some national departments have evaluation directorates, but this is an exception.

TABLE 7: Organisational elements.

In terms of coordination with donor M&E, donors are integrated into the system in Uganda, contributing to a basket of funding, and in Benin many evaluations are funded by donors. In South Africa, donor funding plays a smaller role and so is not integral to the national M&E system.

Capacity

Table 8 compares issues around capacity in the three countries. Capacity systems refer to being able to identify M&E capacity weaknesses in the system and then to address them. There are capacity needs among evaluators (supply side) and among the government staff managing evaluations (demand side). There then have to be systems or partnerships to provide capacity development.

TABLE 8: Comparing capacity elements across the three countries.

All countries have capacity weaknesses, and Benin (BEPP 2010) and Uganda have undertaken some skills assessment of their respective technical staff, but South Africa has not. South Africa has developed competences for evaluators and for government staff who manage evaluations. There has not been a wider diagnostic of the supply of and demand for evaluators, or of specific training needs, but capacity needs have emerged through practice. In South Africa and Uganda, specific evaluation courses have been developed (GIZ 2014), but in South Africa, after some initial donor support, financial challenges have limited roll-out. One emerging way to address the capacity challenges is peer learning forums that build broader communities of practice for sharing the ‘how to’ and ‘what works’, and all three countries have run annual or biennial evaluation weeks to share experiences and build capacity.

Another capacity issue is that of policy-makers and their ability to use evaluation reports. As part of an advocacy campaign to promote the use of M&E, South Africa has run seven 3-day courses for the top three levels of the public service and trained 250 officials (DPME 2017b). In all cases, there could be greater formalisation of the identification of capacity needs (Table 8), and this is planned in the Twende Mbele programme.

Participation of actors outside the executive

Table 9 covers participation by stakeholders, including legislators, civil society, evaluation associations and strategic partners such as donors. This can be as contributors to the system (e.g. in the selection of evaluation priorities), as users of evaluation results or as contributors to the evaluation process. There is also a key role for the different parts of the evaluation or evidence ecosystem (e.g. universities which train evaluators, and evaluation associations). For example, SAMEA and the Centre for Learning on Evaluation and Results (CLEAR) Anglophone Africa are participating in the steering committee for the evaluation of the NES in South Africa. In Benin, civil society is part of the NEB.

TABLE 9: Comparing participation by stakeholders across the three countries.

In South Africa and Uganda, there is systematic engagement with Parliament on the results of evaluations (see Table 9); this is starting in Benin. In Uganda and Benin, unlike in South Africa, Civil Society Organisations (CSOs) are involved through relevant committees in the selection and oversight of evaluations. In all three countries, there is a partnership with the local evaluation association, although in Benin the association is weak and needs to be strengthened. Donors are financing evaluations in Uganda and Benin, and are directly involved in the oversight mechanisms in Uganda. Other key actors include universities, which deliver some of the capacity development work and may also bid to undertake evaluations. More work is needed to bring in evidence brokers such as think tanks, which would help to widen the pool of evaluators as well as to broaden dissemination through these important knowledge brokers.

Donors play a much stronger role in Benin and Uganda, and both countries have managed to incorporate them into the evaluation system, Uganda in the most structured way through the GEF.

Quality and use

All three countries focus on ensuring the use of evaluations (Table 10): findings of evaluations are discussed in workshops with stakeholders and senior management, and there is a formal process of submitting recommendations to ministries. There is some process of dissemination, which in South Africa includes policy briefs and, in some cases, thematic workshops. The follow-up process is better defined in South Africa, but the other two countries are looking at how to strengthen this aspect. Table 10 shows that in all three countries a significant proportion of evaluation findings are implemented in most of the evaluations.

TABLE 10: Comparing focus on use across the three countries.

In all three countries, the results of evaluations are presented to Cabinet, which gives weight to implementation. There is a formal follow-up process in Uganda and South Africa, with some form of improvement plan generated after the evaluation to indicate how recommendations will be implemented, and, in South Africa’s case, six-monthly progress reports from departments for 2 years. This is being considered in Benin. In South Africa, there can be a tension when these improvement plans are treated by departments as a compliance exercise. It is most important that departments want to do the evaluations, and so want to develop and implement improvement plans. Work is currently underway in all three countries to strengthen this aspect.

South Africa has started to institutionalise the use of evaluation information to inform the budget process, with this having happened for the 2017/2018 and 2018/2019 budgets. This is emergent work; a methodology is being developed but still needs testing to be most useful. The evaluations were not designed to answer budget questions but rather performance questions, and some changes may be needed to strengthen this link.

Emerging lessons

As evaluation widens in Africa, including through the support of Twende Mbele, there are emerging lessons which can be harnessed in supporting this roll out.

A key feature of the three countries is that a central unit in the Presidency or OPM was given the mandate to lead the evaluation system and so has the authority to take the system forward. This ensures significant political will to make an M&E or evaluation system work, with support from political as well as technical champions, even though there may still be contestation in government over some of these roles. Even where the central unit has only a few staff (as in Benin), it has been able to leverage resources to get evaluations happening and an NES in place.

Having the policy in advance (as South Africa did, but Uganda and Benin did not) does not seem necessary, although there needs to be some definition of how the system will work, how it will provide for impartiality, etc. Other countries such as Niger, Kenya and Ghana have developed or are developing national evaluation policies and can learn from the experience of South Africa, Benin and Uganda in doing so. In fact, a special session on national evaluation policies was organised by Twende Mbele at the SAMEA 2017 conference to do precisely this.

Given limited resources and capacity, all three countries have started their respective NESs with priority national level evaluations expressed in an evaluation plan or agenda using donor resources where needed but driving the agenda themselves. This is important if evaluation is to become part of countries’ strategic agendas, not just imposed by donors.

Evaluation systems are extending beyond the national level: in South Africa to the provincial level, and Benin is keen to extend its evaluation system to the municipal level. South Africa now has some outstanding examples of provincial evaluation plans, in Gauteng, Limpopo and the Western Cape, with the latter alone having done 23 provincial evaluations, and one provincial department 16 evaluations. The progressive development of systems seems essential, demonstrating what can be done and building interest in governments and among wider stakeholders. The development of suitable, and probably simpler, models of evaluation for local government is needed, which Twende Mbele may take forward.

In terms of evaluation type, the current fashion is for impact evaluations, which provide powerful tools to inform policy. While in some cases there may be an emphasis on impact evaluations (particularly through donor influence), in practice all countries are predominantly using implementation evaluations, looking at what is working or not and why. These are less technically complex, can be done with local capacity and are quicker to undertake and to feed back into policy or future programmes.

In terms of methodology, theory-based evaluation is one way that evaluation can be undertaken even where there are data deficiencies, assisting where the underlying programme logic was not well designed, as it can be clarified in the evaluation process. The three countries are moving to using theories of change and logic models as a core element of the process. In this way, countries are adapting Western models of evaluation to local realities.

Participation of actors outside government differs. The three countries are seeking to build a broader ecosystem, which is essential to institutionalise evaluation, working with the evaluation associations, universities, etc., to provide the support system which will enable evaluation to flourish, notably around evaluation capacity development. In South Africa, the resources of government are much greater than in the non-government sector, and there has been more of a focus on the executive than on Parliament or non-state actors. In the other two countries, the role of NGOs and donors in promoting civil society participation has been stronger, with established seats at the table in selecting evaluations and in overseeing the system. In all cases, the main involvement of the private sector is as consultants undertaking commissioned evaluations, or in some cases as members of evaluation steering committees. Twende Mbele is starting some research on where the involvement of CSOs can strengthen national M&E systems, which is likely to result in some pilots of specific interventions. This can be important in providing different viewpoints, enhancing accountability and keeping pressure on the implementation of the recommendations of evaluations.

Uganda and South Africa are seeking to involve their Parliaments, which is likely to strengthen the use of M&E in parliamentary oversight. This is important in holding government to account, for example, ensuring that departments do implement improvement plans, and thereby enhancing the democratic process. The African Parliamentary Network on Development Evaluation (APNODE) has a potentially important role in stimulating demand among African parliaments for the use of M&E, and Twende Mbele will also be funding training of parliamentarians and the development of oversight tools.

The key challenges facing these three NESs are:

  • A stronger focus on monitoring than on evaluation, and a lack of acceptance of, and resistance to, evaluation. Evaluation is often seen as an accountability tool rather than as a tool for learning. This compliance approach is reinforced by national audit offices, which tend to take a punitive approach, and there can be a tension with a more learning-oriented approach from evaluation.
  • A funding challenge because evaluation is seen as secondary to programme implementation and monitoring.
  • The lack of evaluator and government staff evaluation capacity.
  • Ensuring that evaluations are followed-up and recommendations are implemented.

Constrained budgets are a key challenge. Benin and Uganda, for example, show that when government budgets are very constrained, donor resources can be harnessed in ways where the agenda is set by government, even if the predominant funding for the evaluations themselves comes from donors. Uganda’s use of a ‘basket of funding’ from donors and government also means that no single donor has influence over a particular evaluation.

A big challenge faced by all countries is capacity – the capacity of evaluators in the country to conduct evaluations and the capacity in government to commission, undertake, manage and use evaluations. Until training in evaluation becomes more widespread, this will be a major constraint. This is a key role that CLEAR AA is playing in the region and a major area of intervention of Twende Mbele.

Another challenge is follow-up. The central agencies such as OPM in Uganda play a big role in ensuring that evaluations are implemented successfully. However, responsibility shifts to the implementing departments during the implementation phase. All three countries are seeking some way to hold these departments to account for implementing the recommendations, but much work is still needed on how to ensure that the intrinsic motivation is in place to address the findings, that suitable mechanisms are in place to track implementation and to engage in conversations about how to ensure effective implementation. There is an important role for Parliaments and CSOs in holding departments to account for implementing these improvement plans.

Conclusions

Countries such as Chile and Colombia have had NESs since the 1990s, and Mexico since the 2000s (Mackay 2009). Since the 2010s, Benin, Uganda and South Africa have undertaken a significant effort to mainstream evaluations in the work of government, in very differing political situations and with differing resource constraints. Systems are emerging with a wide variety of components – policies, plans, standards, governance structures, etc., which involve a wide range of stakeholders in the evaluation ecosystem. These have to reflect local realities and challenges as mentioned above. There is considerable local innovation in how to establish these systems, and adaptive management as these systems develop – an example of ‘Made in Africa’ rather than mimicry of the West. In terms of use, there is evidence of a significant portion of evaluations having recommendations implemented and we are beginning to see examples of integration with the budget process. We see an emerging process of innovation and piloting, building capacity and with an ongoing need for political will to ensure use of evaluation findings. The peer learning approach has already enhanced these systems, and the resources being made available through the Twende Mbele programme provide an opportunity to deepen this and to expand evaluation to other countries in Africa.

Acknowledgements

The Department for International Development and Hewlett Foundation have funded the Twende Mbele programme, which is permitting the formalisation of the country/CLEAR AA/AfDB partnership, including the opportunity to participate at AFREA. We also wish to thank Aristide Djidjoho and Oswald Agbadome who founded the Benin evaluation system and were initial founder members of Twende Mbele but have now left for other pastures.

Competing interests

We declare that we have no financial or personal relationships which may have inappropriately influenced us in writing this article.

Authors’ contributions

I.G. has led the South African national evaluation system and the Twende Mbele programme during its inception, as well as the development of this article. A.B. and T.L. have led the Ugandan evaluation system from the beginning, and have been founding partners in Twende Mbele, with T.L. the current Chair. A.G. and D.S. joined the Benin evaluation system in 2014 and have been involved in Twende Mbele since then. L.R.S. is a founder member and has led CLEAR AA since 2015. K.R.-M. has been an active member of Twende Mbele since 2016. All have contributed to the article, with I.G. undertaking overall editing.

References

Beney, T., Mathe, J., Ntakumba, S., Basson, R., Naidu, V. & Leslie, M., 2015, ‘A reflection on the partnership between government and the South African Monitoring and Evaluation Association’, African Evaluation Journal 3(1), Art. #164, 1–6. https://doi.org/10.4102/aej.v3i1.164

Bureau d’Evaluation des Politiques Publiques (BEPP), 2010, Etude diagnostique des capacités nationales en evaluation, 84 p. Bureau d’Evaluation des Politiques Publiques, Cotonou.

Bureau d’Evaluation des Politiques Publiques, (BEPP), 2012, Politique Nationale d’Evaluation 2012–2021, 90 p. Bureau d’Evaluation des Politiques Publiques, Cotonou.

CLEAR/DPME, 2012, ‘African monitoring and evaluation systems workshop report; Reflection and learning amongst peers’, Premier Hotel Pretoria, CLEAR, Johannesburg, 26–29 March.

Davids, M., Samuels, M.-L., September, R., Moeng, T.L., Richter, L., Mabogoane, T.W. et al., 2015, ‘The pilot evaluation for the National Evaluation System in South Africa – A diagnostic review of early childhood development’, African Evaluation Journal 3(1), Art. #141, 1–7. https://doi.org/10.4102/aej.v3i1.141

Department of Performance Monitoring and Evaluation (DPME), 2011a, National evaluation policy framework, DPME, Pretoria.

Department of Performance Monitoring and Evaluation (DPME)/PSPPD, 2011b, Report on Study Tour to Mexico, Colombia, Brazil and the US, 25 June to 12 July 2011, DPME/PSPPD, Pretoria.

Department of Performance Monitoring and Evaluation (DPME), 2014, Evaluation competency framework for government, version 2, DPME, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2016, Internal report on implementation of Improvement Plans, November 2016, DPME, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2017a, Presentation to National/Provincial M&E Forum, DPME, Pretoria.

Department of Planning, Monitoring and Evaluation (DPME), 2017b, Annual Report on the NES 2015–16, DPME, Pretoria.

Direction Générale de l’Evaluation, 2016a, Guide Méthodologique Nationale d’Evaluation, BEPPAG, Cotonou, p. 107.

Direction Générale de l’Evaluation, 2016b, Rapport de suivi de l’utilisation des résultats des evaluations realisées au cours de la période 2010–2013, BEPPAG, Cotonou, p. 73.

Furubo, J., Rist, R. & Sandahl, R., 2002, International atlas of evaluation, Transaction Publishers, New Brunswick.

Genesis, 2016, Evaluation of the National Evaluation System in South Africa. International Benchmarking and literature review, Draft report, DPME, Pretoria.

GIZ, 2014, Evaluation capacity development, GIZ, Kampala.

Goldman, I. & Mathe, J., 2014, ‘Institutionalisation philosophy and approach underlying the GWM&ES in South Africa’, in F. Cloete, B. Rabie & C. De Coning (eds.), Evaluation management in South Africa and Africa, pp. 554–571, African Sun Media, Stellenbosch.

Goldman, I., Mathe, J.E., Jacob, C., Hercules, A., Amisi, M., Buthelezi, T. et al., 2015, ‘Developing South Africa’s national evaluation policy and system: First lessons learned’, African Evaluation Journal 3(1), Art. #107, 1–9. https://doi.org/10.4102/aej.v3i1.107

Government of Uganda, 2013, National policy on public sector monitoring and evaluation, Government of Uganda, Kampala.

Holvoet, N. & Renard, R., 2007, ‘Monitoring and evaluation under the PRSP: Solid rock or quick sand?’, Evaluation and Program Planning 30, 66–81. https://doi.org/10.1016/j.evalprogplan.2006.09.002

Mackay, K., 2009, Building monitoring and evaluation systems to improve government performance, Good practices in Country-led M&E Systems – Part 2, pp. 175–186, UNICEF, New York.

Mouton, J. & Wildschut, L., 2017, Evaluation in Africa: Database and survey report, CLEAR AA, Johannesburg.

OPM, 2006, National Integrated Monitoring and Evaluation Strategy (NIMES), Office of Prime Minister, Kampala.

OPM, 2009, Mapping evaluation demand, practices and capacities, Office of Prime Minister, Kampala.

OPM, 2010, National policy on public sector monitoring and evaluation discussion paper, Office of Prime Minister, Kampala.

Phillips, S., Goldman, I., Gasa, N., Akhalwaya, I. & Leon, B., 2014, ‘A focus on M&E of results: An example from the Presidency, South Africa’, Journal of Development Effectiveness 6(4), 1–21. https://doi.org/10.1080/19439342.2014.966453

Rider Smith, D., Nuwamanya Kanengere, J. & Nabbumba Nayenga, R., 2010, ‘Policies, institutions and personalities: Lessons for Uganda’s experience in monitoring and evaluation’, in From policies to results: Developing capacities for country monitoring and evaluation systems, UNICEF, New York.

Footnotes

1. https://www.cia.gov/library/publications/resources/the-world-factbook/geos/ug.html – accessed 8 March 2017.

2. Goldman and Mathe (2014) discuss the change management/institutionalisation process.

3. The relationship with SAMEA is explored in Beney et al. (2015).


 
