Article Information

Authors:
Stephen Porter1,2
Ian Goldman3

Affiliations:
1Centre for Learning on Evaluation and Results for Anglophone Africa

2Graduate School of Public and Development Management, University of the Witwatersrand, South Africa

3Department of Performance Monitoring and Evaluation of the Presidency of South Africa

Correspondence to:
Stephen Porter

Postal address:
2 St David’s Place, Parktown, South Africa

Dates:
Received: 09 Apr. 2013
Accepted: 29 July 2013
Published: 18 Sept. 2013

How to cite this article: Porter, S. & Goldman, I., 2013, ‘A Growing Demand for Monitoring and Evaluation in Africa’, African Evaluation Journal 1(1), Art. #25, 9 pages. http://dx.doi.org/10.4102/aej.v1i1.25

Copyright Notice:
© 2013. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A Growing Demand for Monitoring and Evaluation in Africa
Abstract

When decision-makers want to use evidence from monitoring and evaluation (M&E) systems to assist them in making choices, there is a demand for M&E. When there is great capacity to supply M&E information but low capacity to demand quality evidence, there is a mismatch between supply and demand. In this context, as Picciotto (2009) observed, ‘monitoring masquerades as evaluation’. This article applies this observation to six case studies of African M&E systems by asking: What evidence is there that African governments are developing stronger endogenous demand for evidence generated from M&E systems?

The argument presented here is that demand for evidence is increasing, leading to further development of M&E systems, with monitoring being dominant. As part of this dominance there are attempts to align monitoring systems to emerging local demand, whilst donor demands are still important in several countries. There is also evidence of increasing demand through government-led evaluation systems in South Africa, Uganda and Benin. One of the main issues that this article notes is that the M&E systems are not yet conceptualised within a reform effort to introduce a comprehensive results-based orientation to the public services of these countries. Results concepts are not yet consistently applied throughout the M&E systems in the case countries. In addition, the results-based notions that are applied appear to be generating perverse incentives that reinforce upward compliance and contrôle to the detriment of more developmental uses of M&E evidence.


Introduction

Robert Picciotto (2009) asked: ‘What happens when you have low demand and high supply? This is when monitoring takes over evaluation and monitoring masquerades as evaluation.’ In other words, when monitoring is the dominant part of a government monitoring and evaluation (M&E) system, this indicates weak demand from decision-makers for evidence. This appears to be a key issue in African government M&E systems. The supply of M&E in Africa has to a large extent been shaped by donor demands, which have stimulated the development of M&E practice in the absence of national government demand. Even in South Africa, where development aid and donor influence are not very significant relative to GDP, many evaluators have been trained in a donor-orientated milieu, owing to the strength of donor demand and the limited government system. The donor-driven orientation of M&E practice has been recognised by the African Evaluation Association (AfrEA 2007) and within the Paris Declaration on Aid Effectiveness (OECD 2005).

However, times are changing. With increasing wealth and expectations, greater demands are being placed upon governments for accountability, as seen for example in service delivery protests in South Africa, the election loss of an incumbent president in Senegal, and the requirements of the new constitution in Kenya. Changes in demands for accountability are affecting the kind of information government requires. Given this changing context, the key question that this article sought to address is: What evidence is there that African governments are developing stronger endogenous demand for evidence generated from M&E systems?

In answering this question the article finds that monitoring is still dominant, but there is evidence of emerging endogenous demand from African governments for evidence. This demand is sometimes being met by country-led monitoring systems, and in some countries (notably South Africa, Uganda and Benin) evaluation systems that supply deeper analysis are being developed. What is striking in the case studies is, firstly, the merging of donor-driven and country-led demands and, secondly, the narrow interpretation of results-based management that focuses on accounting or contrôle and less so on development. Put differently, demand for in-depth evidence is still forming and there are growing pains in the demand for evaluation. This argument is based on a review of six country case studies presented at an African M&E Systems Workshop held in March 2012, organised by the Centre for Learning on Evaluation and Results for Anglophone Africa (CLEAR) and the Department of Performance Monitoring and Evaluation (DPME) in South Africa. The workshop brought together government agencies from Benin, Burundi, Ghana, Kenya, Senegal, South Africa and Uganda that are mandated, to varying degrees, to lead the implementation of M&E systems across their governments (CLEAR & DPME 2012a, 2012b).

This article should be seen as a rapid review of the case studies, summarising initial lessons and identifying areas for further diagnosis in each of the countries, whilst focusing on issues related to demand. As such it is important to view this article as part of a cycle of action, reflection, learning and planning rather than as a finalised analysis. Although we provide an overview of the government M&E systems, the analysis stops short of covering the entire national M&E system in each country. The case studies tend to represent the perspective of the specific centre-of-government agency involved in the study. Consequently the case studies do not consistently discuss links to line ministries, national statistics agencies or the role of the Auditor General. This makes it difficult to draw consistent comparisons, so these relationships are put to one side. The article focuses on demand rather than supply, that is, the capacity of the evaluation profession. In developing the arguments above, this article discusses the concepts of results and demand, examines the six country case studies in relation to trends emerging in institutional design, monitoring systems and evaluation systems, and draws some overall conclusions.

Ethical considerations

No vulnerable groups were interviewed as part of this study. All participants took part willingly, and informed consent was given verbally in the underlying case research. No direct quotations from informants have been included in this study.

Concepts review

In this article, M&E is viewed as a key element in the transformation of the public sector to be efficient, effective and responsive to citizens and parliament. For M&E systems to make this contribution, governments need greater capacity to demand results-orientated monitoring (tracking what they have planned to do) and also to ask deeper questions of why and how, through evaluations of policies and programmes. Two key concepts that require further definition in this section are results and demand. The definitions provided below are concise; the concepts are expanded upon in the main analysis that follows.

Firstly, what do we mean by results? Government M&E systems in Africa operate in a complex terrain. There are forces that push for the appropriation of the benefits of government by rent-seekers (inside or outside government). At the same time, there are forces pushing to improve performance, and there are strategic opportunities for taking forward a results-orientated reform agenda, using evidence to support improvements in delivery. In most countries there is no single ‘truth’, and these forces ebb and flow across government and across time. In this article, a results orientation means that government planning, budgeting and M&E support improvements in people’s lives, especially for the vulnerable. In effect, managing for results means that government focuses the tools of governance on the needs of the citizenry, rather than on the internal logic of the bureaucracy (Behn 2003; Benington & Moore 2011; OECD 2005; Perrin 1998; Pollitt et al. 2009).

Secondly, when decision-makers, whether political or bureaucratic, want to use evidence from M&E systems to assist them in making decisions, there is a demand for M&E. For the M&E system to be used and sustainable, it is important that demand is endogenous to the governance context in which it operates, as opposed to arising from structures external to the system, such as donors (exogenous). This argument appears in a variety of forms in the evaluation and capacity development literature (Bemelmans-Videc et al. 2003; Boyle & Lemaire 1999; Chelimsky 2006; Lopes & Theisohn 2003; Mackay 2007; Picciotto 1995; Plaatjies & Porter 2011; Pollitt et al. 2009; Toulemonde 1999; Vedung 2003; Wiesner 2011).

Analysis of cases

The analysis in this article focuses on three dimensions of the M&E system: institutional design, monitoring systems and evaluation systems. These are selected as reflective of the content of the system (monitoring and/or evaluation), and as a framework for understanding the parameters of demand.

Institutional design
The institutional design of government M&E systems is important, including the systems for capturing, processing, storing and communicating M&E information. Monitoring helps managers and policymakers to understand what the money invested is producing and whether plans are being followed. Evaluation helps to establish what difference is being made, why the level of performance is being achieved, what is being learned from activities, and whether and how to strengthen implementation of a programme or policy. (These are all elements that can be included in evaluation policies; see, for example, DPME 2011.) In each of the six countries studied, there are policies and frameworks that codify how the M&E system operates. In some countries the policy is backed up by the constitution, for example in Uganda. In others, the centre-of-government M&E system operates on the basis of executive prerogative, as in South Africa.

In all six countries, monitoring is mainly undertaken through line ministries (in some countries sector agencies are referred to as ministries, in others as departments; both words are used in this article), with an agency or other related institution in the centre of government collating the information (see Table 4 and Table 5). The government ministries where M&E agencies can be located include planning, finance and the Office of the President or Prime Minister. Units within these ministries have specific roles within M&E. There are often tensions between these ministries in relation to the M&E system, both in the way they relate to the rest of government and in the way they relate to each other over the vision of reform they are promoting, as mandates can overlap. For M&E evidence to have a stronger influence on decision-making and the political allocation of resources, there needs to be coherence between the mandates and efforts of these crosscutting ministries. Consequently, any serious public reform effort that focuses on results requires an institutional design in which results information is used in planning and budgeting and so affects resource allocation and decision-making.

Table 1 outlines the central agencies that were involved in the case studies and their mandates. In most cases they are the main champions for M&E in their respective countries and play an important role, such as coordinating the flow of information in the system. Some agencies move beyond coordination to information generation through evaluation, for example in Uganda, Benin and South Africa.

TABLE 1: Mandates of units that supported the case studies (Source: CLEAR & DPME, 2012a).

Only three of the units have mandates that were established before 2001, and five of the seven units have been established in the past five years. This indicates that the M&E function in Africa’s governments is young. Further, the table shows the differing institutional locations of these champions: three are within the Office of the President or Prime Minister and two are in planning ministries. Senegal is an outlier in the case studies, as there is no formal centralised M&E function; currently the Directorate of National Planning in the Ministry of Finance leads on evaluation by virtue of its role in the public investment programme and its proximity to the management of expenditure (and by extension planning). In several countries there is also a ministry responsible for public service reform which plays a significant role in M&E, for example the Délégation à la Réforme de l’Etat et à l’Assistance Technique (DREAT) in Senegal. With the advent of the new government in Senegal in March 2012 there is now a stated demand for a commission for the evaluation and monitoring of public policies and programmes attached to the presidency. Currently Ghana has two central units with overlapping mandates, one based in the Office of the President (involved in the case research) and the other being the National Development Planning Commission (NDPC). This is commented upon later in the analysis of monitoring systems. Unless stated otherwise, the agencies listed coordinated the case studies and participated in the workshop.

Table 2 shows the human and financial resources of the agencies as of March 2012, and so is indicative of the capacities of the agencies. Each country operates within its own unique circumstances. For example, the DPME in South Africa is larger than all of the other agencies combined, but also has to work with a public service of over one million people in a country that produces around one-third of Africa’s GDP.

TABLE 2: Budget and resource allocation.

Table 3 highlights the dispersed roles and responsibilities across the government ministries. In each country there are policies and frameworks that codify the operation of the M&E system. In some countries the policy relating to the M&E agency is supported by the constitution, for example in Uganda and Ghana. In other cases key parts of the M&E system operate on the basis of executive prerogative, as with the DPME in South Africa. In Table 3 the items in bold represent the key functions of the agencies that attended the African M&E Workshop and guided the research.

TABLE 3: Location of functions in the government M&E systems.

Table 3 summarises the institutional arrangements for the M&E units and highlights three challenges in harmonising the implementation of the M&E system.

Firstly, there are challenges around the fragmentation of M&E functions amongst government departments. To provide a strong and coherent message across government, the Planning and Finance ministries and the Office of the President or Prime Minister need to be consistent about the rules, incentives and standards of the M&E system, and they must communicate effectively with one another. In practice the case studies showed a multiplicity of units with M&E mandates and a wide variety of monitoring processes and instruments. The lack of coordination around these mandates and diverse systems places additional strain upon the rest of government. Examples in each country include:

• In Ghana, there is a dual institutional mandate: performance monitoring reports are requested by both the NDPC and the Office of the President. Interestingly, the dual reporting system in Ghana could be an indicator of demand for evidence: dissatisfied with the current mechanisms, the administration introduced a new unit to meet its accounting requirements.

• In South Africa, DPME collates performance monitoring reports around government’s priority outcomes, whilst quarterly performance and financial monitoring of departments is reported to the Treasury.

• In Kenya, monitoring data is required to feed into the Vision Delivery Secretariat, the Performance Contracts of the Office of the Prime Minister, the Public Expenditure Review and the Annual Progress Report. All of these functions require slightly different data and involve different sets of role players, so although there is a National Integrated Monitoring and Evaluation System (NIMES), in practice there is still duplication of effort.

• In Uganda there are multiple donor reporting systems at project level, and one of the key recommendations of the case study is to reduce and harmonise monitoring functions and minimise duplication.

• In Benin there are different agencies for monitoring, programme and public policy evaluations and impact evaluation.

• In Senegal the institutional design of a coherent M&E system is still in its early stages. A more results-based orientation is emerging in the donor-orientated Rapport Annuel sur l’Absorption des Ressources Extérieures (RARE).

As a result of these multiple reporting lines, a common refrain is that unnecessary time and effort is expended in line departments on reporting upwards, and sometimes on duplicate reporting. This can cause frustration, reinforce monitoring for compliance and reduce the space for effective use of M&E for reflection, feedback and learning. Multiple lines of reporting can also lead to gaming by line departments: reporting what they perceive management wants to hear rather than actual results, and not using the results for performance improvement (Pollitt et al. 2009). This issue has also been noted in South Africa (Engela & Ajam 2010).

Secondly, in no country is there a shared vision across the key central agencies of the role of M&E within a broader approach to public service reform; this can be seen in the agencies’ differing approaches to managing for results. In South Africa, where legislation points towards performance budgeting, it is currently difficult to link the performance monitoring approach of the DPME to the programmes that appear within the budget. In Kenya, there are disconnects between the aspirations of the Results for Kenya programme and the indicators of the NIMES, which mainly monitors outputs. Interestingly, although there are examples in Kenya of monitoring information feeding into budgeting, the system is not necessarily results orientated. This raises the possibility of high spending on outputs that may well not be sufficient to achieve the targeted outcomes. In Uganda it was noted that, owing to its positional power, the Finance Ministry could inadvertently undermine the M&E goals of the Office of the Prime Minister.

Michael Barber (2008:78) observed that changing government is difficult, and that a shared vision is often not given enough attention: the power of a vision is underestimated, the vision is under-communicated, or obstacles are permitted to block it. If the underlying paradigm and vision for change are not shared, then government will tend to develop instruments that do not cohere, for example the differing instruments developed by the DPME and the Treasury in South Africa. If governments have a genuine commitment to a results-focused public service, then they require consistent messaging and language by champions who can straddle institutional silos, an effective system of planning to develop solid theories of change (plans with strong logic models), as well as practical mechanisms that enable results information to influence resource allocation. The examples above demonstrate that there are institutional disjunctures between the aspirations of government M&E systems and a shared drive towards performance. This means the sum is less than the parts, and the collective impact of results-based M&E is reduced.

Finally, the merging of donor and country-led demands can create additional tensions. In Benin, within the Prime Minister’s office there are four separate M&E functions: two related to evaluation and two related to monitoring of two different development frameworks. In Kenya, related M&E functions are spread across ministries without a cohesive institutional framework. In both countries, some of these functions have their roots in donor-led accountability demands, whilst others are country-led systems. In Benin, the M&E included in the National Policy for Development Assistance is predominantly a form of donor reporting whilst the evaluation processes are country led, which makes linking the two a challenge. In Kenya, whilst performance monitoring may originally have had its roots in, and may still be linked to, donor reporting, there is country-led demand by the Vision Council to use information from NIMES. The example from Kenya points towards a merging of donor-driven and country-led demands. Similar patterns of development and alignment can be identified in the other case countries. The overlapping of donor-driven and country-led demands has both promise and peril. The promise is that different demands can be met from the same M&E base and that donor demands help to resource and enrich the systems. The peril is that, without a focus on harmonisation, the demands will continue to compete for attention and further weaken already under-capacitated systems.

In summary, the case countries demonstrate that M&E structures and systems, and their demands on government, are still in a process of development and are not yet coherent. In all countries there are mandated agencies that act as institutional champions for M&E. The championship role is emergent, as all the agencies are recently established compared with, for example, Colombia, where the systems have been in place since the early 1990s. All of the case countries, except South Africa, are finding it a challenge to harmonise the roles of the different government ministries and donors, in terms of articulating a common vision for results-orientated reform, linking M&E systems to planning and resource allocation, and establishing clearly differentiated but complementary roles and responsibilities.

Monitoring systems
Monitoring is a management function focused on tracking whether you are doing what you intended, whether at the programme level or for higher-level national goals. Monitoring helps you to know how you are progressing compared with the plan, what is being produced, and what evaluative questions to ask. Monitoring data does not enable you to understand why something is happening. When evaluative conclusions are drawn at the apex of government from monitoring evidence alone, there are likely to be errors: claiming an effect when there is none, claiming no effect when there is one, or a lack of understanding of what is causing what. In all countries except South Africa and Senegal, the target of the monitoring system is tracking progress against a national plan.

In all six case countries, monitoring is the oldest and best-resourced part of the M&E system, demonstrated in Ghana, Kenya and Benin by the extensive reporting mechanisms in place. Large amounts of time and budget have been focused on developing the supply of monitoring reports without necessarily producing evaluative evidence. For example, 70% of the M&E budget in Ghana for 2010 was reported as being spent on monitoring activities alone (NDPC 2011).

Annual progress reports are arguably the main products of Kenya’s and Ghana’s M&E systems. Kenya produces two types of cross-government annual report: the Annual Progress Report against indicators within the NIMES, and the Public Expenditure Review. Benin and Uganda also produce annual reports, although with the emergence of government-led evaluations these are only one output of the system. In Benin, quite elaborate systems have been constructed around the two main plans, the poverty reduction strategy and the development assistance strategy. South Africa has quarterly performance reporting by departments and against priority outcomes, as well as departmental annual reports. Uganda has also moved to more regular monitoring systems linked to six-monthly reporting directed at politicians.

Senegal is an outlier. As there is no overall mandated lead agency, the monitoring function is dispersed amongst a number of structures, most of which fall under the responsibility of the Ministry of Economy and Finance (including the National Statistical Office). The Senegal case study argues that, until now, monitoring has been carried out more as a control or oversight function over programmes than as a tracking and management tool for improved performance. In addition to the donor-orientated RARE, there are also important progress reports resulting from the annual reviews of the poverty reduction strategy and the government’s main sectoral programmes, notably in health, education, justice, and water and sanitation.

In South Africa, Uganda, Ghana, Benin and Kenya, the lead agency collates information from other departments and so is dependent upon the capacity of other agencies to produce quality information. Monitoring reports are generally widely disseminated and in all cases considerable human and financial resources are put into their development.

Monitoring systems that respond to political demand for reporting on performance against key national targets are being put in place in some countries. In South Africa, there is quarterly reporting to Cabinet on the performance of departments against 12 priority outcomes. Reports are linked to the delivery agreements (plans) for each outcome and to the performance agreements of ministers. In addition, after two years a critical Mid-Term Review report summarised the emerging achievements and challenges for each outcome; this has been released as a public document. The existence of these outcome reports provides an opportunity for the strategic agenda to be reflected upon regularly within Cabinet. However, there is limited naming and shaming of ministers, and performance reviews of ministers by the president based upon the performance reports are infrequent.

In Uganda, there is a system of biannual Cabinet retreats to review the performance of the government. The prime minister, ministers and top public servants attend the retreats, which review reports and may issue recommendations to inform budgeting processes. In this way, both in South Africa and in Uganda, there are emerging mechanisms that institutionalise monitoring as a feed into executive decision-making processes. However, the consequences of poor performance and the rigour of the evaluative decisions that are taken remain unclear.

The monitoring systems of the case study countries continue to mature. For most of the countries, the availability on the Internet of relatively recent data on government performance reflects a dividend from building the supply side of M&E. However, issues with monitoring capacity, data quality and the timeliness of reporting continue to be highlighted in all cases.

As Table 4 shows, most countries aspire to have a comprehensive system in which monitoring is implemented across all ministries, departments and agencies at all tiers of government. This is a difficult task. In Mexico, substantive work at the sub-national level was undertaken only after more than 10 years of implementing federal M&E systems. The case countries may benefit from a review of the scope of the monitoring system relative to their capabilities and the demand being expressed by champions and politicians, and perhaps limit the scope to one tier of government and a few targeted cross-cutting programmes.

TABLE 4: Monitoring systems across the case countries.

TABLE 5: State of evaluation systems (Source: CLEAR & DPME, 2012a).

In summary, there is evidence of growing endogenous demand for monitoring evidence. In Uganda and South Africa performance reports are increasingly tabled and discussed in Cabinet on a regular basis, showing high-level demand for M&E evidence. In Senegal, one can be guardedly optimistic about the prospects for improved coherence and more systematic use of existing initiatives. In Benin, drawing together the various monitoring systems could strengthen the evaluation function. Monitoring nevertheless dominates the M&E systems in all cases, and there are issues with the results orientation, scope and data quality of the monitoring systems.

Evaluation systems
Evaluation helps you to understand change, both anticipated and unanticipated, and plan for what happens next. It does this by establishing why the level of performance is being achieved, what difference is being made, what has been learned, and what to do next in the implementation of a policy or programme. Evaluation can be applied to design, clarify, develop and summarise programmes across their development cycle (Owen 2007). Evaluation can helpfully distinguish between implementation failure – not doing things well – and theory failure – doing things well but not getting the desired result (Chen 2005; Funnell & Rogers 2011). Evaluation is different from research as it seeks to support the development of utilisation-focused answers to stakeholders’ questions (Patton 2008). Monitoring operates during implementation and only really answers questions on what is happening, but not why. Good evaluation helps us to deepen our analysis by offering in-depth evidence-based guidance for improving interventions.

In three of the six countries, government evaluation systems are actively being taken forward (Benin, Uganda and South Africa), but the systems are less than three years old, with Benin having the oldest. The three other countries (Burundi, Ghana and Senegal) are yet to develop national evaluation systems. This does not mean that evaluation is not being undertaken in the latter countries, or that there is no capacity, just that there is no national system. In all countries, key challenges for implementing evaluation include invoking demand from politicians and developing adapted endogenous systems that can draw on quality in-country evaluation capacity.

In the three countries with a national evaluation system, a core list of evaluations of national importance (a national evaluation agenda or plan) is defined as the focus. In Benin there are two agencies undertaking evaluations: the Bureau d’Evaluation des Politiques Publiques (BEPP) and the Observatoire du Changement Social (OCS). The BEPP undertakes evaluations of public policies, mainly at sector level, whilst the OCS undertakes evaluations of the impact of the poverty reduction strategy. The BEPP, with its four staff members, has initiated eight sector evaluations in the areas of agriculture, education, rural electrification, budget decentralisation, health, tourism and public administration performance; five of the eight have been completed. In Uganda, there is a two-year rolling evaluation agenda, mainly donor funded and overseen by an M&E technical working group. The Government Evaluation Facility (GEF) is run by a small secretariat in the Office of the Prime Minister, which provides technical support for evaluations and the evaluation system. In Uganda, one national evaluation has been completed. In South Africa, Cabinet has approved two annual national evaluation plans, for 2012/13 and 2013/14. The eight evaluations in the 2012/13 plan are in progress, and the 15 in the 2013/14 plan are currently being scoped. The system is supported by an Evaluation and Research unit in the DPME, which currently has five technical staff supporting evaluations.

Table 5 shows the implementation status of National Evaluation Systems in the case studies. The South African system has defined six different forms of evaluation that range across the policy and programme development cycle. In contrast, Benin and Uganda are more focused on implementation and impact or summative forms of evaluations. The units in Benin, South Africa and Uganda are endeavouring to set standards across government for evaluation and attempting to invoke demand for evaluation by introducing a range of tools to increase commitment by Cabinet, the president or prime minister and sector departments. The specific tools being applied to support this include mechanisms such as departments proposing evaluations, development of a national evaluation agenda or plan, development of improvement plans to address recommendations (South Africa), and making the reports publicly available.

In Ghana, Kenya and Senegal, there is in-country evaluation capacity which is applied to evaluations of government projects, but without a national system. In Ghana’s case, evaluation predominantly remains a practice undertaken outside of government; for example, evaluation accounted for less than 3% of overall spending on M&E in 2010/2011 (NDPC 2011:4). In Kenya, there is a large amount of evaluation experience to draw upon, for example the 36 randomised control trials that have been carried out by the Jameel Poverty Action Lab (JPAL). Many other evaluations are undertaken in Kenya with donor support, to the extent that organisations within Kenya, such as the Alliance for a Green Revolution in Africa (AGRA), have a good level of capacity. The Monitoring and Evaluation Directorate (MED) in Kenya could potentially draw upon this experience to develop its own evaluation commissioning capacity should its mandate be expanded. In Senegal, it is reported that evaluations are aligned with donor project cycles and appear to be undertaken mainly to fulfil the routine evaluation requirements of those donors.

In all case countries, whether they are currently conducting evaluations or not, there appear to be some quick wins in enhancing the supply of evaluation and invoking demand. There is high-quality evaluation expertise in most of the countries. Development of evaluation norms and standards can help government to place demands on the evaluation profession that will raise the overall quality of practice. Further, local capacity can be given preference in commissioning evaluations, rather than relying upon international expertise. In this way, government can improve the quality of the provision of a public good (evaluation) by developing and regulating the market. In the longer term this can help to enhance local and contextually relevant capacity for both monitoring and evaluation.

The work of the nascent evaluation units in the three countries is still taking shape. A common challenge is that impact evaluation of programmes is desired, but evaluation has not been designed in from the outset (so establishing a counterfactual is a challenge). Consequently, innovative methodologies are needed, and the skills for these may be lacking. In all countries, the implications of controversial or unpopular evaluation findings are yet to be faced. There is a risk that the broader political and economic environment could impact on evaluation systems, as in Uganda, for example, where donor funding was withdrawn from the Office of the Prime Minister due to corruption. The development of mechanisms that help to invoke key questions from decision-makers is an important area, and the countries are experimenting with different processes; for example, South Africa brought in international experts for an evaluation design clinic. Likewise, developing the quality of the supply of evaluations is important, so that decision-makers are assured of the quality of the product they are receiving. In this way governments can become more confident that evaluation helps them to understand issues and to direct the public service towards results, rather than seeing it as a compliance ‘stick’ or something that exposes them to criticism. It is very easy to blame the messenger if the results are unpopular.

Conclusion

Further diagnosis to improve demand and utilisation
The key question that this synthesis of the six country case studies sought to address is: What evidence is there that African governments are developing stronger endogenous demand for evidence generated from M&E systems? By analysing the case countries in terms of their institutional design, monitoring systems and evaluation systems, we are able to offer some preliminary answers to this question and to point to some challenges in the emerging systems.

Monitoring is dominant in all six countries. There are attempts to refine monitoring systems to respond to country-led demands whilst still responding to donor needs. In several countries, monitoring information is all that is available through government systems, and so there is a danger of ‘monitoring masquerading as evaluation’.

There is evidence of an emerging demand for evaluation in three countries (Uganda, Benin and South Africa); in the other countries there is local evaluation capacity but no national system. The only other country in Africa that the authors are aware of investing significantly in evaluation is Morocco, which is beginning to develop the use of evaluation for parliament. These countries are therefore the exception on the continent, but they nevertheless provide an example in which other countries are showing considerable interest. It is still too early to see where this will lead, how seriously evaluation findings will be taken, and how much they will influence planning and budget decisions.

The institutional arrangements for M&E are not currently harmonised in any of the countries, and in all cases there is a challenge of streamlining M&E in the absence of a coherent system across government. This reduces the power of the M&E system, causes frustration and promotes perverse incentives. Ostensibly powerful lead agencies have been mandated in the Offices of the President or Prime Minister or in planning ministries, and these are driving demand, including through the development of reporting mechanisms to political decision-making bodies. An issue that remains is that the results orientation is not coherent across government, and the planning, budget and M&E systems do not link effectively.

Donor influence remains strong in most of the countries, although somewhat less so in South Africa, yet there is evidence of a growing endogenous demand for M&E evidence. However, effectively integrating donor and in-country systems remains a challenge, and finding ways to build donor requirements into an integrated local system is critical. Work is still needed to determine how that demand is satisfied and in what form and manner M&E evidence is used.

This article has built on some of the first comparative work undertaken across Africa on M&E systems, building not on donor demand but on in-country demand to share experience (the project was a partnership between DPME and CLEAR). As a result, there are emerging pan-African efforts to build on each other’s experience, with an active exchange programme around evaluation between South Africa, Benin and Uganda. International organisations like CLEAR, 3ie and donors are actively supporting this sharing. This should help to reduce donor dominance, both in terms of concepts and instruments, reinforce in-country capacity to develop M&E systems, and build local confidence.

Further in-depth work is needed, which will both deepen the analysis and lead to more in-depth sharing across countries. Some fruitful follow-up work could be undertaken from four perspectives: (1) citizens, (2) line ministries, (3) parliaments and (4) the evaluation profession. In this analysis, there is a gap in knowledge of how citizen demands for development spur government demands for evaluation. Filling this gap would be important given the emergence of an increasingly active citizenry on the continent. An investigation of line ministries would give a deeper political perspective on how the centralised rules and incentives play out in practice. Parliaments provide an opportunity for increased demand for, and use of, M&E information for accountability; they are locations of latent demand for evaluation, where there is space for contestation around evidence. Finally, deeper analysis of the evaluation profession would give an indication of the gaps between government demand and current supply, as governments start to regulate the markets they generate when commissioning evaluations.

Acknowledgements

The authors would like to acknowledge the inputs of Salim Latib and Anne McLennan, who provided valuable comments on earlier versions of this article, as well as the two anonymous reviewers. Funding for the cases underlying this article was received from GIZ and the Centers for Learning on Evaluation and Results trust fund housed by the World Bank.

Competing interests
The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

Authors’ contributions
S.P. (Centre for Learning on Evaluation and Results for Anglophone Africa and University of the Witwatersrand) was lead author and co-manager of the case research. I.G. (Department of Performance Monitoring and Evaluation of the Presidency of South Africa) was chief sponsor in the DPME of the case research, supporting development, writing and editing of the article.

References

AfrEA, 2007, ‘Making evaluation our own: Strengthening the foundations for Africa-rooted and Africa led M&E: Summary of a special conference stream and recommendations to AfrEA’, in A. Abandoh-Sam et al. (eds.), Evaluate development, Develop Evaluation: A Pathway to Africa’s future, Author, Niamey.

Barber, M., 2008, Instruction to Deliver: Fighting to Transform Britain’s Public Services, Methuen Pub. Ltd., London.

Behn, R.D., 2003, ‘Why measure performance? Different purposes require different measures’, Public Administration Review 63(5), 586–606.

Bemelmans-Videc, M.L., Rist, R.C. & Vedung, E., 2003, Carrots, sticks, and sermons: Policy instruments and their evaluation, Transaction, New Brunswick.

Benington, J. & Moore, M.H., 2011, ‘Public value in complex and changing times’, in J. Benington & M.H. Moore (eds.), Public Value: Theory and Practice, Palgrave Macmillan, London.

Boyle, R. & Lemaire, D., 1999, Building Effective Evaluation Capacity: Lessons from Practice, Transaction Publishers, London.

Chelimsky, E., 2006, ‘The purpose of evaluation in a democratic society’, in I. Shaw, J.C. Green & M.M. Mark (eds.), The Sage Handbook of Evaluation, pp. 33–55, Sage, Thousand Oaks, CA.

Chen, H., 2005, Practical Program Evaluation: Assessing and Improving Planning, Implementation and Effectiveness, Sage, Newbury Park, CA.

Centre for Learning on Evaluation and Results (CLEAR) & The Presidency Republic of South Africa Department: Performance Monitoring and Evaluation (DPME), 2012a, African Monitoring and Evaluation Systems: Exploratory Case Studies, University of the Witwatersrand, Johannesburg.

CLEAR & DPME, 2012b, African Monitoring and Evaluation Systems: Workshop Report, University of the Witwatersrand, Johannesburg.

DPME, 2011, National Evaluation Policy Framework, Author, Pretoria.

Engela, R. & Ajam, T., 2010, ‘Implementing a Government-wide Monitoring and Evaluation System in South Africa’, in Independent Evaluation Group (ed.), Evaluation Capacity Development, Working Paper Series 21, World Bank, Washington.

Funnell, S.C. & Rogers, P.J., 2011, Purposeful Program Theory: Effective use of Theories of change and Logic Models, Jossey-Bass, San Francisco.

Lopes, C. & Theisohn, T., 2003, Ownership, Leadership and Transformation: Can We Do Better for Capacity Development?, Earthscan/James & James, New York.

Mackay, K., 2007, How to Build M&E Systems to Better Support Government, World Bank, Washington.

Ministry of Finance and Economic Planning (MoFEP), 2011, The Budget Statement and Economic Policy of the Government Of Ghana for the 2012 Financial Year, Author, Accra.

National Development Planning Commission (NDPC), 2011, Resources Spent on M&E and Statistics: Final Report, Author, Accra.

NDPC, 2012, National Development Planning Commission Staff, viewed 28 August 2012, from http://www.ndpc.gov.gh/DG’s Office.html

National Treasury, 2012, Estimates of National Expenditure, Author, Pretoria.

Organisation for Economic Co-operation and Development (OECD), 2005, Paris Declaration on Aid Effectiveness and the Accra Agenda for Action, Author, Paris.

Owen, J.M., 2007, Program Evaluation: Forms and Approaches, 3rd edn., Guilford Press, New York.

Patton, M.Q., 2008, Utilization-focused evaluation, 4th edn., SAGE, London.

Perrin, B., 1998, ‘Effective Use and Misuse of Performance Measurement’, American Journal of Evaluation 19(3), 367–79.

Picciotto, R., 1995, ‘Introduction: Evaluation and Development’, New Directions for Evaluation 1995(67), 13–23.

Picciotto, R., 2012, ‘Country-Led M&E Systems - Robert Picciotto’, Part 2, video, YouTube, viewed 20 April 2012, from http://www.youtube.com/watch?v=UfNakPTODBs

Plaatjies, D. & Porter, S., 2011, ‘Delivering on the Promise of Performance Monitoring and Evaluation’, in D. Plaatjies (ed.), The Future Inheritance: Building State Capacity in Democratic South Africa, pp. 292-312, Jacana, Johannesburg.

Pollitt, C., Bouckaert, G. & Van Dooren, W., 2009, Measuring Government Activity, OECD, Paris.

Ramkolowan, Y. & Stern, M., 2009, The Developmental Effectiveness Of Untied Aid: Evaluation Of The Implementation Of The Paris Declaration And Of The 2001 DAC Recommendation On Untying ODA To The LDCS, Development Network Africa, Pretoria.

Toulemonde, J., 1999, ‘Incentives, Constraints and Culture-building as Instruments for the Development of Evaluation Demand’, in R. Boyle & D. Lemaire (eds.), Building Effective Evaluation Capacity: Lessons from Practice, pp. 153–176, Transaction Publishers, New Brunswick.

Vedung, E., 2003, ‘Policy instruments: typologies and theories’, in M.L. Bemelmans-Videc, R.C. Rist & E. Vedung (eds.), Carrots, sticks, and sermons: Policy instruments and their evaluation, pp. 21–59, Transaction, New Brunswick.

Wiesner, E., 2011, ‘The Evaluation of Macroeconomic Institutional Arrangements in Latin America’, in R.C. Rist, M. Boily & F. Martin (eds.), Influencing change: evaluation and capacity building driving good practice in development and evaluation, pp. 23–40, World Bank, Washington.
