Abstract
Background: The need to demonstrate development results has prompted governments across Africa to build systems to generate, supply and use evaluative evidence for policy-informed decision-making, budgeting and programming. National evaluation systems (NESs) are being set up across the continent, together with the processes and other monitoring and evaluation (M&E) infrastructure needed for their efficient and effective functioning.
Objectives: This article seeks to document comparative developments in the growth of systems in Anglophone African countries, and provide an understanding of these systems for capacity-development interventions in these countries. It also aims to contribute to the public debate on the development of national M&E systems, institutionalisation of evaluation, and use of M&E evidence in the larger African context.
Methods: This article uses four key dimensions as the conceptual framework of a national monitoring and evaluation system, namely M&E systems in the executive; the functioning of parliamentary M&E systems; the professionalisation of evaluation; and the existence of an enabling environment. A questionnaire based on these key dimensions was used to collect information from government and non-governmental personnel. The 2018 Mo Ibrahim index was used to collect information on the enabling environment.
Results: Findings indicate that all systems have stakeholders with different roles and contexts and are designed according to the state architecture, prevailing resources and capacities.
Conclusions: This article concludes that the findings can be used as different entry points for developing and strengthening M&E capacities in the countries studied.
Keywords: Africa; evaluation systems; evidence-informed policy making; monitoring and evaluation; national evaluation systems; evaluation stakeholders.
Introduction and purpose
The rapid growth of National Evaluation Systems (NESs) in Africa has created a range of questions with respect to how these systems are best positioned within public sector bureaucracies; how they align to existing monitoring and evaluation (M&E) related functions; what policy environment underpins them; and how their effectiveness will be assessed. A NES is usually formalised by a national evaluation policy (NEP). National evaluation systems have developed in advanced economies since the 1980s, in Latin America since the 1990s, and in Africa from 2007, notably in Benin and Uganda, and in South Africa from 2011, whilst Ghana, Kenya, Rwanda and Zambia have M&E systems. These NESs take different forms, and the levels of institutionalisation vary depending on the context within which the systems were created. Much of the literature, as well as national documentation, fluctuates between conflating M&E systems and NESs, distinguishing between them, or labelling something an NES where it is only an M&E system.
In 2017, the Centre for Learning on Evaluation and Results-Anglophone Africa (CLEAR-AA) began tracking and codifying the development of M&E systems in 11 countries, namely Benin, Botswana, Cote d’Ivoire, Ghana, Kenya, Niger, South Africa, Tanzania, Uganda, Zambia and Zimbabwe. The four key dimensions tracked were government-wide monitoring and evaluation systems; parliamentary monitoring and evaluation systems; evaluation as an emerging profession; and the enabling environment. In 2018, a second iteration of tracking the M&E systems in Anglophone Africa was undertaken, focusing on the six Anglophone countries that are either Twende Mbele members or countries in which CLEAR-AA carried out diagnostic studies in 2018, namely Ghana, Kenya, Rwanda, South Africa, Uganda and Zambia. The studies are not formally published, but are available on CLEAR-AA’s website. Twende Mbele is a partnership of countries that learn from each other about how effective M&E systems at all levels of government can strengthen government performance.
This article seeks to document developments in NESs in the selected countries of Ghana, Kenya, Rwanda, South Africa, Uganda and Zambia, to provide an understanding of M&E systems and to inform planning for capacity-development interventions in these countries, as well as to contribute to the public debate on the development of national M&E systems, the institutionalisation of evaluation and the use of M&E evidence in the larger African context. The empirical findings suggest that M&E practice is maturing across the studied countries, adding to the growing body of literature on M&E systems and policies in Africa.
Conceptualising national evaluation systems
Scholars and practitioners have defined a NES in a wide range of ways. As a general trend, the focus has been more on institutional structures than on performance (Leeuw & Furubo 2008; Porter & Goldman 2013; Rugg 2016). The definitions of these systems vary significantly, based on the positionality of the scholar or study being undertaken, the purpose of the evaluation system, and the global governance process framing discussions (Vallejo 2017). A NES refers to a system that defines the commissioning, undertaking and use of evaluations, and provides guidance around institutional arrangements. National evaluation systems guide how evaluations are selected, implemented and used (Goldman et al. 2018). Furubo, Rist and Sandahl (2002) and Lazaro (2015) argue that an evaluation system exists when:
Evaluation is a regular part of the life cycle of public policies and programmes, it is conducted in a methodologically rigorous and systematic manner in which its results are used by political decision-makers and managers, and those results are also made available to the public.
Lazaro (2015:16) further points out that intertwined in NESs are values, practices and institutions associated with a particular political and administrative system. In other words, evaluation systems are not separate from the administrative systems and organisational cultures that host them, whether in government, civil society organisations (CSOs) or international development agencies.
The literature on NESs globally is dominated by scholars from the Global North, although an African body of research on evaluation systems is emerging (Blaser Mapitsa, Tirivanhu & Pophiwa 2019). Much of the existing literature on M&E and evaluation systems is based on European, North American and, more recently, Asian and Latin American theory and practice, with comparatively little written about African M&E systems and NESs. For example, a study by the Organisation for Economic Cooperation and Development (OECD) (2016) explores evaluation systems in development cooperation, focusing on 37 members of the Development Assistance Committee (DAC) Network on Development Evaluation (EvalNet) and nine multilateral organisations, including six development banks, the European Commission, the International Monetary Fund (IMF) and the United Nations Development Programme (UNDP). Another example is Rosenstein’s (2015) global mapping report on the status of national evaluation policies in South Asia. This creates a challenge in finding frameworks within the contemporary literature that relate usefully to African NESs and M&E systems.
This article adopts four key dimensions which provide an in-depth insight into analysing and understanding NESs in the six countries. These are based around stakeholders who play anchor roles in these systems. They are outlined in the ensuing conceptual framework.
A conceptual framework for understanding the national evaluation system
The four key dimensions of an NES are a government-wide monitoring and evaluation system; the functioning of parliament; the professionalisation of evaluation; and the existence of an enabling environment. Each dimension is described against its own sub-themes.
The role each stakeholder plays in the NES, and how this is measured, is summarised in Figure 1. The effectiveness of government-wide monitoring and evaluation systems is measured through seven variables, including evaluation use, M&E capacity across ministries and CSO engagement. The functioning of parliament is critical to democratising M&E processes and supporting the use of M&E findings; this dimension is measured against three variables. M&E as a practice is growing on the continent, and there are questions around how to standardise and professionalise the M&E profession; this dimension is measured with seven variables. Finally, M&E systems require an enabling environment to function effectively and to support evidence-informed decision-making and policymaking; this is analysed using four variables.
FIGURE 1: Conceptual framework for understanding the stakeholders of national evaluation systems.
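To make the framework concrete, the sketch below shows one way the four dimensions and their variable counts could be encoded for a country assessment. This is a minimal illustration, not the study's actual instrument: the dimension names and variable counts come from the text above, but since only three of the government-wide variables are named in the article, the remaining variable names are left as placeholders rather than invented.

```python
# A minimal sketch (not the study's instrument) encoding the four dimensions.
# Variable counts follow the text; unnamed variables are left as placeholders.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    variables: list[str]  # indicators assessed under this dimension

framework = [
    Dimension("Government-wide M&E system",          # seven variables in total
              ["evaluation use", "M&E capacity across ministries",
               "CSO engagement"] + ["<unnamed variable>"] * 4),
    Dimension("Functioning of parliament",           # three variables
              ["<unnamed variable>"] * 3),
    Dimension("Professionalisation of evaluation",   # seven variables
              ["<unnamed variable>"] * 7),
    Dimension("Enabling environment",                # four variables (IIAG-based)
              ["<unnamed variable>"] * 4),
]

for d in framework:
    print(f"{d.name}: {len(d.variables)} variables")
```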
Methodology
Data were drawn from both primary and secondary sources. Primary data sources included a structured self-administered questionnaire completed in 2018 by 48 key informants (eight in each country) who were purposively sampled because of their experience and knowledge of their country’s M&E and evaluation systems. The key informants were M&E experts, government officials, representatives from voluntary organisations for professional evaluators (VOPEs) and CSOs, and parliamentarians in the six countries. The questionnaire was organised around the dimensions of government-wide M&E systems, parliamentary capacity and systems, professionalisation of evaluation, M&E capacity building and the enabling environment. To ensure the reliability of the information gathered through the self-administered questionnaire, another key informant within the relevant central government agencies or ministries was appointed to validate the information. Secondary data sources included five situational analyses conducted by the CLEAR-AA team in 2018 on the evaluation systems in Ghana, Kenya, Rwanda, Uganda and Zambia. No situational analysis was conducted in South Africa, as a substantial amount of literature on the country’s NES has already been published and was reviewed instead. Other secondary data sources were the 2018 Ibrahim Index of African Governance (IIAG), used to measure the enabling environment dimension, and research performed by Twende Mbele on M&E culture in South Africa and Uganda. Data for South Africa were also drawn from the Management Performance Assessment Tool (MPAT), which is administered by the Department of Planning, Monitoring and Evaluation (DPME) across the entire public service.
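As an illustration of how such perception data can be summarised, the hypothetical sketch below aggregates questionnaire ratings by country and dimension with pandas. The column names, rating scale and values are assumptions for demonstration only; they are not the study's actual data.

```python
# Hypothetical sketch: aggregating key-informant ratings (1-5) per country and
# dimension. The study had eight informants per country; two each are shown here.
import pandas as pd

responses = pd.DataFrame({
    "country":   ["Ghana", "Ghana", "Kenya", "Kenya"],
    "dimension": ["govt_wide_me", "parliament", "govt_wide_me", "parliament"],
    "rating":    [4, 3, 3, 2],   # illustrative perception scores, not real data
})

# Mean perception score per country and dimension, one column per dimension
summary = (responses
           .groupby(["country", "dimension"])["rating"]
           .mean()
           .unstack())
print(summary)
```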
The national evaluation and M&E systems in the six countries are at different stages and are coordinated by a diverse range of institutions, mainly ministries or a department within a ministry. None has a single entity with an exhaustive set of data on the performance of the M&E system/NES or the wider evaluation ecosystem. Even where there are useful indicators to assess the functioning of an M&E system/NES, or developments in the evaluation ecosystem, it is not always possible to obtain the data; this article is therefore limited to the indicators for which data are currently available. Some of the variables are measured using the perceptions of key informants who responded to the self-administered questionnaire or participated in the situational analyses. Moreover, countries define and constitute their systems differently, and there is no single development path or ideal prototype against which they can be compared. In reading this article, it is therefore not always useful to compare countries to each other, but rather to see how each country’s M&E system is developing and to explore the opportunities that exist in each. Despite these challenges, some indicators are comparable, and this article offers a cross-country examination of M&E developments without ranking countries, given the differences in how M&E and evaluation systems are constituted.
Findings
Government-wide monitoring and evaluation systems
Governments are progressively investing in M&E as a practice and in its infrastructure. This study reveals that the surveyed governments have M&E policies, or other policies that guide M&E practice across government and state agencies. In the case of South Africa, a policy framework was published to guide the overarching government-wide M&E system (Presidency 2007). Policies are mechanisms to facilitate M&E practices within the surveyed countries. In practice, the results indicate that governments with M&E policies, for example, South Africa, Uganda and Zambia, tend to focus on monitoring and less on evaluation. The findings concur with Holvoet and Renard (2007), who argued that the emphasis placed on monitoring is quite unbalanced. However, there is evidence suggesting that these M&E policies are influencing the undertaking of evaluations. Effective M&E and evaluation systems depend on M&E and evaluation policies to frame the purpose, responsibilities and institutional arrangements of the public sector evaluation function in a particular country (Bamberger, Segone & Reddy 2015). Table 1 shows the number of evaluations conducted since the establishment of the NESs in Benin, South Africa and Uganda. The policies and systems of South Africa and Benin have themselves been evaluated.
TABLE 1: Number of evaluations influenced by an evaluation system.
The South African National Evaluation Policy Framework is one policy amongst those surveyed that defines the evaluation practice across the South African public sector. Whilst a distinct evaluation policy can play a role in building a system, M&E can also be supported through other country policies, as long as they provide sufficient support for the institutionalisation, systematisation and use of M&E evidence. Despite having M&E policies in place and creating M&E units in ministries, departments and agencies, financing M&E activities and sourcing adequately skilled technical experts remain major challenges. Notwithstanding efforts made by governments, the infrastructure is geared primarily towards producing monitoring data for performance management and accountability. This over-emphasis on monitoring for accountability has led to a culture of malicious compliance (CLEAR-AA 2012), a situation where reporting is undertaken not as a means of measuring progress, but merely to adhere to reporting requirements. Despite this, the growing institutional architecture in the countries studied is laying the foundation for the production and use of evaluation within government.
In practice, evaluation findings are not optimally used, as a result of a lack of capacity within government, a weak enabling environment or evaluation culture, and a lack of stewardship to use evaluation findings. However, governments have taken steps to improve the use of evaluation results by involving decision-makers and policymakers (end-users) when commissioning evaluations, establishing steering committees for evaluations (CLEAR-AA 2012), following up action plans to implement recommendations and conducting related monitoring. These steps have made progress in systematising the use of evaluations, but it is still too early to determine whether they are addressing issues of organisational culture within government, and the political barriers that may exist to implementing evaluations more broadly.
The role of Parliament
M&E provides valuable evidence for parliaments to hold the executive accountable. Without quality evidence, parliaments cannot hold the government to account (Draman et al. 2017). Figure 2 indicates the amount of time spent by parliamentarians on oversight.
FIGURE 2: Percentage of time spent on oversight by Members of Parliament.
Figure 2 shows that Members of Parliament (MPs) in Uganda and South Africa spend relatively more time on oversight (50% of their time) than MPs in Kenya and Rwanda, who each spend 40% of their time on oversight of the executive. Zambian MPs spend the least amount of time (30%) on oversight activities, relative to MPs in the other four countries. These figures were initially self-reported by MPs and then confirmed against records held by the heads of parliamentary administrative services. Parliamentary oversight of the executive arm of the state is an important element of accountable government. Figures 1 and 2 suggest that parliamentarians are actively using monitoring evidence for oversight of the executive.
In most countries, the executive (in the form of ministries, departments and agencies) is mandated to report on progress made regarding annual ministry work plans to respective Parliamentary portfolio committees as a means of accounting for the resources allocated to ministries. To date, evidence-informed policymaking (EIPM) literature largely focuses on the role of evidence in ministries, departments and agencies (Cislowski & Purwadi 2011; Shaxson 2014; Wills et al. 2016). However, there is now an emerging interest in the complex information landscape of parliaments, and how this links to evidence use and contestation in national democratic landscapes (Broadbent 2012).
Parliamentarians have often not been considered for evidence-use training programmes (Draman et al. 2017), yet parliaments are at the forefront of using evidence. The data collected indicates that MPs come from a wide range of disciplinary backgrounds, and therefore have diverse approaches to evidence use. Furthermore, many MPs only serve one term, depending on the electoral system and turnover rates. Whilst they can still be an important stakeholder in strengthening their respective countries’ M&E systems, it is also important to balance working with individual MPs and strengthening parliamentary institutions in the countries studied (Blaser Mapitsa, Ali & Khumalo 2020). Another mechanism that seeks to improve evidence-use in parliaments is the African Parliamentarians’ Network on Development Evaluation (APNODE). The APNODE was established in March 2014 at the 7th African Evaluation Association (AfrEA) Conference in Yaoundé, Cameroon. The founding mandate of APNODE is to strengthen the capacity of African parliamentarians in the functions of quality oversight, policymaking and national decision-making by promoting awareness, appreciation, demand, and utilisation of evaluations in the day-to-day functions of Parliament (Africa Evaluation Association 2017). Goldman et al. (2018) assert that APNODE has a potentially important role in stimulating the demand for use of M&E evidence in African parliaments.
Parliamentary researchers are another valuable resource for synthesising evidence from both research and evaluations, and for providing parliamentarians with access to research conducted by other agencies in order to inform parliamentary debates and oversight work. Most parliaments in the studied countries are reported to have research capacity, and although M&E may not be part of researchers’ job descriptions, M&E evidence is an important resource for supporting parliament in evidence use.
Professionalisation of evaluation
The field of M&E has been influenced by the adoption of Results-Based Management (RBM) as a performance-oriented form of governance by the OECD in the 1990s. The prevailing international political economy of the 1990s was characterised by public sector budget deficits, structural problems, growing competitiveness and globalisation, lack of public confidence in government, growing demands for better and more responsive services, and for more accountability. This illustrates the governance paradigm shift towards RBM by the OECD member nations, induced by the above-mentioned economic, social and political pressures of the last decade of the 20th century (Binnendijk 2002:3).
The shift towards RBM essentially refers to the adoption of a governance principle of performance measurement, which is the process of objectively measuring how well a ministry, department or agency is meeting its stated goals or objectives. RBM typically involves the following performance management processes: articulating and agreeing on objectives; selecting indicators and setting targets; monitoring performance (collecting data on programme/policy results); and reporting those results in relation to the pre-determined programme/policy targets. Evaluation is another aspect of RBM: its aim is to assess the cause-effect interface of programmes, thereby explaining how and why a programme/policy has succeeded or failed in achieving its medium- to long-term objectives (United Nations Development Group 2011:3). The M&E field, particularly in Africa, is often situated within the origins of the OECD’s adoption of RBM as a results-oriented form of governance to achieve development objectives more efficiently.
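The RBM cycle described above can be illustrated with a short sketch: an indicator is given a target, monitoring supplies actual values, and reporting compares the two. The programme indicator and figures below are invented for illustration; they do not come from the study.

```python
# Illustrative sketch of the RBM cycle: set a target, monitor, report against it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    name: str
    target: float
    actual: Optional[float] = None  # populated by monitoring during implementation

    def report(self) -> str:
        if self.actual is None:
            return f"{self.name}: no monitoring data yet (target {self.target})"
        status = "target met" if self.actual >= self.target else "target not met"
        return f"{self.name}: {self.actual} against target {self.target} -> {status}"

# Invented example indicator, not one drawn from the study
water = Indicator("Households with access to clean water (%)", target=80.0)
water.actual = 72.5  # monitoring data collected mid-programme
print(water.report())  # -> target not met
```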
Monitoring and evaluation has grown into ‘an active discipline and practice in South Africa and the African region’ (Levin 2017:136). However, there is growing recognition that evaluation has been overshadowed by monitoring (and performance management), and that a stronger focus on evaluation is needed, leading to the separation of these two related but different functions. There is growing debate in Africa around the professionalisation of evaluation and the establishment of various forms of accreditation for professional evaluators. Levin (2017) argues that the consequences of professionalisation might be the differentiation and creation of occupational classes of evaluators and a new division between ‘professionals’ and ‘amateurs’. Monitoring and evaluation is a growing profession across the six countries studied, and this is probably indicative of shifts in the rest of Anglophone Africa. All the VOPEs other than those of Uganda and South Africa (for example, those of Ghana, Zambia and Kenya) reported growth in membership, and a new VOPE was established in Rwanda in 2016 with 70 members (see Figure 3). This may demonstrate the variation in capacity of VOPEs, but also the general momentum around evaluation within the countries studied. It is interesting to note that some VOPEs cover M&E broadly (e.g. the South African Monitoring and Evaluation Association), whilst others are evaluation-only (e.g. the Tanzania Evaluation Association and the Zimbabwe Evaluation Association). It should also be noted that these associations are open to anyone interested, so the proportion of members who function as evaluation professionals might be quite low.
FIGURE 3: Voluntary organisations for professional evaluators (VOPEs) and members registered.
Government and donor agencies are drivers of evaluation demand and important players in expanding evaluations in all the studied countries. In most countries, civil society and non-governmental organisations are funded by international donors and have a more established evaluation practice than other actors (Blaser Mapitsa & Chirau 2019:39), and they have stimulated the development of M&E practice in the absence of a national M&E government department (Goldman & Porter 2013). One of the primary reasons why the civil society sector has a more established evaluation practice is donors’ accountability requirements, whereby funding depends on civil society’s ability to demonstrate results to their respective donors (Regional Centers for Learning on Evaluation and Results 2013:10). The influence of donors on M&E practice has been prevalent even in countries where development aid and donor influence are not significantly pronounced in Gross Domestic Product (GDP) terms. South Africa is an exception to the general trend of development aid accounting for a significant portion of GDP; even in such a setting, however, donors have played the role of training many evaluators (Goldman & Porter 2013:2).
Donors have also played an integral role in the formation and/or strengthening of VOPEs. For instance, the EvalPartners Initiative was jointly established in 2012 by the United Nations Children’s Fund (UNICEF) and the International Organization for Cooperation in Evaluation (IOCE). The founding mandate of EvalPartners is to strengthen the capacities of VOPEs to influence policymakers, public opinion and other key stakeholders, so that public policies are based on evidence and incorporate considerations of equity and effectiveness. Through its peer-to-peer (P2P) programme, EvalPartners operationalises this mandate by encouraging VOPEs to work together to strengthen institutional and evaluative capacities, allowing VOPEs to improve the NESs of their respective countries (Rugh & Segone 2013:9–10). Thus, donors are integral actors that positively influence the enabling environments that foster an evaluative culture within countries by supporting national and regional VOPEs.
Whilst it was difficult to track M&E academic offerings, given that M&E is an emerging, interdisciplinary field offered in different schools, a number of the countries studied, including Kenya, South Africa, Uganda and Zambia, do offer standalone evaluation or M&E qualifications at higher education institutions (HEIs) and other institutions. The findings of this study are similar to those of Mouton, Wildschut and Leslie (2018:11), who argue that M&E is not necessarily prioritised as a separate discipline at most universities. As a result, there is insufficient recognition of M&E and current offerings remain deficient.
All the VOPEs and selected universities were reported to be participating in the M&E systems of their respective countries. Universities are a key stakeholder in the NES, as both training providers and evaluators (Genesis Analytics 2017). However, the level and effectiveness of participation in the system vary from one country to another. The VOPEs play quite a significant role in countries where M&E systems are better institutionalised and systematised in the government sector. For example, the South African Monitoring and Evaluation Association and the Uganda Evaluation Association play significant roles in the institutionalised M&E systems of South Africa and Uganda. According to Morkel and Mangwiro (2019), the VOPEs are leading the M&E professionalisation debate in Africa, to make the M&E profession more regulated and to improve the quality of practice. One view holds that the professionalisation of M&E practices will contribute to the establishment of a recognised ‘evaluation profession’, through a professional body and affiliated professionals (Levin 2017). The professionalisation impasse has long existed; the transition to professionalisation may therefore present a shift in the way M&E is practised in Africa. According to Abrahams (2015), there is no consensus on what skills are required of professional evaluators, as the discipline is still emerging. However, there has been progress in identifying evaluation competencies in Canada and South Africa, and through IDEAS since 2015.
Monitoring and evaluation capacity development has been on the continent’s agenda for some time now, in both government and non-governmental organisations, and there have been extensive efforts to develop and strengthen M&E capacity. Research and practice show that there is a dearth of common strategies or models for M&E capacity development in Africa, resulting in fragmented and piecemeal approaches to capacity development (Basheka 2016:115; CLEAR-AA & Twende Mbele 2016:25; Tarsilla 2014:8). Morkel and Mangwiro (2019) argue that this dearth of strategies is a result of the absence of agreed standards for M&E education provision, and of uncertainties around how to build capacity for evidence use, or evaluation capacity development in general (Denney, Mallett & Benson 2017:1; Preskill 2008:118; Stewart 2015:555).
The supply of M&E capacity development is currently led by higher education institutions. There is large demand for admission to M&E courses in universities across the continent (Tirivanhu et al. 2018), particularly because institutions of higher education offer accredited qualifications, whereas training provided by other institutions is not recognised by higher education quality assurance bodies. It was reported that where qualifications are offered, they are pitched at different levels (certificate, diploma, postgraduate diploma, masters and doctor of philosophy). Many doctoral and master’s programmes in Africa and across the world are about evaluation (not M&E), which reflects the need to improve evaluation practice, but also to improve evaluation teaching and research. The findings of this study indicate that South Africa and Uganda offer the highest number of qualifications, a result of the formalisation/institutionalisation of evaluation by government, which has increased demand for trained M&E personnel. Results indicate that, besides the qualifications offered by institutions of higher learning, several other entities offer M&E courses.
In countries where supply is limited at certificate level, M&E is generally offered as part of development and management courses. When universities offer accredited training, the focus is often disproportionately on monitoring, with little focus on evaluation (Tirivanhu et al. 2018). Training is also provided in many countries by government training agencies and VOPEs, but this training is not normally credit-bearing. In general, there was widespread agreement amongst the interviewees that trainings are disproportionately focused on monitoring and give insufficient attention to evaluation skills and competencies. This was attributed primarily to the compliance focus of the public sectors in the region. A majority of the six countries are also recipients of international aid, which is conditional on stringent donor reporting requirements; this further entrenches the disproportionate focus on the accountability, rather than the learning, function of M&E.
Existence of an enabling environment
It is presumed that M&E systems support democratic governance, enhance accountability and result in policy and programme improvement (Hansson 2006; Mark & Henry 2004; Pollitt 2006; Schwandt 2002; Weiss 1999). There is a dearth of literature on how M&E influences performance in different models of governance (Hanberger 2012), particularly in Africa. However, the results collected indicate that governance in Africa has steadily improved over the past three decades. There are exceptions, where some countries are experiencing violations of human rights, or corruption amongst the political elite without prosecution. The findings point to the fact that the rule of law, transparency and accountability are key pillars of better governance and enabling conditions for M&E to be truly valuable and used by governments to increase the efficiency and effectiveness of policies and programmes. The innate power of M&E is that of making value judgements with profound implications, which decision-makers, policymakers and programme implementers should not ignore. Figure 4 illustrates the state of safety and the rule of law in the six countries under study, the rule of law being a quintessential factor in creating a favourable enabling environment for M&E to be entrenched within governance operations. For the purposes of this study, data from the Mo Ibrahim index were used for the enabling environment dimension.
FIGURE 4: Measure of safety and the rule of law in the country.
Of the six countries accounted for in Figure 4, only Kenya and Uganda demonstrated incremental improvements regarding safety and the rule of law from 2008 to 2017. Ghana, Rwanda, South Africa and Zambia all recorded a decline in safety and the rule of law. As alluded to above, the rule of law creates a favourable environment that enables M&E to be assimilated into governance processes and procedures. The improvement of the rule of law certainly bodes well for the M&E systems of Kenya and Uganda. Conversely, the incremental decline of the rule of law in Ghana, South Africa, Rwanda and Zambia could have a negative impact on the functionality and effectiveness of the M&E systems of these four countries.
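The trend classification behind this reading of Figure 4 is simple: compare each country's IIAG 'Safety & Rule of Law' score in 2017 with its 2008 score and label the change as an improvement or a decline. The sketch below demonstrates the computation with placeholder scores, not the published IIAG values.

```python
# Sketch of the 2008-2017 trend comparison; scores are placeholders, not IIAG data.
import pandas as pd

scores = pd.DataFrame(
    {"score_2008": [68.0, 55.0, 57.0],
     "score_2017": [66.0, 58.5, 59.0]},
    index=["Ghana", "Kenya", "Uganda"],
)
scores["change"] = scores["score_2017"] - scores["score_2008"]
scores["trend"] = scores["change"].map(lambda d: "improved" if d > 0 else "declined")
print(scores[["change", "trend"]])
```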
Figure 5 measures the level of public participation and adherence to human rights in the six countries under study.
FIGURE 5: Measure of participation and human rights in the country.
Figure 5 shows changes in the level of public participation and observation of human rights in the six countries. From 2008 to 2017, there was an increase in the level of public participation and observation of human rights in Kenya, Rwanda and South Africa; this good governance measure tends to create a favourable environment for M&E to become entrenched in a country’s governance and management processes. In the same period, however, Ghana, Uganda and Zambia recorded a decline in the level of public participation and observation of human rights, an administrative environment that has the potential to constrain the functionality and effectiveness of these countries’ M&E systems.
The denial of such rights has consequences. Evaluation findings may not be communicated freely in countries that are progressively becoming authoritarian, and monitoring systems of key drivers of public governance performance may become politicised. Therefore, transparency and accountability by public office-bearers can be eroded, defeating the value of M&E, particularly as a mechanism for learning. Hanberger (2012) also argues that actors and organisations use evaluative knowledge for learning, accountability and legitimisation. For example, evaluative evidence can be used to legitimise a policy that is being implemented.
Keane (2008) highlights that monitoring and evaluation should serve democratic governance, and not merely the needs of the political and administrative elite. There is political and administrative will to entrench and strengthen government M&E systems in the countries studied. The ministries that provide oversight of M&E across the public service can have a variety of institutional locations. Some countries locate this function within the Presidency, as in Ghana and South Africa, or in the Office of the Prime Minister, as in Uganda. Such high-profile custodianship of government M&E systems is indicative of political and administrative will to entrench M&E practices across government departments.
Discussion
M&E practice is growing in all six countries and most have approved policies guiding M&E. In some, there is a standalone M&E policy, whilst in others, M&E is embedded in cross-cutting policies, such as public finance management legislation or public service management legislation. Evidence from country studies suggests that there is no single best practice on how a country institutionalises M&E, but there is an emergent practice that is responsive to country context. As Lazaro (2015) argues, M&E systems are not separate from the political and administrative systems within which they exist.
The different M&E systems in the six countries are shaped by state architecture, political and administrative priorities, government capacity, available resources and other enabling environment factors, such as the rule of law, as shown in Figure 4. What can be observed is that levels of formalisation are linked to a relative increase in institutions supplying M&E qualifications and services (such as HEIs, research centres, private sector evaluators, VOPEs and donors that finance M&E activities), as well as to the demand for evaluation evidence in government. For example, in South Africa, when government formalised evaluations with concomitant financial investment to build the M&E infrastructure, this significantly drove growth of the M&E sector, and specifically of evaluation. This highlights the importance of governments shaping evaluation practice in a country by investing resources to build the needed M&E infrastructure, such as M&E units with evaluation functions, policies, guidelines and tools. This, in turn, affects the demand for M&E training and the kinds of investments universities and training institutions make in M&E curriculum development and delivery. Other stakeholders that seem to be influencing M&E in the countries studied are development partners and donors; when thinking about the Made in Africa agenda, we therefore need to be conscious of the prominent role of these institutions in M&E on the continent, and of the implications this has for the contextual relevance and use of evaluations.
The findings from the country diagnostics also concur with Lazaro’s (2015) and Rosenstein’s (2015) claims that an evaluation culture, or a culture that supports the practice of evaluation, often precedes the formalisation of evaluation practice. Indeed, evaluation culture is often more important than the technical elements of a system. As Lazaro (2015) asserts, the successful development of evaluation does not so much require a technical or institutional change; rather, and above all, it requires a change in organisational culture and values, which also includes changes to the political climate. Evaluators and institutions supporting evaluation systems, or investing in evaluation capacity-building, should therefore avoid over-emphasising the establishment of the technical and institutional elements of an NES, at the risk of countries merely mimicking other countries’ systems or approaches, which may not be relevant to their context. In other words, equal investment is needed to build political will for rigorous reflection on what best serves a country’s development objectives, and conviction regarding the value of evaluation in development.
Conclusion
Evaluators, researchers, practitioners and government leadership in different countries are coming up with new methods and approaches for assessing programme performance in ways that do not currently fit existing theories about evaluation. Further research is required to understand M&E practice, particularly around the emerging evaluative tools that respond to contexts of constrained financial resources. Related to the issue of constrained public finances is the need for M&E practice to demonstrate policy and programme results (outputs, outcomes and impact) within this context of limited financial resources.
There is currently not enough knowledge regarding the benefit of the institutionalisation of government evaluations, given the limited but growing evidence on the value of an NES as a whole. M&E training within the continent is still in short supply. Linked to this is how to make M&E training both relevant and responsive to the country context in a changing world. This study offers some interesting insights in these areas which can be used to start much-needed dialogues about how to close existing knowledge gaps, improve M&E capacity building offerings and increase the use of M&E evidence in policy and programme implementation.
Acknowledgements
Competing interests
The authors have declared that they have no competing interests.
Authors’ contributions
Over the course of data collection, analysis and drafting of this article, the first three co-authors (T.C., C.B.M. and M.A.) have all led the portfolio of work at CLEAR-AA to strengthen national evaluation systems, giving each author a role in conceptual leadership. B.M. and A.D. contributed both to the programmatic work that generated data for the article and to drafting. All authors contributed equally to the writing and revisions of the article.
Ethical consideration
This article followed all ethical standards for research without direct contact with human or animal subjects.
Funding information
Funding for this work came from the Centre for Learning on Evaluation and Results, Wits University.
Data availability statement
Data sharing is not applicable to this study.
Disclaimer
The views and opinions expressed in this article are those of the authors alone and do not necessarily reflect the official policy or position of any affiliated organisation of the authors.
References
Abrahams, M.A., 2015, ‘A review of the growth of monitoring and evaluation in South Africa: Monitoring and evaluation as a profession, an industry and a governance tool’, African Evaluation Journal 3(1), 1–8. https://doi.org/10.4102/aej.v3i1.142
Africa Evaluation Association, 2017, ‘Parliamentarians’ corner’, viewed 08 September 2020, from https://afrea.org/parliamentarians-corner.
Bamberger, M., Segone, M. & Reddy, S., 2015, ‘National Evaluation Policies for sustainable and equitable development: How to integrate gender equality and social equity in national evaluation policies and systems’, viewed n.d., from https://www.evalpartners.org/sites/default/files/library/selected/NationalEvaluationPolicies_web-single-color.pdf
Basheka, B.C., Lubega, J.T. & Baguma, R., 2016, ‘Blended-learning approaches and the teaching of monitoring and evaluation programmes in African universities: Unmasking the UTAMU approach’, African Journal of Public Affairs 9(4), 71–88.
Binnendijk, A., 2002, Results-based management in the development co-operation agencies: A review of experience, DAC Working Party on Aid Evaluation, Paris.
Blaser Mapitsa, C., Ali, A.J. & Khumalo, L.S., 2020, ‘From evidence to values-based decision making in African parliaments’, Evaluation Journal of Australasia 20(4), 1–18. http://doi.org/10.1177/1035719X20918370
Blaser Mapitsa, C. & Chirau, T.J., 2019, ‘Institutionalising the evaluation function: A South African study of impartiality, use and cost’, Evaluation and Program Planning 75, 38–42. https://doi.org/10.1016/j.evalprogplan.2019.04.005
Blaser Mapitsa, C., Tirivanhu, P. & Pophiwa, N. (eds.), 2019, Evaluation landscape in Africa, African Sun Media, Stellenbosch.
Broadbent, E., 2012, Politics of research-based evidence in African policy debates, ODI, London, viewed 18 August 2017, from www.odi.org/publications/8757-research-evidence-policy-africa-ebpdn-broadbent
Centre for Learning on Evaluation and Results-Anglophone Africa (CLEAR-AA), 2012, African monitoring and evaluation, Graduate School of Public and Development Management, University of the Witwatersrand, Johannesburg.
Cislowski, H. & Purwadi, A., 2011, Study of the role of Indonesian government research units (‘Balitbang’) in bridging research and development policy, AusAID, Jakarta, viewed 18 August 2017, from https://dfat.gov.au/about-us/publications/Documents/indo-ks3-balitbang.pdf
CLEAR-AA & Twende Mbele, 2016, ‘Strengthening monitoring and evaluation education and training in anglophone Africa: A regional workshop’, Unpublished scoping report, Nairobi, Kenya.
Denney, L., Mallett, R. & Benson, M.S., 2017, Service delivery and state capacity: Findings from the secure livelihoods research, Consortium, London.
Draman, R., Titriku, A., Lampo, I., Hyater, E. & Holden, K., 2017, Evidence in African parliaments, INASP, Oxford.
Furubo, J., Rist, R. & Sandahl, R., 2002, International atlas of evaluation, Transaction Publishers, New Brunswick, NJ.
Genesis Analytics, 2017, Evaluation of the national evaluation system – International benchmarking and literature review, Genesis Analytics, Johannesburg.
Goldman, I., Byamugisha, A., Gounou, A., Smith, L.R., Ntakumba, S., Lubanga, T. et al., 2018, ‘The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa’, African Evaluation Journal 6(1), a253. https://doi.org/10.4102/aej.v6i1.253
Goldman, I. & Porter, S., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1). https://doi.org/10.4102/aej.v1i1.25
Hanberger, A., 2012, ‘Framework for exploring the interplay of governance and evaluation’, Scandinavian Journal of Public Administration 16(3), 9–28.
Hansson, F., 2006, ‘Organizational use of evaluations: Governance and control in research evaluation’, Evaluation 12(2), 159–178. https://doi.org/10.1177/1356389006066970
Holvoet, N. & Renard, R., 2007, ‘Monitoring and evaluation under the PRSP: Solid rock or quicksand?’, Evaluation and Program Planning 30(1), 66–81.
Keane, J., 2008, ‘Monitory democracy’, Paper prepared for the ESRC Seminar Series, ‘Emergent Publics’, The Open University, Milton Keynes, 13–14 March.
Lazaro, B., 2015, Comparative study on the institutionalisation of evaluation in Europe and Latin America, Programme for Social Cohesion in Latin America, Madrid.
Leeuw, F.L. & Furubo, J.E., 2008, ‘Evaluation systems: What are they and why study them?’, Evaluation 14(2), 157–169. https://doi.org/10.1177/1356389007087537
Levin, R.M., 2017, ‘Professionalising monitoring and evaluation for improved performance and integrity: Opportunities and unintended consequences’, Journal of Public Administration 52(1), 136–149.
Mark, M.M. & Henry, G.T., 2004, ‘The mechanisms and outcomes of evaluation influence’, Evaluation 10(1), 35–57. https://doi.org/10.1177/1356389004042326
Morkel, C. & Mangwiro, N., 2019, ‘Implications of evaluation trends for capacity development’, in C. Mapitsa-Blaser, P. Tirivanhu & N. Pophiwa (eds.), Evaluation landscape in Africa- context, methods and capacity, pp. 192–216, African Sun MeDIA, Stellenbosch.
Mouton, J., Wildschut, L. & Leslie, M., 2018, Monitoring and evaluation capacity: A landscape analysis, viewed 25 November 2020, from https://www.zenexfoundation.org.za/wp-content/uploads/2020/08/Final_Report_on_Landscape_Study_29_August_2018_LD.pdf
Organisation for Economic Co-operation and Development (OECD), 2016, Evaluation Systems in Development Co-operation: 2016 Review, OECD Publishing, Paris. https://doi.org/10.1787/9789264262065-en
Pollitt, C., 2006, ‘Performance information for democracy: The missing link?’, Evaluation 12(1), 33–55. https://doi.org/10.1177/1356389006064191
Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), 9. https://doi.org/10.4102/aej.v1i1.25
Preskill, H., 2008, ‘Evaluation’s second act: A spotlight on learning’, American Journal of Evaluation 29(2), 127–138. https://doi.org/10.1177/1098214008316896
Presidency, 2007, ‘Policy framework for the government-wide monitoring and evaluation system’, 1–28, viewed April 2020, from http://www.dpme.gov.za
Regional Centers for Learning on Evaluation and Results, 2013, ‘Demand and supply: Monitoring, evaluation, and performance management information and services’, in Anglophone sub-Saharan Africa: A synthesis of nine studies, The CLEAR Initiative, Washington, DC.
Rosenstein, B., 2015, Status of national evaluation policies: Global mapping report, Global Parliamentarians Forum, EvalPartners.
Rugg, D., 2016, ‘The role of evaluation at the UN and in the new sustainable development goals: Towards the future we want’, Global Policy 7(3), 426–430. https://doi.org/10.1111/1758-5899.12346
Rugh, J. & Segone, M., 2013, Voluntary organizations for professional evaluation (VOPEs): Learning from Africa, Americas, Asia, Australasia, Europe and Middle East, EvalPartners, New York, NY.
Schwandt, T.A., 2002, Evaluation practice reconsidered, Peter Lang, New York, NY.
Shaxson, L., 2014, Investing in evidence: Lessons from the UK Department for Environment, Food and Rural Affairs, ODI, London.
Stewart, R., 2015, ‘A theory of change for capacity building for the use of research evidence by decision makers in southern Africa’, Evidence & Policy 11(4), 547–557. https://doi.org/10.1332/174426414X1417545274793
Tarsilla, M., 2014, ‘Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward’, African Evaluation Journal 2(1), 1–13. https://doi.org/10.4102/aej.v2i1.89
Tirivanhu, P., Robertson, H., Waller, C. & Chirau, T., 2018, ‘Assessing evaluation education in African tertiary education institutions: Opportunities and reflections’, South African Journal of Higher Education 32(4), 229–244. https://doi.org/10.20853/32-4-2527
United Nations Development Group, 2011, Results-based management handbook: Harmonizing RBM concepts and approaches for improved development results at country level, United Nations, New York, NY.
Vallejo, L., 2017, ‘Insights from national adaptation monitoring and evaluation systems’, Working paper 3: 2016, Climate Change Expert Group, OECD, Paris, France.
Weiss, C.H., 1999, ‘The interface between evaluation and public policy’, Evaluation 5(4), 468–486. https://doi.org/10.1177/135638909900500408
Wills, A., Tshangela, M., Bohler-Muller, N., Datta, A., Funke, N., Godfrey, L. et al., 2016, Evidence and policy in South Africa’s department of environmental affairs, ODI, London, viewed 18 August 2017, from www.odi.org/sites/odi.org.uk/files/resource-documents/11010.pdf