About the Author(s)


Panganai F. Makadzange
Private, Gaborone, Botswana

Citation


Makadzange, P.F., 2022, ‘A descriptive narrative on the current situation against the gold standards regarding institutionalisation of national monitoring and evaluation system for Botswana and Zimbabwe’, African Evaluation Journal 10(1), a578. https://doi.org/10.4102/aej.v10i1.578

Original Research

A descriptive narrative on the current situation against the gold standards regarding institutionalisation of national monitoring and evaluation system for Botswana and Zimbabwe

Panganai F. Makadzange

Received: 03 Aug. 2021; Accepted: 06 Dec. 2021; Published: 05 Aug. 2022

Copyright: © 2022. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Literature demonstrates that when it comes to institutionalising national monitoring and evaluation (M&E) systems, some African countries such as South Africa, Benin and Uganda are quite advanced. In the new millennium, more countries such as Zimbabwe and Botswana engaged in similar processes. However, there is still little documentation on such processes. This article thus attempts to bridge the documentation gap.

Objectives: To explore the current standing of Zimbabwe and Botswana against the gold standards of institutionalisation of national M&E systems.

Method: An exploratory study design was used to estimate the level of institutionalisation of the two national M&E systems. The International Atlas of Evaluation framework, originally developed in 2002 by three scholars, namely Furubo, Rist and Sandahl, was adopted as the guiding framework for the research. An online survey method was employed to gather the required quantitative data. Data analysis was carried out through the International Atlas of Evaluation assessment tool, and scores to determine the level of institutionalisation were generated. The output was displayed through graphs and tables.

Results: Overall, while Botswana received a score of 48% on the International Atlas of Evaluation scale, Zimbabwe got 53%. These scores indicate that the two countries have attained a rather average level of institutionalisation and are still lacking in terms of meeting the expected gold standards.

Conclusion: There is significant progress in both countries towards fully institutionalising their national M&E systems. However, more is yet to be realised before the expected gold standards are attained. It is recommended that both countries emulate and learn from those African countries with more advanced national M&E systems, such as South Africa.

Keywords: domains; framework; monitoring; evaluation; national; institutionalisation; reforms; Zimbabwe; Botswana.

Introduction

Literature suggests that the development of national monitoring and evaluation (M&E) systems in advanced economies started around the 1980s, followed by Latin America in the 1990s. In Africa, such developments came much later, in the new millennium (Goldman et al. 2018). Public sector reforms adopted by many African countries accelerated the development of their national M&E systems. The pressure to do so came from the ever-rising expectations of ordinary citizens to see government services delivered to higher standards of quality (Chirau, Waller & Mapisa 2018). Further pressure came from civil society demanding that governments publicly report on performance, and from international donors requiring governments to measure the outcomes and demonstrate the impact of donor-funded social and economic programmes (Cakici 2016).

Tanzania, South Africa, Uganda, Rwanda, Benin, Zimbabwe and Botswana are the top seven African countries that adopted such public sector reforms (Porter & Goldman 2013). For instance, in 2001 Uganda embarked on a reform to improve government performance, accountability and transparency, whose implementation resulted in rapid growth of the country’s national M&E system. Tanzania, on the other hand, built its national M&E system through the implementation of a reform called Mkukuta (Edmunds & Marchant 2008).

According to Sibanda and Makwata (2017), efforts in Zimbabwe to build a national M&E system came through the implementation of the Zimbabwe Agenda for Sustainable Socio-Economic Transformation (Zim Asset) reform. Efforts in Botswana to develop a national M&E system were made through the implementation of public sector reforms under the Vision 2016 and Vision 2036 transformational agendas (Lahey 2013). Rwanda adopted an array of public sector management reforms aimed at increasing accountability, transparency and citizen participation in government; these reforms also stimulated the institutionalisation of its national M&E system (Murindahabi 2016). South Africa developed its national M&E system around reforms such as the new public management reform, which resulted in the setting up of the Department of Planning, Monitoring and Evaluation in the Presidency and in the adoption of a national evaluation policy framework (Goldman et al. 2019).

The above-mentioned examples demonstrate that many developments have happened on the African continent to date, although they are not yet extensively documented. This article thus bridges the gap by documenting the institutionalisation of the national M&E systems of Zimbabwe and Botswana so far.

Research aim and questions

This research established the extent to which Zimbabwe and Botswana have institutionalised their national M&E systems. This aim was achieved by answering the following question: What is the current situation in Botswana and Zimbabwe compared to the gold standards for a fully institutionalised national M&E system?

Background and the conceptual framework

In this section, a conceptual framework used to estimate the level of institutionalisation of the national M&E system for Zimbabwe and Botswana is presented. A background of what is meant by a national M&E system is presented first to provide the context.

Defining a national Monitoring and Evaluation system

In the literature, the term national M&E system is used interchangeably with terms such as government-wide M&E system and M&E system for government. Generally, like any other national system, a national M&E system comprises the people and structures that manage and implement M&E processes, and it is linked to existing government structures and mechanisms. However, as Mackay (2007) noted, as with many things in international development, the precise definition of a national M&E system varies and there is no best model of what a national M&E system should look like. In most cases, a national M&E system refers to all the structures, indicators, tools and processes used to measure whether government policies and development programmes are being implemented according to plan and whether they have produced the desired results (Leeuw & Furubo 2008).

A national M&E system is defined in the South African policy framework for the government-wide M&E system as a set of organisational structures, management processes, standards, strategies, plans, indicators, information systems, reporting lines and accountability relationships which enable the government to discharge its M&E functions (Mackay 2007). Leeuw and Furubo (2008) identified the critical elements that denote a national M&E system as: the presence of stakeholders who share a common understanding of the objectives of the M&E system; the existence of an organisational structure that typically oversees planning, tendering, implementing, quality-checking and following up on evaluations; and the existence of a certain permanence or history in the M&E practice. Other elements featured in the UNAIDS (2008) framework that can define the structure of a national M&E system include a well-defined organisational structure with written mandates for planning, coordinating and managing the M&E system; a network of departments responsible for M&E at the national and sub-national levels; and human capacity, with adequately skilled human resources at all levels of the M&E system. Further requirements include policies and regulations that govern M&E practices, as well as the presence of a platform for data dissemination and utilisation (UNAIDS 2008).

Institutionalisation of a national Monitoring and Evaluation system: Its meaning

Lázaro (2015) understood the institutionalisation of a national M&E system as a process of turning M&E practice into a regular phenomenon in a governmental setting. Similarly, Gaarder and Briceño (2010) understood the institutionalisation of a national M&E system as a process of channelling isolated and spontaneous M&E efforts into formal and systematic approaches, on the presumption that the latter provide a better framework for fully realising the potential of the M&E practice in the government.

The ability to engage with diverse stakeholders and secure their trust, while maintaining the integrity of the M&E process, is the acid test for the institutionalisation process (Lázaro 2015). Earlier, Varone, Jacob and Winter (2005) noted that the process of institutionalising a national M&E system can be understood as a ‘systematization’ of the expected, if not compulsory, recourse to M&E, which can also be measured by its level of implementation within public administrations, political bodies and policy networks. This means that the most obvious approach to determining the institutionalisation of an M&E system would be to measure the existence of formal organisations and the constitution of an epistemic community (Varone et al. 2005). Taken together, these varied scholarly opinions demonstrate that there is no single, settled understanding of what institutionalising a national M&E system means.

Conceptual framework

Some attempts have been made to understand the institutionalisation of national M&E systems through the use of conceptual frameworks. One existing framework in the literature is the Furubo et al. (2002) framework, called the International Atlas of Evaluation framework. This framework was first developed in 2001, when the three scholars (Furubo, Rist & Sandahl) claimed that for a national M&E system to be considered fully institutionalised, there must be an arrangement of indicators that demonstrates as much. Based on this claim, they developed a domains-based framework which they used to present what they called an atlas of the present global situation regarding evaluation systems.

To build this framework, the scholars conducted a study covering 21 countries across five continents. Based on the results of the study, they presented the atlas in the form of a book comprising 21 country-specific chapters in which evaluation systems were described. In 2011, Furubo et al. conducted a replication study using the same framework, although with some minor modifications. This time the framework was modified to have indicators measuring nine domains, all of which must be fulfilled as a gold standard. A nation positioned high on a domain was given a score of 2, with 1 denoting a medium level and 0 a low or non-existent level of activity. This background demonstrates the merits of a framework that was built over a decade and endured validation through two major studies, one in 2001 and the other in 2011.

Therefore, leveraging the Furubo et al. (2002) framework, the framework depicted in Figure 1 was adapted to assess the national M&E systems of Zimbabwe and Botswana. The adapted framework consists of 12 domains that need to be fulfilled to meet the gold standards of a fully institutionalised national M&E system. Although presented in a circular manner, these domains are interlinked and can be developed at the same time. The modifications were introduced to make the framework more applicable to the context of developing African countries (Botswana and Zimbabwe), because the earlier framework was used predominantly for developed countries. The 12 domains of the adapted framework are detailed as follows:

  • Domain 1: Pervasiveness of M&E: M&E activities are clearly frequent in most of the public sector and are regarded as an integrated part of the whole public sector. Thus, when M&E becomes a regular phenomenon in the country, it takes place in many sectors such as health, education, and agriculture.
  • Domain 2: Diffusion and pluralism of M&E praxis: There is a supply of M&E practitioners from different academic disciplines who have mastered different evaluation methods, who are attracted to work in the field of M&E, and who conduct evaluations and provide advice regarding M&E.
  • Domain 3: National discourses/dialogue on M&E: There is a national discourse concerning evaluation in which more general discussions are adjusted to the specific national environment. The country has M&E and performance management issues on its political agenda. The country holds regular national seminars and conferences on M&E.
  • Domain 4: Existence of M&E professional organisations/bodies: M&E stands out as an independent profession in the country, with M&E practitioners having their own societies or networks, or frequently attending meetings of international societies, and with at least some discussion concerning evaluation standards or ethics.
  • Domain 5: Existence of national institutional arrangements to support and promote M&E: The country has a permanent oversight body with well-developed structures and processes for conducting and disseminating M&E work at national level. The country has a national development planning system that takes M&E as a regular and integrated feature of the planning process.
  • Domain 6: Existence of institutional arrangements in parliament to support and promote M&E: Members of parliament often adopt provisions, laws and constitutional amendments based on M&E information, and parliament assesses and debates the performance of national development programmes using M&E information.
  • Domain 7: Pluralism of M&E institutions and existence of M&E capacity building efforts: This criterion is obviously intended to capture the degree of pluralism. The country has many government institutions and private consultancy firms that provide M&E services and there are efforts by the government, the private sector and non-governmental organisations (NGOs) to strengthen the capacities of M&E personnel.
  • Domain 8: Level of utilisation of M&E information in the country: The country has a culture of using M&E information to guide national planning. Monitoring and Evaluation information from both government and private sector and NGOs is widely disseminated and readily available in the public domain.
  • Domain 9: Existence of policies and regulations that govern M&E practices: The country has a separate law or act or regulation that explicitly reflects or stipulates the requirement to monitor and evaluate public programmes and there is a national M&E policy that promotes the involvement and participation of stakeholders in M&E.
  • Domain 10: Existence of powerful stakeholders in critical institutions supporting M&E efforts: There are clear stakeholders championing the development of a national M&E system in the country. And there are efforts to include oversight agencies and civil societies in the overall capacity building programmes for strengthening M&E practices.
  • Domain 11: Existence of a democratic system that promotes M&E: In the country, there is a regular demand from civil society and the general public for transparency and accountability of decision-makers and M&E practitioners perform their M&E work free from political influence.
  • Domain 12: Promotion of impact and outcome evaluations: In the country, the use of outcome indicators has been popularised by national laws, and impact evaluations have been added to existing M&E practices.
FIGURE 1: Adapted conceptual framework.

The 12 domains described above thus denote the gold standards for an institutionalised national M&E system which countries should strive to attain. Therefore, this study was designed to explore the current situation in the two countries against the outlined gold standards, as described next.

Research design and methodology

An exploratory study design was chosen for this study because there were limited earlier studies focusing on the institutionalisation of national M&E systems that could be referred to or relied upon to guide it. A survey method was used to collect the quantitative data. The process involved collecting responses from M&E practitioners through a SurveyMonkey-based questionnaire. A purposive sampling method, a non-randomised or non-probability sampling technique, was used to select the respondents for the survey. A non-probability sampling method was deemed most appropriate for this study because it allowed the selection of members of the target population, in this case M&E practitioners most likely to provide the most valuable data for addressing the research objectives. The rationale for using this strategy emanates from the fact that the intention of the study was not to make generalisations (i.e. statistical inferences) from the sample to the population of interest; hence there was no need to randomly select units from the population to create a representative sample.

A total sample of 346 M&E practitioners was purposively selected for Zimbabwe and 368 for Botswana. The questionnaire had a demographic section that also included background information questions, such as how the respondents describe themselves in terms of their professional identity. The questionnaire also had a Likert-scale section asking the respondents to express their opinion by indicating the extent to which they agree or disagree with each dimension statement under the 12 Atlas domains.

The data were downloaded from the SurveyMonkey system as an Excel data set and then uploaded into the International Atlas of Evaluation assessment tool. A code was then run in the tool to combine the respondents’ scores into an aggregate rating. The tool contained 12 domains with two to three dimensions under each domain. For each dimension, a range of possible scenarios was provided, allowing for objective and quantitative rating. The highest score, 4, was given for a fully institutionalised scenario considered ‘highly adequate’. The lowest score, 0, depicted an un-institutionalised scenario in which the situation was regarded as ‘not adequate’ at all in terms of meeting the institutionalisation gold standard. For each dimension, a sum score was calculated by adding up the scores from each respondent and then dividing the total by the count to generate an average score. This average score was then compared against the maximum possible score to yield a percentage rating score. Table 1 presents the percentage rating scores and the corresponding legend for the colour code and description.

TABLE 1: Legend for colour coding of the scores.

As presented in Table 1, for example, a country that scores more than 80% in a domain (shown in green) is interpreted as having met the gold standard for that domain. The two countries were therefore rated against the criteria in Table 1.
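To make the scoring arithmetic concrete, the following is a minimal sketch in Python of how a dimension’s percentage rating could be derived from respondent scores on the 0–4 scale described above. It is an illustration only and not the actual International Atlas of Evaluation assessment tool; the function name and example scores are hypothetical, and only the more-than-80% (green) threshold is taken from Table 1.

# Illustrative sketch only; not the actual International Atlas of Evaluation tool.
# Respondent scores for a dimension range from 0 ('not adequate') to 4 ('highly adequate').

MAX_SCORE = 4  # maximum possible score for any dimension

def percentage_rating(respondent_scores):
    """Average the respondents' 0-4 scores for one dimension and express
    the result as a percentage of the maximum possible score."""
    average = sum(respondent_scores) / len(respondent_scores)
    return 100 * average / MAX_SCORE

# Example: five hypothetical respondents rating one dimension
scores = [3, 2, 4, 2, 3]
pct = percentage_rating(scores)   # (2.8 / 4) * 100 = 70.0
print(f"{pct:.0f}%", "gold standard met" if pct > 80 else "gold standard not yet met")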

Ethical considerations

Because the survey involved collecting information from human subjects, I conducted it in accordance with three basic ethical principles, namely respect for human beings, beneficence, and justice. To comply with ethical requirements, submissions were made to the relevant research ethics committees for review and the granting of approval or exemption. Ethical clearance to conduct this study was obtained from the Stellenbosch University Ethics Review Committee (reference number: CREST-2017-1754). The research protocol also obtained ethical clearance from the review boards in Botswana and Zimbabwe. The identities of those who participated in completing the SurveyMonkey survey were properly protected. All participants were informed about their rights of informed consent and refusal. Before the respondents completed the survey, due care was taken to ensure that informed consent was obtained by sending an email letter inviting each respondent to take part in the survey, including explanations of the purpose and objectives of the survey, the procedure to be used, the benefits and risks accruing, the rights of respondents, and reassurance on confidentiality. In the database, the respondents were identified through unique identification numbers instead of names. All data records were securely kept for safety and confidentiality during the whole study period. The language and words used in this study were neutral, without bias towards any person regardless of gender, sexual orientation, racial or ethnic group, disability, or age. The study is thus regarded as a low-risk study and posed no explicit risks to the study subjects, but its findings informed processes for further developing national M&E systems in the region.

Results

The focus of the analysis was to measure the level of institutionalisation of the national M&E system for the two countries as defined by the scores given to each domain in comparison to the gold standard. As presented in Figure 2, Zimbabwe has institutionalised its national M&E system up to 53% while Botswana is at 48%.

FIGURE 2: Revised Atlas overall country score for Zimbabwe and Botswana.

The two overall scores presented above suggest that both countries are around the halfway mark in their journey to fully institutionalise their national M&E systems. To provide an in-depth understanding of these overall scores, Figure 3 demonstrates the level at which each country stands for each of the domains assessed.

FIGURE 3: Estimates of the level at which Botswana and Zimbabwe stand for each domain. (a) Summary of Zimbabwe’s revised Atlas scores, and (b) summary of Botswana’s revised Atlas scores.

As presented in Figure 3, the centre score in each diagram demonstrates that both countries have completed around 50% of the effort required to fully institutionalise their national M&E systems. The domain-specific scores circling around it highlight existing areas of weakness and strength. For example, Zimbabwe has one domain that is still in the red zone, while Botswana has two. In eight domain areas both countries are doing relatively well. Only three areas for Zimbabwe and two areas for Botswana show signs of full maturity. Thus, Zimbabwe scored 60% or above in three of its domains, 40% – 60% in the majority (eight) of its domains, and below 40% in one domain. Comparatively, Botswana demonstrated higher levels in two domains, where it scored above 60%; in eight of the domains the scores ranged from 40% to 60%; and in two domains it scored less than 40%.
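As a cross-check on this summary, the domain-level percentage scores reported in the Discussion section can be tallied into the same bands. The short Python sketch below does so; treating the overall country score as the simple mean of the 12 domain scores is an assumption, although it reproduces the reported 53% for Zimbabwe and 48% for Botswana, and the 40% and 60% band boundaries follow the narrative above.

# Domain-level percentage scores as reported in the Discussion section.
domain_scores = {
    "Zimbabwe": [73, 60, 50, 50, 48, 38, 62, 45, 47, 58, 58, 48],
    "Botswana": [63, 34, 48, 30, 50, 44, 49, 43, 42, 59, 62, 49],
}

for country, scores in domain_scores.items():
    overall = sum(scores) / len(scores)              # assumed simple mean of the 12 domains
    high = sum(1 for s in scores if s >= 60)         # at or above 60%
    mid = sum(1 for s in scores if 40 <= s < 60)     # in the 40% - 60% range
    low = sum(1 for s in scores if s < 40)           # below 40% ('red zone')
    print(f"{country}: overall {overall:.0f}% | {high} domains at 60% or above, "
          f"{mid} in the 40%-60% range, {low} below 40%")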

Discussion

This section presents a qualitative comparison of the gold standards with the current situation as depicted by the quantitative results in Figure 3. This comparison provides a qualitative explanation of why each country was given its rating and ultimately articulates the factors necessary for institutionalising national M&E systems. These factors are vital for African M&E Voluntary Organisations for Professional Evaluation (VOPEs) which are seeking to help build and further strengthen their national M&E systems.

As the literature suggests, the process of institutionalising a national M&E system is not a one-off exercise but a journey that progresses over a span of time. Until all the domains outlined in the Atlas framework are fully developed, a national M&E system may not be deemed institutionalised. Based on this notion, the fact that Zimbabwe was given a rating of 53% and Botswana a rating of 48% strongly suggests that both countries have not yet fully institutionalised their M&E systems. However, such scores also suggest that both countries are progressing steadily towards that goal.

Current situation in comparison to the gold standards

The observations on the current situation are discussed in the following sub-sections.

Pervasiveness of Monitoring and Evaluation practice

Zimbabwe was given a rating of 73% and Botswana 63% on this domain. The reason for such high scores is the observation that M&E practice pervades various sectors in both countries. These sectors include economic development, social welfare, agriculture, public policy, public health, education, and the environment. This is a gold standard when it comes to the institutionalisation of national M&E systems. However, the practice is observed to be more intense in education, health, and agriculture, because these areas have been receiving donor support for quite some time. This finding concurs with that of the study conducted by Jacob, Speer and Furubo (2015), in which evaluation practices were also found to be unevenly distributed. They found evaluation activities in the fields of education, health, social policy and development aid to be the most intensive (Jacob et al. 2015).

Diffusion and pluralism of Monitoring and Evaluation praxis

Zimbabwe was given a rating score of 60% while Botswana got a very low rating score of 34%. An explanation for this disparity is that in Zimbabwe there is a higher supply of M&E practitioners from different disciplinary backgrounds, specialising in different methods, who are attracted to work in the field of M&E. For Botswana, however, the situation on the ground is that there is still a limited number of institutions conducting M&E, with rather monolithic perspectives. The gold standard is that a national M&E system can fully institutionalise once the country has built a large pool of trained experts in the field of M&E with diverse backgrounds.

National dialogue in Monitoring and Evaluation

Under this domain, Zimbabwe was given a rating score of 50% and Botswana 48%. The reason is that in both countries national dialogue on M&E is indeed happening, although not at gold standard levels. The gold standard is that countries should hold regular dialogue on M&E at national level. Countries such as South Africa do make such attempts, mainly through the South African Monitoring and Evaluation Association (SAMEA) biannual conferences. A similar trend was noted in other countries with advanced national M&E systems. For example, Jacob et al. (2015) noted a widespread proliferation of discourses in countries such as Canada, Denmark, the Netherlands, the United States and Japan. It is recommended that Zimbabwe and Botswana promote more dialogue through their M&E associations.

Existence of Monitoring and Evaluation professional organisations

Zimbabwe was given a rating score of 50%. However, Botswana got a much lower score of 30%. The main reason is that Zimbabwe does have an active evaluation society while Botswana does not. The gold standard is that countries should have vibrant M&E associations fully supported by the government. Examples include South Africa, whose SAMEA has a close link with the government. Norway has an evaluation network operated by the Norwegian Government Agency for Financial Management, Switzerland has an Evaluation Network of the Federal Administration which exists alongside the Swiss Evaluation Society, and the United States has a National Legislative Program Evaluation Society (NLPES) which was created within the National Conference of State Legislatures (Jacob et al. 2015). It is recommended that Botswana reactivate its dormant association.

National institutional arrangements that support Monitoring and Evaluation

Zimbabwe got a score of 48% and Botswana 50%. The reason is that both countries did a good job in setting up an oversight M&E coordinating unit within the President’s Office. In 1994, Zimbabwe established a National Economic Planning Commission (NEPC) under the Office of the President and Cabinet (OPC) with specific responsibilities for M&E. Botswana, in 2007, formed the Government Implementation Coordination Office (GICO) within the OPC to coordinate the M&E processes for all major government projects. This has been the gold standard practice for most countries with fully institutionalised national M&E systems. For instance, as noted by Jacob et al. (2015), Korea similarly placed its M&E coordinating office in the Prime Minister’s office, and South Africa has a comparable arrangement. In Sweden, several autonomous agencies are wholly or partially dedicated to evaluation. In Canada and Israel, all government departments have had an evaluation unit for decades. In Spain, there is the National Evaluation Agency. Within Japan’s Ministry of Internal Affairs and Communications, the central unit has more than 100 employees dedicated to evaluation. It is therefore recommended that both countries provide further leadership and technical expertise and secure legal and constitutional powers to drive M&E in government ministries.

Institutional arrangements in parliament to support Monitoring and Evaluation

Zimbabwe scored 38% and Botswana 44% on this domain. The reason for the low scores is that adequate institutional arrangements for supporting M&E are not present within the parliaments. In both countries, members of parliament often do not adopt provisions, laws and constitutional amendments based on M&E findings. The gold standard is that parliamentarians should have access to M&E data and their debates should be data driven. In the Netherlands, the use of evaluation results in parliamentary discussions has risen significantly, especially concerning development aid. In Norway, it is common practice to present evaluation results in White Papers, which are then discussed in parliament. In the United Kingdom, evaluation findings are often scrutinised by the various parliamentary committees (Jacob et al. 2015). It is therefore recommended that parliamentarians in Zimbabwe and Botswana join and become more active in platforms such as the African parliamentarians’ network on M&E development.

Pluralism of Monitoring and Evaluation institutions and Monitoring and Evaluation capacity building efforts

Zimbabwe scored 62% and Botswana 49% on this domain. The explanation is that in Zimbabwe there are more government institutions and private consultancy firms that provide M&E services than in Botswana. Zimbabwe also has more universities and training institutions offering M&E-related degree and postgraduate programmes. Botswana has a limited pool of individual M&E practitioners from different disciplines and a limited number of institutions offering training, except for the Institute of Development Management. The gold standard is that countries should have more universities offering M&E-related undergraduate and postgraduate degree programmes as well as creating enabling environments for M&E. For example, South Africa has two initiatives, both located in universities as host institutions of evaluation: the CLEAR initiative at the University of the Witwatersrand and CREST at Stellenbosch University. The CREST centre focuses more specifically on high-level specialist courses in evaluation leading to postgraduate diplomas and to degrees up to and including a PhD (Basson 2012).

Utilisation of Monitoring and Evaluation information

Zimbabwe scored 45% and Botswana 43%, implying that the utilisation of M&E information in both countries is not extensive. The gold standard is that governments should promote the utilisation of M&E information for planning and decision-making. In that way, a culture of information use grows, which is a core element of the institutionalisation of a national M&E system.

Policies and regulations to govern Monitoring and Evaluation practice

Zimbabwe scored 47% and Botswana 42%. The reason is that in Zimbabwe an evaluation policy does exist, although there is no separate law or act that explicitly reflects the requirement to monitor and evaluate public programmes and projects on a regular basis. Botswana does not yet have a stand-alone policy that promotes the involvement and participation of stakeholders in M&E. The gold standard is that countries with fully institutionalised national M&E systems should have policies and regulations that govern M&E practices. They should also have M&E norms, standards, regulations and plans that guide M&E practitioners to conduct M&E systematically and independently, while maintaining acceptable standards.

Multi-Stakeholders support on Monitoring and Evaluation efforts

The two countries scored at almost the same level on this domain (Zimbabwe 58% and Botswana 59%). The reason for scoring slightly above average is that both countries indeed received support from the NGO community, including American agencies, UN agencies and other independent donors such as the Gates Foundation. However, although support from the NGO sector is beneficial, in the long run placing heavy reliance on donor support to build national M&E systems can lead to sustainability problems once these donors exit. The gold standard in this regard is the availability of government-based funding for M&E, leveraging the financial and technical support of both NGOs and the private sector. Therefore, to reach the gold standard, it is recommended that other stakeholders such as civil society, academia and related government institutions such as the auditor general, the public service and the national treasury be actively involved.

Democratic system that promotes Monitoring and Evaluation efforts

Zimbabwe scored 58% and Botswana 62%. In both countries, there is a regular demand from civil society and the general public for transparency and accountability from decision-makers on value for money. However, it is not clear whether evaluators in either country perform their M&E function free from political influence.

Impact and outcome evaluation practice

Zimbabwe scored 48% and Botswana 49% on this domain. The reason is that in both countries impact evaluations have not been fully added to existing evaluation practices. The gold standard, however, is reflected in the international trend towards more impact and outcome evaluations, with countries such as Switzerland, Canada, Finland and the United States taking a lead (Jacob et al. 2015).

Factors necessary for institutionalising national Monitoring and Evaluation systems

The following five factors are derived from this discussion as the key factors necessary for the institutionalisation of national M&E systems.

Supply of Monitoring and Evaluation practitioners

The growth and development of national M&E systems is generally marked by an increase in the demand for M&E practitioners. Thus, as governments demand more evaluative evidence, the number and quality of human resources required to meet this demand will grow (Twende Mbele 2019). Therefore, having a good supply of M&E practitioners in a country is a good start for attaining full institutionalisation of a national M&E system. In Zimbabwe, the supply of M&E practitioners from different academic disciplines was found to be good. This finding resonates with the findings of the Twende Mbele diagnostic study on the supply and demand of evaluators in Uganda, Benin and South Africa, which showed that in all three countries the supply of evaluation consultants has generally been adequate to meet demand (Twende Mbele 2018). It is therefore recommended that Zimbabwe continues to sustain this strength. However, for Botswana, where capacity seems to be lacking, there is an imperative need to invest in M&E training to promote the growth of the local supply market. In turn, this investment will promote the growth of the national M&E system.

Presence of a democratic system in the country

The availability of a democratic system in a country is vital for the full institutionalisation of a national M&E system, because a democratic environment promotes the much-needed interaction between the government and civil society. In such an environment, civil society represents a wealth of knowledge and acts as a source of evidence generation. Its full participation thrives in a democratic environment, and this healthy interaction with government promotes the full growth of the national M&E system (Goldman 2019). Botswana is well known for having such a strong democratic system that promotes dialogue and advocacy within the government. However, this study found Zimbabwe to be lacking in this regard. It is therefore critical that Zimbabwe creates such an environment if it is to progress further in its institutionalisation journey.

Dedicated national Monitoring and Evaluation department in the higher office

Literature shows that the presence of high-level political and technical champions for M&E in a country promotes the full institutionalisation of a national M&E system. Most often, these champions function well where a dedicated national department with the capacity to drive M&E exists (Twende Mbele 2018). This has been observed in some wealthy countries with well-developed national M&E systems such as Korea (Jacob et al. 2015). A similar arrangement can be seen in African countries with established national M&E systems such as Benin, Uganda and South Africa (Goldman et al. 2018). As observed in this study, both Zimbabwe and Botswana do have M&E coordinating units within the President’s Office. It is therefore recommended that these structures be maintained, as they are driving the two countries in the right direction towards realising the full institutionalisation of their national M&E systems.

Active participation of parliamentarians in Monitoring and Evaluation activities

One critical area noted to be lagging for both countries is the active participation of parliamentarians in M&E. Active participation of the parliament is needed in the institutionalisation process of a national M&E system in the sense that parliamentarians can serve as strategic allies in advancing the use of evidence to deepen democracy through their legislative, oversight and representative roles. These roles have the potential to significantly increase the demand for and use of M&E evidence among government and civil society, and to champion and adopt relevant policies to entrench M&E practice (Twende Mbele 2021). Therefore, the two countries need to benchmark with other African countries such as Uganda and South Africa where parliamentarians are more active in M&E matters (Goldman et al. 2018).

A national Monitoring and Evaluation policy framework

Another dimension that is critical in the institutionalisation process of a national M&E system is a national M&E policy framework. Such a framework provides an important structure to systematise and guide M&E at country level. It also places an obligation on ministries and sectors to increase the demand for and use of M&E information. In addition, it defines what M&E is, determines what needs to be monitored and evaluated, what methodologies are to be used, and how data and evaluation findings should be used and communicated. In this way, a national M&E policy framework creates a common approach and guiding principles across the public sector and integrates M&E into policy and budgeting cycles, which is a gateway to the institutionalisation of a national M&E system (Chirau et al. 2018). This study showed that Zimbabwe is moving in the right direction, as its national M&E policy is in place. However, Botswana is still lagging. It is therefore recommended that Botswana, like other African countries with national M&E systems, expedite the process of developing its national M&E policy.

Conclusion

In conclusion, the study showed that both countries are around the halfway mark towards institutionalising their national M&E systems. This implies that both countries still need to work extensively to improve in this regard, particularly on those domains and dimensions that scored poorly. Lastly, the lessons identified can be used to support other countries which are seeking to institutionalise national M&E systems. Future research can be carried out, as more work is still needed to contribute to the current understanding of the African M&E landscape.

Acknowledgements

The author would like to thank the following people for their support and guidance: Prof. Johann Mouton for his inspiration and tremendous mentorship; Dr Milandré van Lill and Rein Treptow for data analysis support; the staff at CREST for their unwavering support; the evaluation experts for completing the expert survey tool and providing the required information and insights; and friends and colleagues for their moral support.

Competing interests

The author declares that he has no financial or personal relationships that may have inappropriately influenced him in writing this article.

Author’s contributions

P.F.M. is the sole author of this article.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

There are no restrictions on the availability of the raw data used in the analysis. Data are available from the author upon reasonable request.

Disclaimer

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any affiliated agency of the author, or of the publisher.

References

Basson, R., 2012, South Africa: South African Monitoring and Evaluation Association (SAMEA) Voluntarism, Consolidation, Collaboration and Growth – The case of SAMEA, University of the Witwatersrand, Johannesburg, Gauteng, South Africa, viewed 22 July 2021, from https://www.ioce.net/download/national/SouthAfrica_SAMEA_CaseStudy.pdf.

Cakici, H., 2016, ‘Getting to the roots of evaluation capacity building in the global south: Multiple streams model to frame the agenda status of evaluation in Turkey’, Canadian Journal of Program Evaluation 30(3), 277–295. https://doi.org/10.3138/cjpe.30.3.03

Chirau, T., Waller, C. & Mapisa, C.B., 2018, The national evaluation policy landscape in Africa: A comparison, viewed 03 August 2021, from https://twendembele.org/reports/the-national-evaluation-policy-landscape-in-africa-a-comparison/.

Edmunds, R. & Marchant, T., 2008, Official statistics and monitoring and evaluation systems in developing countries: Friends or foes, PARIS 21 Secretariat, viewed 21 July 2021, from https://paris21.org/sites/default/files/3638.pdf.

Furubo, J.E., Rist, R.C. & Sandahl, R. (eds.), 2002, International atlas of evaluation, Transaction Publishers, New Brunswick, NJ.

Gaarder, M. & Briceño, B., 2010, Institutionalisation of government evaluation: Balancing trade-offs, Working Paper 3ie, New Delhi.

Goldman, I., Byamugisha, A., Gounou, A., Smith, L.R., Ntakumba, S., Lubanga, T. et al., 2018, ‘The emergence of government evaluation systems in Africa: The case of Benin, Uganda and South Africa’, African Evaluation Journal 6(1), a253. https://doi.org/10.4102/aej.v6i1.253

Goldman, I., Deliwe, C.N., Taylor, S., Ishmail, Z., Smith, L. & Masangu, T., 2019, ‘Evaluating the national evaluation system in South Africa: What has been achieved in the first 5 years?’, African Evaluation Journal 7(1), a400. https://doi.org/10.4102/aej.v7i1.400

Goldman, I., 2019, Strengthening participation of civil society organisations in national evaluation systems: Insights from Uganda, Rwanda, Kenya and Ghana, CLEAR-AA, University of the Witwatersrand, Johannesburg, viewed 22 July 2021, from www.twendembele.org.

Jacob, S., Speer, S. & Furubo, J.E., 2015, ‘The institutionalization of evaluation matters: Updating the international atlas of evaluation 10 years later’, Evaluation 21(1), 6–31. https://doi.org/10.1177/1356389014564248

Lahey, R., 2013, Developing a national Monitoring and Evaluation (M&E) capability for Botswana. Phase one report: M&E Readiness Assessment, The World Bank, Washington, DC.

Lázaro, B., 2015, Comparative study on the institutionalisation of evaluation in Europe and Latin America, EUROsociAL Programme, Madrid, viewed 22 July 2021, from http://sia.eurosocial-ii.eu/files/docs/1456851768-E_15_ENfin.pdf.

Leeuw, F.L. & Furubo, J.E., 2008, ‘Evaluation systems: What are they and why study them?’, Evaluation 14(2), 157–169. https://doi.org/10.1177/1356389007087537

Mackay, K., 2007, How to build M&E systems to support better government, The World Bank, Washington, DC, viewed 22 July 2021, from http://www.worldbank.org/ieg/ecd.

Makadzange, P.F., 2020, ‘A study of the institutionalisation of a national monitoring and evaluation system in Zimbabwe and Botswana’, Dissertation, Stellenbosch University, viewed n.d., from https://scholar.sun.ac.za/bitstream/handle/10019.1/109097/makadzange_study_2020.pdf.

Murindahabi, C., 2016, ‘The trajectory of public administration in Rwanda’, Public Policy and Administration Research 6(4), 2016.

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), a25. https://doi.org/10.4102/aej.v1i1.25

Sibanda, V. & Makwata, R., 2017, Zimbabwe post independence economic policies: A critical review, Lambert Academic Publishing, Saarbrücken.

Twende Mbele, 2018, Using M&E to improve government performance and accountability, Twende Secretariat at CLEAR AA, University of the Witwatersrand, Johannesburg.

Twende Mbele, 2019, Diagnostic on the supply and demand of evaluators in Uganda, Benin and South Africa Synthesis Report, CLEAR-AA, University of the Witwatersrand, Johannesburg, viewed 25 July 2021, from www.twendembele.org.

Twende Mbele, 2021, Strengthening capacities for oversight and use of evidence, viewed 25 July 2021, from https://twendembele.org/activities/parliaments/.

UNAIDS, 2008, Organizing framework for a functional national HIV monitoring and evaluation system, viewed 21 July 2021, from https://www.unaids.org/sites/default/files/sub_landing/files/20080430_JC1769_Organizing_Framework_Functional_v2_en.pdf.

Varone, F., Jacob, S. & Winter, L., 2005, ‘Polity, politics and policy evaluation in Belgium’, Evaluation 11(3), 253–273. https://doi.org/10.1177/1356389005058475


 
