About the Author(s)


Caitlin Blaser Mapitsa
Centre for Learning on Evaluation and Results, University of the Witwatersrand, South Africa

Linda Khumalo
Centre for Learning on Evaluation and Results, University of the Witwatersrand, South Africa

Citation


Blaser Mapitsa, C. & Khumalo, L., 2018, ‘Diagnosing monitoring and evaluation capacity in Africa’, African Evaluation Journal 6(1), a255. https://doi.org/10.4102/aej.v6i1.255

Original Research

Diagnosing monitoring and evaluation capacity in Africa

Caitlin Blaser Mapitsa, Linda Khumalo

Received: 21 July 2017; Accepted: 05 Dec. 2017; Published: 29 Mar. 2018

Copyright: © 2018. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Since 2015, the Centre for Learning on Evaluation and Results-Anglophone Africa (CLEAR-AA) has implemented more than seven diagnostic tools to better understand monitoring and evaluation (M&E) systems in the region. Through the process of adapting global tools to make them more appropriate to an African context, CLEAR-AA has learned several lessons about contextually relevant definitions and boundaries of M&E systems.

Objectives: This article aims to share lessons learned from adapting and implementing a range of global tools in an African context, and puts forward certain key criteria for a ‘Made in Africa’ tool to better understand M&E systems in the region.

Method: This article reviews CLEAR-AA’s diagnostic tools, as well as global good practice diagnostic tools, and compares the strengths and weaknesses of each approach. It further looks at the implementation of specific tools in context and proposes components on the basis of these lessons.

Results: This review has found that most M&E tools have a heavy focus on the technical and contextual aspects of M&E but very few do a thorough job of accommodating the institutional factors. Furthermore, the relationship between the technical elements, the institutional elements and the organisational culture elements has not been made apparent.

Conclusion: A contextually relevant diagnostic tool for M&E systems will balance technical considerations of capacity, institutional factors and issues of organisational culture. Drawing on approaches from organisational change may help to strengthen our tool development efforts.

Introduction

A familiar adage of evaluators is that it matters what you measure. While evaluators may have applied this in various ways in their work, it is often forgotten when measuring monitoring and evaluation (M&E) systems themselves. Monitoring and evaluation diagnostic tools are important in providing an analytical framework to assess an institution’s capacity and willingness to monitor and evaluate its goals. Embedded in these tools are signals about what an evaluator believes constitutes the purpose of M&E, and the functions through which this purpose can be achieved. This research contributes to a discussion of how M&E is understood by developing a diagnostic tool that can be used to determine whether the prerequisites are in place to build a results-based M&E system.

A frequent critique by M&E practitioners in developing countries is that the literature on M&E systems has generally emerged from contexts of strong institutional capacity. This means that many of the tools and methods are inappropriate for local evaluators. Initially, a shift away from a strictly technical measurement of M&E systems, towards a more contextualised definition that looks at how the system functions in practice, was welcome. However, the discourse can also shift too far towards governance and context in environments where there are very real limitations on human resources, budget and technical capacity. Perhaps most importantly, the mechanisms for change between technical, institutional and cultural components are often ignored. As a result, it is particularly difficult to study the application of these tools in context rigorously, because they obscure their assumptions about mechanisms of change. By bringing in principles of organisational change, the discussion in this article will help to fill this gap.

While this article does not claim to bring a perfect tool for understanding M&E systems in Africa, it should be possible to move towards a better fit than we have at the moment. None of the frequently cited methods for defining M&E systems combines robust measurement of the technical monitoring components with strong consideration of how evaluation feeds into governance, or of what evidence use looks like in practice, together with enough consideration of context to identify the internal and external constraining and enabling factors.

Background

The Centre for Learning on Evaluation and Results-Anglophone Africa (CLEAR-AA) has carried out, engaged with or designed diagnostic tools to understand and measure the M&E systems of complex institutions at the national, provincial and municipal levels of government in South Africa, as well as other public and private sector institutions in the region. Each diagnostic tool was developed to respond to the specific needs of the programme of work, which led to fit-for-purpose results. However, with the different tools that have been developed and applied in various contexts, we now have a range of lessons about the principles of diagnostic design, and a much better understanding of the implications that different tools have for the diagnostic findings.

The purpose of this article is to develop a diagnostic tool applicable to an African governance and management context, drawing on the lessons and best practices of the diagnostic tools that the centre has engaged with empirically. An overview of global diagnostic tools is also provided, to show where the emphases lie in global norms and standards.

The result of this article will be the proposal of a model for defining M&E systems that is relevant to an African institutional context, with some indicative suggestion of a range of dimensions that could be included in each section of the model. As the debate continues around M&E tools and approaches that are most relevant to the continent, this tool will then be available for trialling and evolution as various dimensions are tested, and the implications of the local institutional context are better understood.

Kusek and Rist (2004) have pointed out that measures of M&E system effectiveness have historically tended to focus disproportionately on technical issues of individual skills and data collection and management. This has resulted in relatively little emphasis being placed on political, organisational and cultural factors, as well as on purpose and context, in the design of M&E systems. Since Kusek and Rist raised this issue over a decade ago, the pendulum has swung in the opposite direction, with much of the M&E field focusing on context and complexity (Patton 2011). While this has contributed a great deal in contexts where institutional strength and resourcing can be taken for granted, it has left behind many evaluators in Africa who know the danger of making assumptions about institutional strength or a strongly rationalised bureaucracy. Furthermore, tools with either an overly technical or an overly contextual measurement of M&E embed a range of assumptions about the way technical, institutional and cultural dimensions of organisations are connected with one another, and how the transitions take place between them. These assumptions do not always hold in an African context.

This article addresses the aforementioned issues by reviewing tools from a range of viewpoints in the M&E field, and looking at the strengths and weaknesses each brings when applied in an African context. We then recommend a tool that balances the technical, institutional and contextual factors that affect the design of M&E systems, as discussed below.

Methodology and study design

Evaluation is a multidisciplinary field, and evaluators have been engaging with the challenges of building cohesive conceptual frameworks for evaluations that draw on mixed-methods approaches, as well as on theoretical frameworks and practices embedded in different disciplines. Greene, Caracelli and Graham (1989) tackled this by laying out some of the various ways in which approaches are integrated in evaluation practice. This article draws on these lessons to look at M&E systems themselves, which, similar to mixed-methods evaluations, emerge from a range of organisational contexts and theoretical approaches that inform how the units of analysis are defined, and how various levels of M&E systems come together.

A systematic review was conducted for this research which involved collecting and summarising empirical evidence from diagnostic tools that CLEAR-AA has developed, tested and implemented. In addition to this, a desktop analysis of global diagnostic tools and approaches was also conducted. The various dimensions and components of each tool were collated and compared to provide a synthesised view of existing tools. Certain core components were then piloted through four different CLEAR-AA research projects, and the results of this are discussed and applied to the tool proposed.

Literature review

To analyse the various diagnostic tools, this article draws on a range of disciplines to help define both M&E systems and their various components. Gorges (2001) has pointed to the difficulty of finding a theoretical framework that helps explain institutional change, owing to the lack of cohesion among the varied theoretical perspectives. Even so, this article begins to merge a range of literatures that consider how systems are defined, how they act in practice and how they change. Emergent literature from systems theory has demonstrated the interlinkages between different elements of M&E systems, and why it is important to acknowledge them as a whole. This is particularly important in Africa, where processes that are considered M&E systems, upon further inspection, do not always contain evaluative components. We then use institutionalist literature to understand how M&E systems are embedded within organisations, and draw on theories of organisational change to help explain the need for components of M&E systems to connect to each other. Finally, the article engages with the literature from the field of M&E itself, discussing the role of context and purpose in M&E systems.

Critical systems theory points to the need to carefully define what is included, and what is excluded when defining M&E systems. In addition, it helps us understand how various interconnected systems may be embedded within organisations, and while they will certainly interact, defining them carefully will help us better understand the multiplicity of non-linear relationships, and feedback loops, which are critical to understanding how we measure the capacity of a system (Bammer 2003).

In keeping with CLEAR-AA’s experience of studying M&E systems, new institutionalist theory as put forward by Powell and DiMaggio (2012) takes a sociological approach to explore how people within organisations operate according to the various policies and processes in place within them to create institutional behaviour. From this sociological standpoint, they argue that institutions reflect not only a complex range of hard and soft rules, including formalised policies, but also organisational culture and norms. They look at how these institutions can be influenced by contextual factors such as power, incentive systems and constraints, which need to be considered when designing and implementing M&E systems (Bamberger et al. 2011; Compton 2002). This is important, first because it acknowledges that what is formalised as an M&E system does not always capture the full range of practices through which evidence gets used within organisations, and second because it helps us understand the drivers of institutional practice. Many M&E tools and approaches are insufficiently rigorous in considering how data, mechanisms and policies fit into a context of power and the contestation thereof. Particularly in Africa, where M&E has often been externally imposed by donor organisations for accountability, it plays an integral part in these power dynamics, and this contestation must be explored rather than ignored (Cloete 2009).

Theoretically, this article is grounded in the institutionalist and critical systems bodies of literature. These theories embed the understanding of M&E systems within organisations as part of the organisational structure that legitimises organisations and improves their performance. At the same time, M&E systems within organisations do not operate in linear or straightforward ways; they are more complex and ‘circular’, and systems thinking can therefore aid understanding of the external or contextual factors that impinge on functional M&E systems.

Finally, a central gap identified by our review of the ways M&E systems are defined is in the realm of organisational change. Dimensions of M&E systems are often seen as made up of discrete components; even where interconnections are acknowledged, the mechanisms through which they influence each other are poorly understood (Seasons 2008). This is another realm where evaluation can contribute, as professional associations are grappling with how change is created in the evaluation field (Greenwood, Suddaby & Hinings 2002). Feldman and Pentland (2003) point to the importance of norms and practice in helping define organisational processes. This research is still an early step in integrating principles of organisational change into the way M&E systems’ capacity is understood (Tarsilla 2014). This is particularly important, given a regional emphasis in research on mechanisms of coordination. As the Made in Africa research agenda expands, this will be one of the critical focus areas. Evaluation capacity developers in the region have faced the limitations of individual training and skills development, and of practice that does not engage with institutions and an enabling environment, which can include coordinated consensus on competencies and skills.

Foundational literature on M&E systems focuses strongly on the technical components of, and capacity for, measuring implementation. Some of the first framings of M&E systems came from results-based management approaches and focused strongly on monitoring (Kusek & Rist 2004). By the late 1990s, evaluation was emerging as a priority, but systems for evaluation were still nascent. Evaluation systems linked much more strongly than monitoring to some of the more political elements of governance, and this brought into focus the importance of considering both context and purpose, in addition to technical capacity, when measuring M&E systems (Kusek & Rist 2001). This happened in parallel with developments in the evaluation literature itself on how to address complexity in social systems.

Global tools and approaches

This section provides a brief overview of some of the key, commonly referred to, global diagnostic tools for M&E systems. An outline of each follows, ending with an overview analysis of the key elements used in each tool. The section concludes with a discussion of their relevance and appropriateness in an African context.

Global EvalAgenda 2016–2020

The Global EvalAgenda outlines the priorities for evaluation during the 2016–2020 timeline (Eval Agenda 2020). The Eval Agenda outlines four essential dimensions of evaluation systems. These are (1) an enabling environment for evaluation, (2) institutional capacities, (3) individual capacities for evaluation and (4) interlinkages among the three dimensions.

These overall dimensions are quite strong, and they meet CLEAR-AA’s initial need for an approach that balances technical and contextual approaches, and maintains a focus on institutions. Furthermore, an explicit consideration of the linkages among the dimensions is welcome.

To take one example of a dimension from this agenda to discuss, the components of strong institutional capacities are as follows:

  • Institutions generating and sharing relevant data to develop and support evaluations.
  • Institutions capable of appreciating and facilitating evaluations.
  • Institutions skilled at collaborating with others.
  • Institutions able to resource quality data generation and evaluation ensuring information accessibility.
  • Continuous evolution and development of institutions as the evaluation field advances.
  • Academic institutions having the capacity to carry out evaluation research and run professional courses on evaluation.

While these components are all strong and relevant, they have a few limitations in an African context. One is that they still align institutions strongly with the technical functions of data generation, evaluation management and so on, at the expense of data use and decision-making. While the language of institutions is the right one, it is not apparent that the concepts are serving the relevant purpose in an African context, which would be to help embed the systemic components of evaluation and contribute to changing norms and strengthening technical capacity.

Furthermore, while the focus on interlinkages is welcome, it still misses a component of organisational change. The Eval Agenda was designed to focus mainly on national-level issues, making it difficult to adapt some components, particularly that of interlinkages, to an organisational or systemic level. Soon, we will need a tool that allows us to engage with the mechanisms of change around cultures of evaluation use, because these are a critical driver of technical capacity and vice versa.

The World Bank evaluation capacity development

The World Bank developed a guide to assist governments and development agencies in developing their national and subnational evaluation systems, which can be adapted and tailored for other contexts accordingly (Development Bank of Southern Africa, African Development Bank & World Bank 2000). The guide identifies the dimensions that must be developed to achieve a robust evaluation system as the demand for and supply of evaluations, as well as information infrastructure.

Mackay (1998) notes that the main barriers to building evaluation systems in developing countries have included: lack of genuine demand and ownership in countries; lack of a modern culture of fact-based accountability; lack of evaluation, accounting or auditing skills; poor quality of financial and other performance information, and of accounting/auditing standards and systems; lack of evaluation feedback mechanisms on decision-making processes; and the lack of critical mass needed to develop sustainable evaluation systems. The guide was thus developed to address these barriers (Mackay 1998).

The guide then outlines a process for considering issues around evaluation capacity in the global South. While the process is quite exhaustive, the market-based framing of supply and demand fails to fully account for the drivers of supply and demand under different contextual conditions. For this, it is important to understand the environment mediating the relationship between supply and demand.

A framework for understanding monitoring, evaluation and learning systems

This framework was set out to provide guidance on the sort of M&E system appropriate for grantees of a range of UK-based donor organisations. Bond, Comic Relief, NIDOS and the Big Lottery Fund developed the framework based on the following assumptions: (1) effective and appropriate monitoring, evaluation and learning (MEL) systems require data to be collected, stored, analysed and used; (2) organisational culture affects whether NGOs use MEL systems beyond donor reporting; and (3) effective and appropriate MEL systems allow NGOs to assess, manage and demonstrate their effectiveness.

The framework has a significant technical focus, with emphasis on data: who collects data in organisations, data organisation systems, the extent to which the data collected flow into decision-making in organisations, and the resources available for maintaining a MEL system. Leadership buy-in and involvement in evaluation activities is also taken into account. However, this framework does not pay attention to the institutional and contextual factors, such as the internal and external policy environment, that could affect these technical elements and the functionality of MEL systems.

United Nations Evaluation Group norms and standards for evaluation

The United Nations Evaluation Group (UNEG) standards for evaluation, adopted in 2005 and last updated in 2016, are intended to guide the establishment of the institutional framework for, and the management and conduct of, evaluations. These present a fairly strong focus on the institutional and technical factors of M&E systems. Management support for evaluations is also seen as an essential ingredient in maintaining these systems.

Institutionally, the importance of organisations possessing an adequate institutional framework for the effective management of their evaluation function is seen as key. This includes management understanding of and support for evaluation functions, available resources, partnerships and cooperation on evaluation, and evaluation policies and guidelines that are periodically reviewed and updated. It further involves having structures and guidelines in place that ensure that information obtained from evaluations is used for decision-making.

Governance is also crucial: UNEG outlines how essential it is that evaluation heads take ownership of, and lead in, upholding and championing the standards and guidelines set out for evaluations.

The framework further includes a human rights–based approach and gender mainstreaming, contextual issues that ought to be taken into consideration from the diagnostic stages of evaluations onwards to maintain the developmental impact as well as the gender equity of projects.

Discussion of the relevance and appropriateness of global tools

As highlighted in Table 1, global diagnostic tools have acknowledged the need to include components of both technical capacity and governance, but they are still struggling to find a way of integrating these two seemingly different functions. Institutional and contextual dimensions make an effort at filling this gap, but they remain relatively weak. It is still difficult to find a tool that looks at the relationship between context, institutions and culture in a way that indicates some of the mechanisms of organisational change. The EvalAgenda and UNEG are more comprehensive in this regard: they look at evaluation culture and the understanding of the value of evaluations, which relate to the internal and external context of evaluations (Global Evaluation Agenda 2016–2020), as well as at a human rights–based approach and gender mainstreaming, which relate to both internal and external contexts; these aspects are essential for effective M&E systems.

TABLE 1: Summary of the dimensions and components of four widely used global tools to understand monitoring and evaluation systems.

The next section looks at the previous diagnostic tools that CLEAR-AA has used. In relation to the best practices from these as well as the key global tools studied and outlined above, this article will end by presenting a recommended diagnostic tool which can be tested and further developed in the African context.

Overview of Centre for Learning on Evaluation and Results - Anglophone Africa’s diagnostic tools

Since its inception, CLEAR-AA has used a number of different diagnostic tools to help better understand M&E systems in the region. The following diagnostic tools or approaches that CLEAR-AA has applied empirically will be the focus of this study:

  • The 12 components of an M&E system
  • The complexity framework
  • Twende Mbele gender diagnostic tool
  • Six spheres diagnostic tool

Each tool was used for an individual project, ranging from a diagnostic of the City of Johannesburg’s M&E system, to a study on the gender responsiveness of the national evaluation systems of Benin, Uganda and South Africa. What has emerged is a body of knowledge around how we are defining M&E systems, and what each definition means for how we understand its effectiveness in context.

City of Johannesburg’s monitoring and evaluation support and capacity building

The Centre for Learning on Evaluation and Results-AA has been involved in conducting a diagnostic exercise in the City of Johannesburg which aimed to assess the current state of M&E practices in the city. The purpose was to understand the various factors that shape M&E practices and to provide recommendations towards addressing some of the challenges in the city.

The 12 components of a functional monitoring and evaluation system

The components provide an outline of the elements necessary for an M&E system to function efficiently and effectively. These components, outlined and briefly described below, partly formed the basis of the online survey and workshops that assessed the City of Johannesburg’s M&E system, as well as identified the needs for further developing the system.

The components in three broad categories are as follows:

People, partnerships and planning: These encompass structure and organisational alignment, human capacity, partnerships, costed M&E work plans, as well as advocacy, communication and culture. These elements directly relate to whether or not an environment is conducive or enabling for an M&E system.

Data collection, capturing and verification: These are the data mechanisms in place to ensure a functional M&E system (i.e. the accessibility of data that can be used as evidence to inform decisions). This involves conducting routine monitoring, periodic surveys, databases, supportive supervision and data auditing, as well as evaluation evidence.

Data analysis, dissemination and information use: These involve the use of information that is produced through M&E systems in order to improve results.

The components outlined in the tool above include key elements of an organisation’s M&E technical capacity, and provided a good assessment of the city’s technical M&E system.

The complexity framework

The Centre for Learning on Evaluation and Results-AA, working with the City of Johannesburg, further adopted the five dimensions of complexity model (Bamberger et al. 2015) for a diagnostic evaluation of the city’s implementation of its existing M&E framework. The framework was utilised to understand the elements of the existing M&E models within the city and how they could be better coordinated. The complexity framework provided useful guidance throughout the city’s diagnostic exercise, which drew on desktop research, an online survey, in-depth interviews and workshops.

The five dimensions of complexity consist of: (1) the nature of the intervention; (2) the interactions between the different institutions and stakeholders involved; (3) the context in which the intervention is embedded; (4) the nature of change and causality; and (5) the nature of the evaluation process.

The nature of the intervention: looks at complexity of the objectives, project size, stability of programme design, implementation procedures, number of services/components, technical complexity, social complexity, project duration and whether or not the programme design is well tested.

The interactions between different institutions and stakeholders: examine whether the budget is well defined, the number of funding and implementing agencies, the number of stakeholders and the similarity of their interests, noting that the greater the number of interactions, the more complex the intervention will be.

Embeddedness and the nature of the system: examine the independence of the programme within its wider context and the complexity of the process of behavioural change.

Causality and change: encompass the nature of causal pathways (simple, complex, non-linear), degree of certainty of outcomes and the level of agreement on appropriate actions to address problems.

The evaluation process: assesses the purpose of the evaluation; the time, data and resources affecting the design and implementation of the evaluation; stakeholders involved; and the values and ethics surrounding that.

Dimensions in the model are interrelated, that is, a change in one dimension can create change in another. For an effective M&E system, all blocks within the complexity model have to balance each other. Each dimension has to be studied to ascertain which blocks are more dominant than others or are creating an imbalance between blocks.

The complexity framework offered a good basis for understanding the City of Johannesburg’s M&E system in terms of the five components described above. However, the tool on its own did not provide explicit means of understanding the technical issues that could be affecting the system, including the skill set, the systems and whether people were capacitated in the right ways to sustain the system. For this, the 12 components and six spheres tools (also described in this article) proved useful when used in coordination with it.

Furthermore, it is noteworthy that not all M&E systems or interventions are complex; the framework may therefore work better in some contexts (those encompassing more complex M&E systems) than in others. The framework may thus presume complexity, and systems that look complex may not in fact be so.

Twende Mbele Gender diagnostic tool

The Africa Gender and Development Evaluator’s Network (AGDEN) was commissioned by the Centre for Learning on Evaluation and Results-AA to conduct a diagnostic study to assess the gender responsiveness of the national monitoring and evaluation systems of South Africa, Uganda and Benin. This exercise was part of a multi-country collaboration under the Twende Mbele programme, which aims to strengthen the national M&E systems of the three countries.

The assessment used a diagnostic matrix that consisted of three main dimensions – National M&E Policy, National M&E System and Advocacy. The framework to assess gender responsiveness in the three countries responded to the following six criteria:

  • Gender equality
  • Participation
  • Decision-making
  • Gender budgeting
  • Evaluability, review, and revision, and
  • Sustainability

The tool further assessed any advocacy present to support gender-responsive national evaluation policies. Each of the criteria is discussed in more detail by Jansen van Rensburg and Blaser Mapitsa (2017).

The six spheres diagnostic tool

The six-sphere framework has been discussed by Crawley (2017) and applied by CLEAR-AA. The tool has been used widely in the centre’s East African dialogues as a discussion tool to understand the challenges that legislators face in using evidence for oversight. The framework consists of six spheres laid out hierarchically, moving progressively from one sphere to the next as follows: logistical, technical, contextual, social, political and ideological.

Logistical: This includes availability of time and resources to generate and engage with M&E data.

Technical: This sphere relates to the technical capacity of both evaluators to produce the right kind of evidence and of users of evidence to understand the evidence. The sphere also involves the tools and systems that are in place for M&E information.

Contextual: This involves the structure or hierarchy within an organisation that has an impact on the production and use of M&E information. This sphere also involves linkages and networks (institutional arrangements) between M&E coordinating bodies which affect flow of evidence within an institution, as well as assessment of the evaluation culture within an organisation.

Social/relational: This sphere involves looking at trust and collaboration between evaluation stakeholders which also has an impact on how evaluation information flows within organisations.

Political: This relates to whether or not there is any leadership or buy-in for change within the M&E system which is essential for sustainability of the system within an organisation.

Ideological/value system: This involves the extent to which use of evaluations evidence is part of an organisation’s core value system and is at the heart of the organisation.

This tool encompasses conventional diagnostic and readiness tools in considering technical and institutional elements of M&E systems. Its strengths lie in extending to considerations of the political environment, trust and collaboration between key stakeholders and principles and values around M&E in an organisation.

Experience implementing tools in context

All of the tools discussed above were implemented in complex African public-sector institutions. While there may be no perfect tool, a lesson that emerged strongly from our experience of conducting these diagnostics is that a tool must both include a balanced spectrum of technical, institutional and political perspectives on an M&E system, and build in an understanding of how organisational change takes place.

For the purposes of this article, we delineate three broad components of M&E systems. This delineation was chosen to align with CLEAR-AA’s theory of change and its definition of the problem of using evidence to strengthen policies and programmes for equitable development. Global approaches to M&E systems have also typically included these key elements, as shown in Figure 1, which further demonstrates their significance.

FIGURE 1: The five dimensions of complexity.

These three areas are:

Governance: This includes the culture, values and leadership which promote learning and use of evidence in decision-making.

Institutions: They include the planning, learning and management systems; internal policy frameworks; as well as evaluative processes that transform monitoring into evidence use.

Technical resources: They include data, research, human and financial resources. In addition to these three interrelated components, there is the environment within which each pillar lives, which includes external policy and regulatory frameworks; capacity development structures; causal change mechanisms; and other stakeholder interests, expectations, capacity and so on. These are further elaborated on in the following section.

Figure 2 maps the diagnostic tools CLEAR-AA has empirically tested against these three broad categories of technical, institutional and governance aspects. As shown, the tools used thus far have not struck a fair balance between the key diagnostic elements. Focus has largely been on technical and institutional elements, with little emphasis placed on governance issues. The centre has thus drawn on its experiences with the previous tools to propose a more comprehensive tool (outlined in Figure 3) that attempts to bring in all the key elements of conducting a diagnostic study and to show the interconnectedness between parts of the system.

FIGURE 2: Comparing the elements of monitoring and evaluation systems reflected in Centre for Learning on Evaluation and Results - Anglophone Africa’s tools.

While the pillars of the components themselves are reasonably well articulated across the various tools CLEAR has used, the linkages between them have often fallen through the cracks, and this has constrained our understanding of M&E in the region. Bamberger’s complexity framework has placed itself solidly in this space, and it was a valuable framework of analysis in the City of Johannesburg diagnostic. However, owing to the limitations on the technical capacity of the system within the municipality, it could not serve as a stand-alone tool.

Discussion of tool components

The Centre for Learning on Evaluation and Results’ experience in the field, international best practice and social science theory all point to the need for an M&E systems tool that uses varied data collection processes to better define both the components of an M&E system and the environment in which it operates. Most tools developed to measure M&E systems look exclusively at the system’s components themselves. With an increasing focus in the field of M&E on evaluating complex programmes, or transformative change, there has been a recent shift to focus on the context and results of the system as well. However, in the region, a tool is needed that serves both purposes.

This is in keeping both with CLEAR’s experience in implementing existing tools and with emerging research on indigenous knowledge systems and on regionally adapting tools to epistemological belief systems.

What is presented below is an outline of the key pillars of M&E systems, paired with some of the contextual and environmental factors that drive these components of the system. It is particularly in these areas that CLEAR needs to focus research efforts, as it is our experience that these are often the areas that are more important for M&E capacity development, and that they are often poorly understood in context.

Steps to design a diagnostic tool

Figure 3 illustrates the key components of a diagnostic tool, outlining both the pillars of an M&E system and some of the causal mechanisms for change between each category. This proposal is a first step towards a tool that will be tested and improved in complex African institutions.

FIGURE 3: A balanced model for understanding monitoring and evaluation capacity in Africa.

Figure 3 shows the proposed diagnostic tool, which consists of three main categories: technical, institutional and governance aspects. These arms are broken down into, but not limited to, the following sub-components.

The technical category consists of data systems and information infrastructure; research; human and financial resources; time committed to M&E activities; M&E capacity and skills and existing capacity-building initiatives; as well as quality control for M&E information. The institutional category consists of national M&E policies; internal policies and operational systems; organisational planning systems; stakeholders and collaboration with other institutions to meet the demand for and supply of evaluations; as well as the professionalisation of M&E. Lastly, the governance category involves leadership capability; leadership buy-in and involvement in M&E activities; accountability; transparency; leadership oversight; and participation and representation.
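To make these categories concrete, the sketch below shows one possible way of operationalising the proposed tool as a simple scoring rubric in Python. It is illustrative only: the category and sub-component names follow Figure 3 as described above, but the Category structure, the 0–4 rating scale and the averaging logic are our own assumptions and are not part of the published tool.

    # Illustrative sketch only: one possible encoding of the proposed tool's
    # three arms (Figure 3) as a scoring rubric. The 0-4 scale and the
    # averaging logic are assumptions, not part of the published tool.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Category:
        name: str
        sub_components: list                         # sub-components listed in the article
        ratings: dict = field(default_factory=dict)  # sub-component -> 0..4 rating

        def rate(self, sub_component: str, score: int) -> None:
            if sub_component not in self.sub_components:
                raise ValueError(f'Unknown sub-component: {sub_component}')
            if not 0 <= score <= 4:
                raise ValueError('Rating must be between 0 and 4')
            self.ratings[sub_component] = score

        def average(self) -> float:
            # Unrated sub-components count as 0, flagging gaps in the evidence.
            return mean(self.ratings.get(s, 0) for s in self.sub_components)

    tool = [
        Category('Technical', [
            'Data systems and information infrastructure', 'Research',
            'Human and financial resources', 'Time committed to M&E activities',
            'M&E capacity, skills and capacity building', 'Quality control for M&E information']),
        Category('Institutional', [
            'National M&E policies', 'Internal policies and operational systems',
            'Organisational planning systems', 'Stakeholders and collaboration',
            'Professionalisation of M&E']),
        Category('Governance', [
            'Leadership capability', 'Leadership buy-in and involvement',
            'Accountability', 'Transparency', 'Leadership oversight',
            'Participation and representation']),
    ]

    # Example: rate a few sub-components, then inspect the balance across arms.
    tool[0].rate('Data systems and information infrastructure', 3)
    tool[1].rate('National M&E policies', 2)
    tool[2].rate('Leadership buy-in and involvement', 1)
    for category in tool:
        print(f'{category.name}: {category.average():.2f}')

Any such scoring would, in practice, need to be complemented by a qualitative assessment of the structures and contextual factors that operate between the arms, as discussed next.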

In between these key categories are the structures in place that facilitate organisational change, as well as the contextual and/or environmental factors that operate between all three arms and affect the functioning of an M&E system. This demonstrates the non-linear structure of what drives an M&E system, and draws back to the institutionalist thesis discussed above. In addition to the structures and processes set out by organisations’ M&E systems, organisations are equally exposed to other contextual factors, such as organisational culture and incentive structures, which equally have to be understood when designing M&E systems. The diagnostic tool recommended in this article is designed with this in mind, ensuring that one is cognisant of both the technical and the contextual elements that affect M&E systems, in line with critical systems thinking, which allows us to better understand complex phenomena such as designing M&E systems that are regionally effective.

Different contextual factors contribute to bringing about organisational change. For instance, within the technical arm, organisations ought to move beyond solely providing capacity building through training as an intervention for developing M&E systems, and also look at other environmental factors that may affect technical functioning. These may involve issues around understanding organisational mandates, organisational culture and communication systems. An organisation could have skilled personnel, budget and time (adequate resources) set apart to conduct M&E, but other factors, such as organisational culture and weak or non-existent incentive structures, could affect performance through roles and responsibilities that are not clearly defined.

In practice, having strong institutions involves organisations moving beyond internal and external policy issues, procedures and standards, stakeholders and planning, towards consideration of other contextual factors and interventions that may affect organisational change in that area. These may include shifting focus towards outcomes-based M&E, advocacy and an organisational shared vision for M&E. Similarly, governance within an organisation is further affected by other contextual factors, such as the external policy environment, political willingness and the development context, which may affect leadership buy-in and the championing of M&E activities in an organisation.

It is noteworthy that the elements of the tool are interrelated: connectedness entails recognising that one part of the system can affect another, hence the need to assess each ‘arm’ for an effective M&E system. For instance, an organisation could have skilled M&E people to support a system but lack effective accountability systems, thereby affecting the functionality of the system. This is in line with critical systems thinking, which allows a problem situation to be viewed from a holistic perspective; one element of an M&E system cannot be looked at in isolation from the others if effective organisational change is to be ensured, and the interrelatedness of the different categories of organisational change is crucial.

Conclusion

In CLEAR’s recent experience, there may be no one-size-fits-all tool to measure M&E systems in different contexts. However, there is a trade-off between a diagnostic tool so contextualised that it can only be used once, in response to the specific capacity development need of one organisation, and using one tool in a variety of contexts to build a body of knowledge, test assumptions and improve it over time. Both theoretical and real-world demands are pushing CLEAR in the direction of some degree of standardisation in the tools that we use. However, in this process, we need to be acutely aware of maintaining a research agenda that will allow us to reflect on the results of this tool and adapt it as we gain more experience of testing it.

The Centre for Learning on Evaluation and Results has already developed two methodological reflection papers, the first on the Twende Mbele gender diagnostic study (Jansen van Rensburg & Blaser Mapitsa 2017) and the second on the two different tools employed as part of the City of Johannesburg diagnostic (Blaser Mapitsa & Korth 2017), and we need to continue this line of enquiry. In addition, Twende Mbele is working on a survey of performance management culture in Benin, Uganda and South Africa, which targets one of the components of this diagnostic tool. We need to ensure that further research continues to target the causal linkages between different elements of M&E systems. This will allow us to build an evidence base that can strengthen our theory of change around evaluation capacity development.

Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

Both authors made equal contributions to the article. C.B.M. led in the conceptual design and structure of the article, and had applied several of the tools analysed through her programmatic role at CLEAR-AA. L.K. performed most of the data analysis, including of external tools. They worked together to identify the best tools to analyse in the article, and to reach a common understanding about a framework of analysis.

References

Bamberger, M., Rugh, J. & Mabry, L., 2011, Real world evaluation: Working under budget, time, data, and political constraints, Sage, Thousand Oaks.

Bamberger, M., Vaessen, J. & Raimondo, E. (eds.), 2015, Dealing with complexity in development evaluation: A practical approach, Sage, Thousand Oaks.

Bammer, G., 2003, ‘Integration and implementation sciences: Building a new specialization’, Ecology and Society 10(2), 95–107.

Blaser Mapitsa, C. & Korth, M.T., 2017, ‘Designing diagnostics in complexity: Measuring technical and contextual aspects in monitoring and evaluation systems’, African Evaluation Journal 5(1), 1–5. https://doi.org/10.4102/aej.v5i1.196

Cloete, F., 2009, ‘Evidence-based policy analysis in South Africa: Critical assessment of the emerging government-wide monitoring and evaluation system’, Journal of Public Administration 44(2), 293–311.

Compton, D.W., Baizerman, M. & Stockdill, S.H. (eds.), 2002, The art, craft, and science of evaluation capacity building, American Evaluation Association, New Directions for Evaluation, Jossey-Bass, San Francisco, CA.

Crawley, K.D., 2017, ‘The six-sphere framework: A practical tool for assessing monitoring and evaluation systems’, African Evaluation Journal 5(1), 1–8. https://doi.org/10.4102/aej.v5i1.193

Development Bank of Southern Africa, African Development Bank & World Bank, 2000, Monitoring and evaluation capacity development in Africa, Development Bank of Southern Africa, Johannesburg.

Eval Partners, EvalAgenda 2020, https://www.evalpartners.org/global-evaluation-agenda

Feldman, M.S. & Pentland, B.T., 2003, ‘Reconceptualizing organizational routines as a source of flexibility and change’, Administrative Science Quarterly 48(1), 94–118. https://doi.org/10.2307/3556620

Gorges, M.J., 2001, ‘New institutionalist explanations for institutional change: A note of caution’, Politics 21(2), 137–145. https://doi.org/10.1111/1467-9256.00145

Greene, J.C., Caracelli, V.J. & Graham, W.F., 1989, ‘Toward a conceptual framework for mixed-method evaluation designs’, Educational Evaluation and Policy Analysis 11(3), 255–274. https://doi.org/10.3102/01623737011003255

Greenwood, R., Suddaby, R. & Hinings, C.R., 2002, ‘Theorizing change: The role of professional associations in the transformation of institutionalized fields’, Academy of Management Journal 45(1), 58–80. https://doi.org/10.2307/3069285

Jansen van Rensburg, M.S. & Blaser Mapitsa, C. 2017, ‘Gender responsiveness diagnostic of national monitoring and evaluation systems – methodological reflections’, African Evaluation Journal 5(1), 1–9. https://doi.org/10.4102/aej.v5i1.191

Kusek, J.Z. & Rist, R.C., 2001, ‘Building a performance-based monitoring and evaluation system: The challenges facing developing countries’, Evaluation Journal of Australasia 1(2), 14–23.

Kusek, J.Z. & Rist, R.C., 2004, Ten steps to a results-based monitoring and evaluation system: A handbook for development practitioners, World Bank Publications, Washington, DC.

Mackay, K., 1998, Public sector performance: The critical role of evaluation, World Bank Operations Evaluation Department, Washington, DC.

Patton, M.Q., 2011, Developmental evaluation: Applying complexity concepts to enhance innovation and use, Guilford Press, New York.

Powell, W.W. & DiMaggio, P.J. (eds.), 2012, The new institutionalism in organizational analysis, University of Chicago Press, Chicago, IL.

Seasons, M., 2008, ‘Monitoring and evaluation in municipal planning: Considering the realities’, Journal of the American Planning Association 69(4), 430–440. https://doi.org/10.1080/01944360308976329

Tarsilla, M., 2014, ‘Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward’, African Evaluation Journal 2(1), 1–13. https://doi.org/10.4102/aej.v2i1.89


 
