About the Author(s)


Abdourahmane Ba
Business Science Institute, Institut d’administration des entreprises (IAE), University Lyon 3 Jean Moulin, Lyon, France

Business Science Institute, Wiltz, Luxembourg

Citation


Ba, A., 2021, ‘How to measure monitoring and evaluation system effectiveness?’, African Evaluation Journal 9(1), a553. https://doi.org/10.4102/aej.v9i1.553

Original Research

How to measure monitoring and evaluation system effectiveness?

Abdourahmane Ba

Received: 16 May 2021; Accepted: 17 Aug. 2021; Published: 29 Sept. 2021

Copyright: © 2021. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Although the roadblocks to development achievement in Africa arise most visibly from resource scarcity, lack of security and good governance, or poor economic approaches, they also stem from ineffective development management practices. The World Bank’s 2007 assessment of monitoring and evaluation (M&E) systems revealed limited effectiveness, particularly in the cases studied in Africa.

Objective: This research investigates monitoring and evaluation system effectiveness as a development management tool and develops its measurements. It proposes a framework that helps to better understand the success factors of an effective M&E System and how they contribute to improved development management.

Methods: A trifold approach comprising three iterations was used: a literature review, case studies, and a survey. The first revisited the most relevant literature on development management and performance monitoring systems; the second was a qualitative study of three cases in the West Africa region; the third was a survey of a sample of practitioners and managers in West Africa, whose data were analysed using correlations and regressions.

Results: There are significant linkages between ‘M&E-System Quality’, ‘M&E-Information Quality’, and ‘M&E-Service Quality’. The results highlighted that an organisation’s ‘Results-Based Management Practice’, its ‘Knowledge and Information Management Culture’, including learning, and its ‘Evidence-Based Decision-Making Practice’ are directly influenced by an effective M&E System.

Conclusions: An effective M&E System contributes greatly to expanding ‘Improved Policy and Program Design’, ‘Improved Operational Decisions’, ‘Improved Tactical and Strategic Decisions’, and ‘Improved Capability to Advance Development Objectives’.

Keywords: monitoring and evaluation; results-based management; knowledge and information management; evidence-based decision making; development management.

Introduction

With the increasing need for accountability, there is growing attention to evidence-based decision-making and measured risk-taking in development management among donors and the governments of developing countries, which brings the effective monitoring and evaluation (M&E) system back to the crossroads of aid effectiveness through the Results-Based Management (RBM) approach (Dabelstein & Patton 2005). Nelson highlighted that the ineffectiveness of projects and policy prescriptions in achieving their objectives results from their inability to model the complexity of the socio-economic system that they attempt to address (Nelson 2014).

Effective management approaches are crucial, and Faguet pointed out that development management is an informed attempt to move institutions and organisations to higher degrees of performance (Faguet 2011). LeBel highlighted that if any development policy is to succeed, it must be gauged by an assessment and improvement of linkages and operations across social, economic, and political agencies (LeBel 2011).

North stressed that the direction of economic change, including growth, is shaped by the incentive structure that institutions establish (North 1991). Mantzavinos, North and Shariq believe that explaining change, especially social, political, economic and organisational change, is one of the greatest challenges of the social sciences (Mantzavinos, North & Shariq 2004). And North, in ‘Economic performance through time’, established a clear linkage between the expansion of knowledge and the development process (North 1994:362).

Mackay highlighted the significance of the M&E System in development performance measurement (Mackay 2010). He established that an effective M&E System helps ensure that the performance of government policies, programmes, and projects is measured. He also highlighted that the M&E System provides data and evidence on the performance of donors who support the government’s development actions (Mackay 2010).

As a decision support system, M&E System plays a key role in public policy management. Andrews (2010) argued that:

[T]he tools of rational policy analysis have evolved over a century towards a richer conception of rationality that acknowledges substantive and procedural dimensions, rational applications of reasonable decision rules … Those tools must appropriately blend facts and values as they produce actionable policy advice. (p. 167)

In the early 1990s, Maddock posed a thought-provoking question to the development and aid industry on the effectiveness of the M&E System: ‘Has project monitoring and evaluation worked?’ (Maddock 1993). Unachievable objectives, weak reporting systems, and lack of timeliness in information creation processes, among other factors, were identified as primary constraints hindering the effectiveness of the M&E System (Maddock 1993).

Barton, through an integrated Information System analysis, identified six weaknesses that undermine the effectiveness of the M&E System: poor development programme design, human resources development needs, quantitative bias, low priority for an information system, involvement limited to data collection, and poor feedback (Barton 1997).

Improving development management strategies was at the forefront of the Monterrey Conference in 2002. As the OECD clarified, increased commitment to policies and actions that promote economic growth and poverty reduction in developing countries was the main outcome of the 2002 Monterrey Conference, which called for a new partnership promoting shared responsibility and more attention to management strategies that lead to tangible development results (OECD 2006).

Yet scholars and development practitioners still lack rigorous scientific and managerial standards for assessing the effectiveness of the M&E System that would help improve development management effectiveness. This scarcity of evidence in the measurement of M&E System effectiveness may lead to greater confusion and weak assessment of the results and achievements of development programmes, primarily in Africa and in low-income countries in general. As a result, the waste of resources on unsuccessful learning systems may lead to inefficiencies in development management. This research attempts to develop a model and its measurements that will help better understand the M&E System effectiveness framework and how its effectiveness helps improve development management.

How to model M&E System effectiveness?

Monitoring and Evaluation Systems contribute knowledge-base information on development programme results for measured risk-taking and improved decision-making (Acquaah, Zoogah & Kwesiga 2013). Monitoring and Evaluation is a performance management tool that promotes accountability through knowledge generation and learning. It also generates strategic information for improved learning and decision-making (Parker 2008).

The learning and information management system is highly linked to the effectiveness of the M&E System. An M&E System is then a learning platform for improved decision-making to advance development objectives. Briner, Denyer and Rousseau (2009) argued:

Evidence-based management is about making decisions through the conscientious, explicit, and judicious use of four sources of information: practitioner expertise and judgment, evidence from the local context, a critical evaluation of the best available research evidence, and the perspectives of those people who might be affected by the decision. (p. 19)

Improved decision-making is key to organisations that manage development programmes. Decision-making based on reliable evidence generated through an effective M&E System is one of the objectives of the M&E System that organisations and institutions design to manage and implement development programmes efficiently. Firstly, this research evaluates the linkages between evidence-based decision-making processes at the organisational level and the effectiveness of the organisation’s M&E System.

Secondly, this research investigates relations between M&E System and RBM practice, knowledge and information management culture, and evidence-based decision-making process. Thirdly, it analyses whether these three organisational capabilities reflect the effectiveness of the M&E System of an organisation. Finally, the linkages between M&E System and improved development programmes and the capability of organisations to advance development objectives were investigated. All these linkages that the research evaluates are summarised in the hypothesised model below (Figure 1).

FIGURE 1: Potential dimensions of effective M&E System at the organisational level.

In 1996, Tom Barton defined an M&E System as an integrated Information System. He stressed that M&E System is an information system devoted to the selection, collection, analysis, and use of information on development projects and programmes (Barton 1997). As such, to analyse M&E effectiveness or success, this research used an integrated Information System approach.

In looking at the Information System literature, the DeLone and McLean Information System Success model was applied and carefully and rigorously operationalised in the field of M&E of development programmes. ‘In the D&M IS Success Model, “systems quality” measures technical success; “information quality” measures semantic success; and “use, user satisfaction, individual impacts,” and “organizational impacts” measure effectiveness success’ (DeLone & McLean 2003).

Research method and design

‘To advance management theory’, Edmondson and McManus argued that scholars are increasingly interested in field research where real people, real problems, and real organisations are studied (Edmondson & McManus 2007). Knowledge from practice is the underlying approach for this research, which used both quantitative and qualitative methods. Although critical gaps did not allow for a purely experimental design, this research builds on a mix of practice-based, exploratory, descriptive and critical approaches to create knowledge. Nicolini highlighted that, in a practice-based perspective, knowledge and organisational phenomena are intimately related (Nicolini 2011).

The research used a trifold approach – Literature, Case Studies, and Surveys – which comprised three iterations or evidence-based building blocks, within each of which rigorous methods were used to draw findings and conclusions, as well as to generate managerial recommendations that intend to support improved development management practice (Table 1).

TABLE 1: Summary of the research methods and design.
Iteration 1: Literature review

Data collection through the literature and documentation involved reviewing relevant articles and scientific reports published in scholarly journals and on development platforms on development management, performance monitoring, and related fields. Documented evidence was generated on the effectiveness and robustness of the M&E System framework. The M&E System was then analysed through Barton’s (1997) definition:

The M&E system is a form of ‘information system’, which is a broad term for information selection, gathering, analysis and use. It can be described as a logic chain of linked ideas starting (and continuing) with information users. (p. 9)

Relevant articles were selected, as well as relevant technical reports and conference proceedings from the main M&E learning events of the last 20 years in the world of development management. Findings from these documents were critically analysed to support the different conclusions in building the M&E System effectiveness framework.

The six dimensions of the updated D&M Information System Success Model (DeLone & McLean 2003) – System Quality, Information Quality, Service Quality, Usage, User Satisfaction, and Net Benefits – guided the design of the research model. This helped generate a well-structured model for the next steps of the research. The model was operationalised into pillars and sub-dimensions based on the general D&M IS Success Model approach.
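To make this guiding correspondence concrete before the model is tested, the sketch below restates the mapping as a simple data structure, following the interpretation reflected in the article’s later section headings (‘intention-to-use’, ‘use’, ‘user satisfaction’, ‘net benefits’); the dictionary form and Python rendering are expository assumptions, not artefacts of the original study.

```python
# Illustrative mapping of the D&M IS Success dimensions onto the M&E System
# effectiveness framework, as reflected in the article's section headings.
# The dictionary itself is an expository device, not the study's artefact.
DM_TO_ME_FRAMEWORK = {
    "System Quality":      "M&E-System Quality",
    "Information Quality": "M&E-Information Quality",
    "Service Quality":     "M&E-Service Quality",
    "Intention to Use":    "Results-Based Management Practice",
    "Use":                 "Knowledge and Information Management Culture",
    "User Satisfaction":   "Evidence-Based Decision-Making Practice",
    "Net Benefits":        "Improved decisions, design and capability to advance development",
}
```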

The review helped analyse the growing need for M&E System to generate meaningful information for decision-makers to advance development in Africa. Through a deep analysis of what was reviewed, the different components of an effective M&E System were mapped to design a research model (Figure 1) that served as a basis for testing and applying the M&E System effectiveness in the next iterations.

A mapping of the different descriptions from the analysis of M&E System effectiveness against organisations’ and institutions’ capabilities has shown that the core pillars of an effective M&E System contribute to (1) ‘Results-Based Management Practice’, (2) ‘Knowledge and Information Sharing Culture’, and (3) ‘Evidence-Based Decision-Making Practice’. This process of constructing the dimensions and sub-dimensions of an effective M&E System led to an in-depth review of the existing knowledge on these three organisational capabilities to generate valid measurements.

The analysis of the M&E System Net-Benefits and how they apply to the development management field led to a discussion on improved development policy design and formulation and improved tactical, operational, and strategic decisions during development programme design and implementation. The analysis finally led to a discussion on the contribution of effective M&E System to the capability of organisations and institutions to advance the development objectives that would positively change people’s welfare.

Iteration 2: Case studies

This second iteration is a qualitative study of three cases based on a representative sample of development programmes selected from experience within the West Africa region. It operationalised the first three dimensions in the M&E system framework: ‘M&E-System Quality’, ‘M&E-Information Quality’, and ‘M&E-Service Quality’.

According to Savall and Zardet, a case study is recommended when the limits separating a phenomenon from its context are unclear, when multiple sources are used, and when referring to previous developments is necessary (Savall & Zardet 2011).

Yin (2014) also clarifies, in the fifth edition of ‘Case Study Research: Design and Methods’, that:

[A]rticulating a ‘theory’ about what is being studied and what is to be learned helps to strengthen a research design when doing case study research … You can then examine the quality of your emerging design in relation to four tests commonly used in social science research: (a) construct validity, (b) internal validity, (c) external validity, and (d) reliability. (p. 28)

The case studies focused on how the M&E System works effectively at local, national, and regional levels. One approach would have been to select cases in the West African region randomly; however, that could have led the research to examples not necessarily relevant to the topics reviewed. Another approach would have been to rely on existing case documents or on experts’ and managers’ recommendations in selecting the cases. While those approaches would have increased neutrality in the selection of the cases, they would not necessarily have been relevant enough to show what needed to be captured in this process.

The cases were selected from our experience in the development field in the West African region through a reasoned selection approach. The main criteria were the level of effective coordination of development actions, the stage of development and implementation of the M&E System, the coverage and domains (infrastructure development, agriculture, poverty reduction, etc.) of the projects and programmes, and the type and scope of development partners in the region.

In analysing the measurements for the ‘M&E-System Quality’ dimension of an effective M&E System, the case of the Office du Niger Contract Plan 2008–2012 M&E System was considered. The design of the Office du Niger M&E System was an important process, as it involved several ministry departments, the local government in Segou, Mali, the various zones of the irrigation system, and all development partners involved in the programmes. The research prioritised content analysis of various reports and documents, interview guides, and data triangulation to generate the measurements relevant to building the ‘M&E-System Quality’ dimension of the effective M&E System.

The same approach was used to study the M&E approach of the National Agency for Rural Electrification in Senegal (ASER), which is in charge of the rural electrification programme of the Government of Senegal. The ASER M&E approach was studied to clarify and define the measurements of the ‘M&E-Information Quality’ dimension. The various documents and interviews conducted with the main technical staff in the agency helped generate relevant information that was scrutinised using content analysis and mapping of the key sub-dimensions of ‘M&E-Information Quality’. This approach guided the case study and provided findings on the relevant measurements for the ‘M&E-Information Quality’ dimension of the proposed framework for M&E System effectiveness.

The third case relates to the Regional Program Water and Food Security Initiative (IESA) in Africa, funded by the Spanish Cooperation (AECID) and implemented by the United Nations Food and Agriculture Organization (FAO). As the programme was implemented in five countries of West Africa, studying the creation and distribution of M&E information among all interested stakeholders at national, local, and regional levels, and with FAO headquarters in Rome, made it possible to understand how an M&E System can deliver quality products and services that satisfy various types of users’ and decision-makers’ needs. The case study analysed the sub-dimensions of ‘M&E-Service Quality’ of an effective M&E System. The approach prioritised content analysis and other qualitative methods, with a review of relevant information, to generate findings on the relevant measurements for the sub-dimensions of the ‘M&E-Service Quality’ dimension.

Iteration 3: Sample survey and retro-feedback

The third iteration used a questionnaire built in line with the M&E System effectiveness framework, its dimensions, sub-dimensions, and measurements. A Maximum Variation Sampling approach was used to select 50 programmes or projects (ongoing or closed no more than 2 years earlier) in the ECOWAS countries. The following criteria were adopted: project duration (2, 3, 5 years, more than 5 years), level of implementation (regional, national, and local), and size of operations and outreach (small scale, medium scale, and large scale). Also, to maximise the differences in stakeholders’ feedback, the sample covered all the following development sectors: Agriculture, Infrastructures, Energy, Poverty – Social Impact, Gender Empowerment, Governance and Democracy, Food Security, Rural Development, Education – Research, Health, Trade, and Investment.
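As an illustration of how a maximum variation sample along these criteria could be assembled programmatically, the sketch below stratifies a hypothetical project register and draws one project per stratum. The file name and column names are assumptions introduced for the example; nothing here reproduces the study’s actual selection tooling.

```python
# Minimal sketch of maximum variation sampling over a hypothetical project
# register; 'project_register.csv' and all column names are assumed.
import pandas as pd

projects = pd.read_csv("project_register.csv")  # hypothetical register

# Stratify by the stated selection criteria, then draw one project per
# stratum so the sample spans the widest possible variation, capped at 50.
sample = (
    projects
    .groupby(["sector", "level", "duration", "scale"])
    .sample(n=1, random_state=42)
    .head(50)
)
print(sample[["project_id", "sector", "level", "duration", "scale"]])
```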

The questionnaire was built around the key components and variables of the M&E System Effectiveness framework designed in Iteration 1 and operationalised in Iteration 2.

Interviews were organised through Skype meetings and online questionnaires deployed through the SPHINX platform. To ensure validity and reliability, the questionnaire was modelled with five-level Likert scales (Bertram 2006) for each sub-dimension of the M&E System effectiveness framework.

For each measure on the Likert scale, a benchmark for the highest value was defined and explained based on the M&E System effectiveness criteria and indicators identified in the previous iterations. This benchmarking approach helped ensure that the Likert scales were applied in a consistent manner, generating high-quality information, reliable scores, and minimal subjectivity in the responses.
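A benchmark-anchored item of the kind described above might look like the following sketch; the measure is taken from the ‘Set-up quality’ list earlier in the article, but the anchor wordings are invented for illustration and are not taken from the study’s questionnaire.

```python
# Hypothetical benchmark-anchored Likert item. The measure comes from the
# 'Set-up quality' list above; the anchor wordings are invented examples.
LIKERT_ITEM = {
    "measure": "Existence of quality baseline information",
    "anchors": {
        5: "Benchmark: complete, validated baseline available before start-up",
        4: "Baseline available with minor gaps",
        3: "Partial baseline collected after programme start-up",
        2: "Fragmentary baseline data only",
        1: "No baseline information",
    },
}
```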

Data from the survey were analysed using correlations and regression models with statistical software, including SPHINX and SPSS, as well as the MS Office package (Excel). The data analysis verified the applicability of the framework and how it operates at different levels of organisations and institutions: regional, national and local. The analysis also produced findings that constituted the basis for different adaptations of the model, depending on the level and depth of the M&E System effectiveness assessment and on the sector or area of the development programme.
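For readers who wish to reproduce this style of analysis outside SPHINX, SPSS or Excel, the following is a minimal sketch of the correlation step in Python; the file and the dimension-score column names are assumptions standing in for the survey data.

```python
# Sketch of the correlation analysis; each row is one surveyed programme,
# and each column is an assumed dimension score (mean of its Likert items).
import pandas as pd

scores = pd.read_csv("survey_scores.csv")  # hypothetical survey extract
dims = ["sys_quality", "info_quality", "service_quality",
        "rbm_practice", "kim_practice", "ebdm_practice"]

# Pairwise Pearson correlations between the framework dimensions.
print(scores[dims].corr(method="pearson").round(2))
```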

Ethical considerations

This article followed all ethical standards for research without direct contact with human or animal subjects.

Results

Table 2 shows how the D&M IS dimensions are operationalised and used in the M&E System effectiveness framework in the subsequent sections.

TABLE 2: D&M IS Success Model guiding the M&E System effectiveness framework design.
Operationalisation of M&E system dimensions

The ‘M&E-System Quality’ dimension of the proposed effective M&E System model, in line with the case of the Office du Niger Contract Plan programme, is captured through tangible sub-dimensions, including (1) Design Quality, (2) Set-Up Quality, (3) Operations Quality, (4) Maintenance Quality, and (5) Resource Quality. Measurements were proposed for each sub-dimension. The main outcome is a simplified set of factors in each sub-dimension, through the selection of significant, quasi-independent variables. The measurements below were operationalised under these five sub-dimensions of ‘M&E-System Quality’.

Design quality:

  • Existence of a high-quality M&E plan or procedures prior to programme implementation.
  • Quality and simplicity of tools and techniques for data collection and analysis.
  • Existence of capacity-building plan on M&E.

Set-up quality:

  • Timely availability of resources to implement M&E activities.
  • Tests of M&E tools and techniques.
  • Existence of quality baseline information.

Operations quality:

  • Existence of M&E data exchange platform accessible to core programme stakeholders.
  • Capacity of M&E team to visit all programme zones of intervention for data collection, feedback, and analysis.
  • Existence and functionality of decentralised M&E sites to cover programme zones of intervention.

Maintenance quality:

  • Frequency of reviews and assessments of the M&E System design and operations.
  • Percentage of M&E agents benefitting from staff development programme.
  • Technical support received from specialised M&E agencies or other organisations.

Resource quality:

  • Qualifications and level of competency of programme M&E managers.
  • Qualifications and competency of other M&E human resources (enumerators, consultants, etc.).
  • Existence of appropriate financial resources as per the approved M&E plan to conduct M&E activities.

The ‘M&E-Information Quality’ dimension of the effective M&E System model proposed in line with the case of the Senegalese Rural Electrification Program (ASER) is captured through tangible sub-dimensions, including (1) Input information quality, (2) Output information quality, (3) Outcome information quality, (4) Performance information quality, and (5) Risk management information quality. The measurements below were operationalised under these five sub-dimensions of ‘M&E-Information Quality’.

Input information quality:

  • Programme analytical accounting system.
  • Approach for conversion of programme beneficiaries’ contributions.
  • Programme inputs indicators meet data quality assessment standards.

Output information quality:

  • Programme output indicators cover the programme results.
  • Output indicators meet data quality assessment standards.
  • Programme output indicators align with the sector results.

Outcome information quality:

  • Percentage of programme outcome indicators that are/were informed in a timely manner.
  • Programme outcome indicators coverage of the programme intermediate results.
  • Quality of information generated by outcome indicators on intermediate changes at the beneficiaries’ level.

Performance information quality:

  • Availability of quality information for the analysis of programme efficiency.
  • Availability of quality information for the analysis of programme sustainability.
  • Capacity of the M&E System to turn lessons learned and best practices into meaningful information.

Risk management information quality:

  • All the assumptions and risks in the programme logical framework are converted into meaningful risk mitigation or attenuation milestones that are monitored and reported on a regular basis.

The ‘M&E-Service Quality’ dimension of the proposed effective M&E System model, in line with the case of the Regional Program Water and Food Security Initiative (IESA), is captured through tangible sub-dimensions, including (1) Information Availability, (2) Information Accessibility, (3) System Responsiveness, (4) System Flexibility, and (5) System Sustainability. The measurements below were operationalised under these five sub-dimensions of ‘M&E-Service Quality’ (a scoring sketch follows these lists).

M&E information availability:

  • Capacity for timely generation of programme M&E mandatory reports.
  • Existence of knowledge-sharing system on M&E information.
  • Existence of communication tools to share M&E data.

M&E information accessibility:

  • Percentage of programme managers having timely access to evidence-based information.
  • Access to M&E data for the general public, media, and researchers.
  • Technical user-friendliness of M&E data for all users.

M&E System responsiveness:

  • Timely availability of M&E data upon request by programme stakeholders.
  • Capacity to carry out specific M&E studies requested by the programme managers on a timely basis.
  • Capacity of the M&E System to respond to sectorial ministry information requests for M&E data.

M&E System flexibility:

  • Capability of the M&E System to integrate efficiently new national directives and/or international standards.
  • Ability of the M&E System to quickly integrate new information needs following programme revisions.

M&E System sustainability:

  • Capacity of the M&E System to remain fully functional when major changes occur in the M&E team.
  • Capacity of the programme managers and other stakeholders to access M&E information with minimum technical support.
  • Ability to use the M&E data after programme closure.
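The sketch below shows one way the sub-dimension lists above could be encoded and scored; the item keys are shorthand for the measurements listed, and the equal-weight aggregation is an assumption for illustration, as the article does not prescribe an aggregation rule.

```python
# Sketch: scoring the 'M&E-Service Quality' dimension from its sub-dimension
# Likert items. Item keys are shorthand for the measurements listed above;
# equal weighting is an assumption, not a rule stated in the article.
from statistics import mean

SERVICE_QUALITY = {
    "availability":   ["timely_reports", "knowledge_sharing", "comms_tools"],
    "accessibility":  ["manager_access", "public_access", "user_friendly"],
    "responsiveness": ["data_on_request", "ad_hoc_studies", "ministry_requests"],
    "flexibility":    ["new_directives", "new_info_needs"],
    "sustainability": ["staff_turnover", "low_support_access", "post_closure_use"],
}

def dimension_score(ratings: dict) -> float:
    """Mean of sub-dimension means, each item rated on a 1-5 Likert scale."""
    return mean(
        mean(ratings[item] for item in items)
        for items in SERVICE_QUALITY.values()
    )

# Hypothetical ratings for one programme:
ratings = {item: 4 for items in SERVICE_QUALITY.values() for item in items}
print(dimension_score(ratings))  # -> 4.0
```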
Relation between dimensions of effective M&E system

The analysis examined the relationships between the three dimensions, with ‘M&E-Service Quality’ as the dependent variable and ‘M&E-System Quality’ and ‘M&E-Information Quality’ as independent or explanatory variables.

Equation of model

‘M&E-Service Quality’ = 1.90 + 0.40 × ‘M&E-System Quality’ + 0.53 × ‘M&E-Information Quality’

Quality of estimate: The model accounts for 74.70% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.86.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 63.49.

P-value of F: p(F) < 0.01.
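As a hedged illustration of how the reported statistics (variance explained, multiple correlation R, Fisher’s F and their p-values) relate to an ordinary least squares fit, the sketch below estimates the same specification with statsmodels; the file and column names are assumptions, so the numbers will only match when run on the study’s own data.

```python
# Sketch of the first model: 'M&E-Service Quality' regressed on
# 'M&E-System Quality' and 'M&E-Information Quality' (assumed column names).
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("survey_scores.csv")  # hypothetical survey extract
fit = smf.ols("service_quality ~ sys_quality + info_quality", data=scores).fit()

print(fit.params)            # intercept and slopes (cf. 1.90, 0.40, 0.53)
print(fit.rsquared)          # variance explained, reported as 74.70%
print(fit.rsquared ** 0.5)   # multiple correlation R, reported as 0.86
print(fit.fvalue, fit.f_pvalue)  # Fisher's F and its p-value
```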

M&E system and results-based management (intention-to-use)

Results-based management is assessed through the criteria defined by Barends, Rousseau and Briner. Their six dimensions for assessing evidence-based management practice are operationalised to measure the sub-components of the RBM dimension in the proposed M&E System effectiveness framework. The six dimensions are as follows (Barends, Rousseau & Briner 2014):

  • Asking: transforming a practical issue or problem into an answerable question.
  • Acquiring: systematically searching for and retrieving the evidence.
  • Appraising: critically judging the trustworthiness and relevance of the evidence.
  • Aggregating: weighing and pulling together the evidence.
  • Applying: incorporating the evidence into the decision-making process.
  • Assessing: assessing the outcome of the decision taken.
Equation of model

RBM Practice = 1.89 + 0.37 × ‘M&E-Information Quality’

Quality of estimate: The model accounts for 36.40% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.60.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 25.19.

P-value of F: p(F) < 0.01.

M&E system and knowledge and information management (use)

As clarified by Nonaka, Toyama and Konno (2000), ‘There are two types of knowledge: explicit knowledge and tacit knowledge’ (p. 7). A participatory M&E System strengthens tacit knowledge as embedded in action, day-to-day activities, procedures, and other values and beliefs. The explicit knowledge within an organisation is centralised and managed through the existing M&E System. That system organises data collection and analysis at various levels and in learning platforms to generate meaningful feedback and information for measured risk-taking.

What are the key factors of development organisations’ capabilities that are influenced by a process of good learning, knowledge, and information management complemented by an effective M&E System? Anantatmula and Kanungo, in research published in 2006, explained that the ultimate goal of knowledge management is to transform knowledge learning, sharing, and use into competitive advantage via enhanced organisational performance (Anantatmula & Kanungo 2006). They developed six criteria to characterise knowledge and information management:

  • Sharing best practices.
  • Improved productivity.
  • Enhanced quality.
  • Improved employee skills.
  • Improved communication.
  • Enhanced collaboration.
Equation of model

KIM Practice = −1.24 + 0.17 × ‘M&E-System Quality’ + 0.30 × ‘M&E-Service Quality’

Quality of estimate: The model accounts for 78.26% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.88.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 77.38.

P-value of F: p(F) < 0.01.

M&E system and evidence-based decision making (user satisfaction)

An effective M&E System plays a critical role in improving evidence-based decision-making in development organisations. Choo showed in the 1990s the importance of an organisation’s rational behaviour in achieving higher-quality programme performance (Choo 1996).

In analysing the effectiveness of the decision-making process, Schilling et al. in 2007 studied dimensions of the decision-making process that covered (1) The information processors, (2) The approach to processing information, and (3) The results-oriented dimensions. They developed eight criteria to characterise evidence-based decision-making under these three dimensions (Schilling, Oeser & Schaub 2007):

  • General Participation.
  • Top-down versus Bottom-up.
  • Quality of Stakeholder Information.
  • Transparency and Comprehensibility.
  • Rational-Based versus Intuitive-Based.
  • Quality of Information Exchange.
  • Creativity.
  • Strategic Insights.

In the proposed model for M&E System effectiveness, the eight criteria of decision process effectiveness, as defined by Schilling et al., are used to measure the factors of the evidence-based decision-making component of an organisation’s capability influenced by an effective M&E System.

Equation of model

EBDM Practice = −0.93 + 0.57 × ‘M&E-Service Quality’

Quality of estimate: The model accounts for 66.51% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.82.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 87.39.

P-value of F: p(F) < 0.01.

Outcomes of an effective M&E system (net benefits)

Weak development management capabilities may hinder effective policy-making in developing countries (Young 2005). Better RBM increases the capabilities of development organisations and institutions to define well-aligned policies and prepare well-focused programmes that contribute greatly to improved people’s welfare.

Operational and tactical adjustments are essential when made in a participatory approach, mainly when they are based on feedback from field programmes and analysis of the programme’s output indicators and financial inputs. These adjustments, which originate from what is learned through the programme’s M&E System and the routine monitoring of programme activities, are key to the success of development policy implementation.

Over a programme’s implementation, greater adjustments may be needed at the programme outcome level when the initial outcomes have to be revised because of the economic context or programme performance. Development organisations may be called upon to adjust an ongoing programme when more in-depth knowledge of the beneficiaries’ status and their environment becomes available. These adjustments at the level of a programme’s intermediate results and outcomes are tactical adjustments meant to correct the programme’s direction towards the expected changes at the beneficiaries’ level.

Equation of model

NETBENEF01 = 0.90 + 0.12 × RBM Practice

Quality of estimate: The model accounts for 29.24% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.54.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 18.19.

P-value of F: p(F) < 0.01.

‘Improved Policy and Program Design’ (NETBENEF01) is a function of RBM practice at the organisation level. Results-Based Management is a development management approach that combines planning, monitoring and evaluation methods and techniques.

Equation of model

NETBENEF02 = 0.03 + 0.13 × EBDM Practice

Quality of estimate: The model accounts for 53.78% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.73.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 51.20.

P-value of F: p(F) < 0.01.

The analysis has shown that among the three dynamic capabilities of organisations to manage development actions, only ‘Evidence-Based Decision-Making Practice’ has a significant direct influence on ‘Improved Operational Decisions’ (NETBENEF02). This benefit or outcome of effective M&E System is greatly influenced by the capability of the organisation to build decisions on rigorous evidence.

Equation of model

NETBENEF03 = −0.34 + 0.11 × RBM Practice + 0.06 × EBDM Practice

Quality of estimate: The model accounts for 54.90% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.74.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 26.17.

P-value of F: p(F) < 0.01.

The analysis has shown that ‘Improved Tactical Decisions’ (NETBENEF03), that is, decisions touching the programme-specific and intermediary results, made at any time during programme implementation and especially during the mid-course review, depend on two dynamic capabilities of the organisations in charge of development actions, namely ‘Results-Based Management Practice’ and ‘Evidence-Based Decision-Making Practice’.

Equation of model

NETBENEF04 = 0.14 + 0.06 × RBM Practice + 0.08 × EBDM Practice

Quality of estimate: The model accounts for 50.16% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.71.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 21.63.

P-value of F: p(F) < 0.01.

Both tactical and strategic decisions need ‘Results-Based Management Practice’ and ‘Evidence-Based Decision-Making Practice’. These two capabilities are critical for programme managers and different stakeholders to take the right decision at tactical and strategic levels.

Equation of model

NETBENEF05 = −0.35 + 0.09 × RBM Practice + 0.08 × KIM Practice

Quality of estimate: The model accounts for 53.75% of the variance of the variable to be explained.

Coefficient of the multiple correlation: R = 0.73.

P-value of R: p(R) < 0.01.

Coefficient of Fisher: F = 24.99.

P-value of F: p(F) < 0.01.

This analysis explored the linkages between the three capabilities needed to manage effectively development actions, and the ‘Improved Capability to Advance Development Objectives’.


Overall, the analyses, using correlations and regressions, led to the revised M&E System Effectiveness Framework, showing the tangible, quantified relationships between the dimensions of the M&E System, the development management capabilities, and the M&E System net benefits. The major outcome is the revised quantified model shown in Figure 2. The quantities represent the standardised coefficients from the multiple regression analyses.

FIGURE 2: The Revised M&E System Effectiveness Framework.
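One common way to obtain standardised coefficients like those reported in Figure 2 is to z-score every variable before fitting; the sketch below illustrates this for the first model, again with assumed column names rather than the study’s actual variables.

```python
# Sketch: standardised (beta) path coefficients via z-scoring all variables
# before an OLS fit; column names are assumed for illustration.
import pandas as pd
import statsmodels.api as sm

scores = pd.read_csv("survey_scores.csv")  # hypothetical survey extract
cols = ["service_quality", "sys_quality", "info_quality"]
z = (scores[cols] - scores[cols].mean()) / scores[cols].std()

X = sm.add_constant(z[["sys_quality", "info_quality"]])
betas = sm.OLS(z["service_quality"], X).fit().params
print(betas)  # standardised coefficients, comparable across predictors
```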

Discussion

The research found significant linkages between ‘M&E-System Quality’, ‘M&E-Information Quality’ and ‘M&E-Service Quality’. A well-designed M&E System is key to generating meaningful and robust information for decision-makers. Such a design includes the elaboration of a clear manual defining what to monitor and evaluate, the roles and responsibilities of the different actors, a realistic M&E work plan and budget, and an approach for involving decentralised M&E units; the availability of actionable technical and financial resources; and good M&E System maintenance, including capacity building of the core actors involved in its implementation. Mackay described the key factors of success for an effective M&E System as follows: (1) utilisation of monitoring and evaluation information, (2) good quality of monitoring and evaluation information, and (3) sustainability of the system (Mackay 2007).

An effective M&E System equipped with these three dimensions is well-positioned to serve decision-makers and other programme stakeholders with relevant information and evidence in the preparation, implementation, and capitalisation of development actions. Together, ‘M&E-System Quality’ and ‘M&E-Information Quality’ positively influence ‘M&E-Service Quality’, which includes information availability, information accessibility, system flexibility and responsiveness, and system sustainability and resiliency. An effective M&E System should be well-positioned to provide evidence-based learning when answering the ‘So what?’ question, which is, according to Kusek et al., the main rationale for designing results-based M&E systems (Kusek, Rist & White 2005).

Mackay described four key benefits of an effective M&E System at the country level: (1) to support policy-making, (2) to improve policy analysis work, (3) to foster sector ministries’ management, and (4) to enhance transparency and support accountability (Mackay 2010). ‘Results-Based Management Practice’ of organisations and institutions, as a key capability to ensure that development activities are well prepared, implemented and oriented to the desired outcomes, is influenced directly by ‘M&E-Information Quality’, and indirectly, through ‘M&E-Information Quality’, by ‘M&E-System Quality’.

The ‘M&E-Information Quality’ is necessary to ensure effective ‘Results-Based Management Practice’ within organisations and institutions in charge of development actions. Meier argued that continuous improvement of performance is the central orientation in a management strategy using the RBM approach (Meier 2003).

All the influential sub-dimensions of ‘M&E-Information Quality’ also need ‘M&E-System Quality’ to effectively influence ‘Results-Based Management Practice’. The pathway to effective ‘Results-Based Management Practice’ is primarily to ensure that ‘M&E-System Quality’ operates well and that ‘M&E-Information Quality’ is available. According to Barends et al. (2014), ‘The basic idea of evidence-based practice is that good-quality decisions should be based on a combination of critical thinking and the best available evidence’ (p. 2).

Effective ‘Knowledge and Information Management Culture’, including learning, which is a key capability for organisations and institutions managing development actions, is influenced directly by ‘M&E-System Quality’ and ‘M&E-Service Quality’, and by ‘M&E-Information Quality’ indirectly through ‘M&E-Service Quality’. According to Parker, M&E System plays a dual role for organisations and institutions. Firstly, it contributes to knowledge creation and learning. Secondly, it is a performance management tool that fosters accountability (Parker 2008).

The availability of ‘M&E-Information Quality’ is necessary but not sufficient to ensure effective ‘Knowledge and Information Management Culture’. Effective ‘Knowledge and Information Management Culture’ needs a good ‘M&E-Service Quality’ to deliver quality information with the appropriate contents and formats to the various stakeholders. Hence, M&E System responsiveness, information availability and accessibility, M&E System flexibility and resiliency, as well as sustainability, are key sub-dimensions of ‘M&E-Service Quality’ that are necessary to ensure effective ‘Knowledge and Information Management Culture’, including learning. Choo clarifies that the three reasons why organisations use knowledge are the following: to have a positive impact on their environment, to innovate through knowledge creation, and to make decisions on their actions (Choo 1996).

The way an M&E System is designed, deployed, and maintained, and the way its resources are mobilised and used, have a significant influence on the effectiveness of the ‘Knowledge and Information Management Culture’. Also, ‘M&E-System Quality’ (design, operations, resources, and maintenance) is necessary to ensure success towards an effective ‘Knowledge and Information Management Culture’. North, in ‘Economic performance through time’, established a direct linkage between the expansion of knowledge and the development process (North 1994).

Segone noted that ‘Many governments and organizations are moving from “opinion-based policy” towards “evidence-based policy,” and are in the stage of “evidence-influenced policy”’ (Segone 2010). ‘Evidence-Based Decision-Making Practice’ is influenced directly by ‘M&E-Service Quality’ and indirectly by ‘M&E-Information Quality’ and ‘M&E-System Quality’ through effective ‘M&E-Service Quality’. Decision-makers, including governments, need an effective M&E System that delivers high-quality information to ensure that the decisions they take are evidence-based.

‘M&E-Information Quality’ and ‘M&E-System Quality’ influence ‘Evidence-Based Decision-Making Practice’ indirectly through ‘M&E-Service Quality’. Ultimately, organisations must ensure ‘M&E-Service Quality’, including information availability, accessibility, system responsiveness, flexibility, and sustainability, to build the basis for effective ‘Evidence-Based Decision-Making Practice’. Langer et al. clarified that ‘Evidence-informed development refers to the practice of making decisions in development policy and practice informed by the best available evidence’ (Langer et al. 2015:463).

Organisations in charge of development actions are in a continuous process of designing and planning policies and programmes to adapt their development strategies and goals to dynamic international and local contexts. Through their M&E System, organisations learn from ongoing and past development actions to design new policies and programmes. An effective M&E System contributes to improved policy and programme design. Quesnel recalled in 2010 that the first requirement of any country-led M&E System is the strategic intent or objective of the development programme, with its logic and performance measures (Quesnel 2010).

The pathway to ‘Improved Policy and Program Design’ starts with ‘M&E-System Quality’ and ‘M&E-Information Quality’. The quality of the design, operations, resources, and maintenance of the M&E System influences positively ‘M&E-Information Quality’ (quality inputs, outputs, outcomes, performance, and risks management information), which, in turn, influences effective ‘Results-Based Management Practice’ within the organisations. This ultimately contributes significantly to ‘Improved Policy and Program Design’ for ongoing and future development actions. Cracknell pointed out, in analysing the M&E System of public investments in the United Kingdom, that the most serious difficulty in designing an effective M&E System is the problem in stating the programs’ objectives clearly (Cracknell 1994:227).

A well-designed M&E System, which operates without significant challenges, mobilises its resources efficiently to implement its activities without difficulty, and is well maintained, will collect, analyse and generate high-quality information on the development programme achievements and results. Consequently, effective ‘Results-Based Management Practice’ will be achieved within the organisation to ensure ‘Improved Policy and Program Design’.

Effective ‘Knowledge and Information Management Culture’, which builds on ‘M&E-System Quality’ and ‘M&E-Service Quality’, reinforces the ‘Results-Based Management Practice’ of the organisations and contributes to ‘Improved Policy and Program Design’. Meier argued that improved performance is the central orientation of RBM as a management strategy aimed at changing the way organisations operate (Meier 2003).

Development organisations, when they implement development actions, will take operational decisions that focus mainly on the management of inputs and outputs delivery. Operational decisions touch the operations of projects and programmes that guide the implementation towards the achievement of the set development goals. Hence, operational decisions are critical to safeguarding development programmes’ success. Effective M&E System is a useful tool for governments and their partners. Mackay (2010) highlighted that it:

[C]an measure the performance of all government policies, programs, and projects. It can identify what works, what does not and the reason why … Additionally, it provides information on the performance of donors who support the work of the government. (p. 170)

The pathway to ‘Improved Operational Decisions’ builds on effective ‘Evidence-Based Decision-Making Practice’. ‘M&E-Information Quality’ and ‘M&E-System Quality’ positively influence ‘M&E-Service Quality’, which in turn influences ‘Evidence-Based Decision-Making Practice’.

‘M&E-System Quality’ and ‘M&E-Service Quality’ together facilitate an effective ‘Knowledge and Information Management Culture’ through improved learning processes, which contribute greatly to effective ‘Evidence-Based Decision-Making Practice’. All these dimensions of an effective M&E System should perform together effectively to ensure ‘Improved Operational Decisions’. As Choo (1996) put it:

Although organizational decision making is a complex, messy process, there is no doubt that it is a vital part of organizational life: all organizational actions are initiated by decisions, and all decisions are commitments to action. (p. 330)

While ‘Improved Operational Decisions’ require only effective ‘Evidence-Based Decision-Making’, ‘Improved Tactical and Strategic Decisions’ necessitate both effective ‘Evidence-Based Decision-Making Practice’ and effective ‘Results-Based Management Practice’. Tactical decisions are decisions that affect the programmes’ intermediary results, while strategic decisions affect the higher levels of programmes’ specific objectives and goals. Hence, ‘Evidence-Based Decision-Making’ is necessary but not sufficient to generate ‘Improved Tactical and Strategic Decisions’; effective ‘Results-Based Management Practice’ is also needed. An effective ‘Knowledge and Information Management Culture’ contributes to ‘Results-Based Management Practice’ and ‘Evidence-Based Decision-Making’, which together generate ‘Improved Tactical and Strategic Decisions’.

John Young recalled that policy-making is a dynamic and complex process, especially in poor countries, and stressed that improved utilisation of research and evidence could save lives, reduce poverty and improve the quality of life (Young 2005). ‘Improved Capability to Advance Development’ for organisations and institutions in charge of development actions is the highest objective of an effective M&E System.

Effective M&E System has a long-term positive impact on the capability of organisations and institutions to advance development objectives. Two dynamic capabilities are necessary to ensure the ‘Improved Capability to Advance Development’. They are the ‘Results-Based Management Practice’ and the ‘Knowledge and Information Management Culture’. ‘Improved Capability to Advance Development’ is the only effective M&E System ‘Net Benefit’ influenced directly by ‘Knowledge and Information Management Culture’. Knowledge management and learning are key success factors for organisations to move development actions to success.

In 1893, Charles Booth, a London statistician, wrote: ‘To effectively deal with poverty there was need to gather quantitative information on characteristic of poverty’ (Booth 1893:37). This research found that, to effectively deal with poverty or achieve any development goal, there is a need to set up an effective M&E System, with quality information, system, and service, within organisations and institutions that effectively practise RBM and Knowledge and Information Management. An effective M&E System contributes to expanding the successful practice of the key development management approaches, namely RBM, Knowledge and Information Sharing, and Evidence-Based Decision-Making, to advance development objectives and achieve improved welfare and freedom of people.

Conclusion

The M&E System framework established in this research is a contribution to development management. It shows the dimensions of an effective M&E System as they relate to the information systems described by scholars, and how they are linked to the dynamic capabilities of organisations in charge of development actions. It also shows how an effective M&E System contributes to the organisations’ improved policy-making, decision-making, and capability to advance sustainable development goals.

The main outcome of this research is its contribution to the construction of the knowledge base on M&E System effectiveness measurement and analysis. The research has shown that various aspects of the M&E System need in-depth research and scholars’ contributions to better clarify the roles of the M&E System and the impacts of its dimensions on development management and on people’s livelihoods through improved policy and programme success. As shown in this research, M&E Systems, learning, and Knowledge and Information Management as development management strategies would be among the most exciting upcoming research agendas. Many areas need to be reinforced and improved to advance sustainable development goals.

At the managerial level, the research contributes to the development management practice, as it clarifies critical requirements in building and implementing an effective M&E System. It contributes effectively to the capacity building of development practitioners on the M&E System effectiveness framework and its implications. It proposes opportunities to move from current data quality assessment approaches to a broader M&E System Effectiveness Evaluation (M&E/SEE) and provides the needed measurements and benchmarks to carry out effective measurements of an M&E System. Finally, it shapes the way to improved results-based budgeting of M&E activities and better assessment of M&E System efficiency and management.

Acknowledgements

I would like to thank and acknowledge my thesis Director, Prof. Paul Beaulieu, from the Department of Strategy, Social and Environmental Responsibility, ESG-UQAM, Canada; Michel Kalika, President of the Scientific Council and Scientific Advisor of the Business Science Institute; and Philip LeBel, Emeritus Professor of Economics, Montclair State University, USA.

Competing interests

The author declares that he has no financial or personal relationships that may have inappropriately influenced him in writing this article.

Author’s contributions

A.B. is the sole author of this article.

Funding information

The author received no financial support for the research, authorship, and/or publication of this article.

Data availability

The data that support the findings from this study are openly available online at: https://drive.google.com/drive/folders/1RpMBFsVSsJwksClFgNavcpqiqQwJ8McJ?usp=sharing.

Disclaimer

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any affiliated agency of the author.

References

Acquaah, M., Zoogah, D.B. & Kwesiga, E.N., 2013, ‘Advancing Africa through management knowledge and practice: The way forward’, African Journal of Economic and Management Studies 4(2), 164–176. https://doi.org/10.1108/AJEMS-04-2013-0036

Anantatmula, V. & Kanungo, S., 2006, ‘Structuring the underlying relations among the knowledge management outcomes’, Journal of Knowledge Management 10(4), 25–42. https://doi.org/10.1108/13673270610679345

Andrews, C.J., 2010, ‘Rationality in policy decision making’, in Handbook of public policy analysis: Theory, politics and methods, 1st edn., pp. 161–171, Routledge, London.

Barends, E., Rousseau, D.M. & Briner, R.B., 2014, Evidence-based management – The basic principles, Center for Evidence-Based Management, Leiden.

Barton, T., 1997, ‘Guidelines to monitoring and evaluation: How are we doing?’, CARE-Uganda, 1–153.

Bertram, D., 2006, Likert Scales: CPSC 681—Topic Report, Poincare, 1–11, viewed n.d., from http://poincare.matf.bg.ac.rs/~kristina/topic-dane-likert.pdf.

Booth, C., 1893, ‘Life and Labour of the People in London: First Results of An Inquiry Based on the 1891 Census’, Journal of the Royal Statistical Society 56(4), 557–593. https://doi.org/10.2307/2979431

Briner, R.B., Denyer, D. & Rousseau, D.M., 2009, ‘Evidence-Based Management: Concept clean-up time?’, Academy of Management Perspectives 23(4), 19–32. https://doi.org/10.5465/amp.23.4.19

Choo, C.W., 1996, ‘The knowing organization: How organizations use information to construct meaning, create knowledge and make decisions’, International Journal of Information Management 16(5), 329–340. https://doi.org/10.1016/0268-4012(96)00020-5

Cracknell, B.E., 1994, ‘Monitoring and evaluation of public-sector investment in the UK’, Project Appraisal 9(4), 222–230. https://doi.org/10.1080/02688867.1994.9726955

Dabelstein, N. & Patton, M.Q., 2005, ‘The Paris declaration on aid effectiveness: History and significance’, The Canadian Journal of Program Evaluation 27(3), 19–36.

DeLone, W.H. & McLean, E.R., 2003, ‘The DeLone and McLean Model of information systems success: A ten-year update’, Journal of Management Information Systems 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748

Edmondson, A.C. & McManus, S.E., 2007, ‘Methodological fit in management field research’, Academy of Management Review 32(4), 1155–1179. https://doi.org/10.5465/amr.2007.26586086

Faguet, J.-P., 2011, ‘Development management’, in The London School of Economics and Political Science, DV3165, 27th edn., pp. 1–54, London School of Economics and Political Science (LSE), London.

Kusek, J.Z., Rist, R.C. & White, E.M., 2005, ‘How will we know the millennium development goal results when we see them? Building a result-based monitoring and evaluation system’, Evaluation 11(1), 7–26. https://doi.org/10.1177/1356389005053181

Langer, L., Stewart, R., Erasmus, Y. & De Wet, T., 2015, ‘Walking the last mile on the long road to evidence-informed development: Building capacity to use research evidence’, Journal of Development Effectiveness 7(4), 1–9. https://doi.org/10.1080/19439342.2015.1095783

LeBel, P., 2011, ‘Economic analysis of development projects: The economic environment of development projects’, Center for Economic Research on Africa – CERAF 1(1), 39, viewed 10 October 2021, from http://msuweb.montclair.edu/~lebelp/.

Mackay, K., 2007, How to build M&E systems to support better government, Independent Evaluation Group (IEG), The World Bank, Washington, DC.

Mackay, K., 2010, ‘Building monitoring and evaluation systems to improve government performance’, in Country-led monitoring and evaluation systems: Better evidence, better policies, better development results, pp. 169–187, UNICEF.

Maddock, N., 1993, ‘Has project monitoring and evaluation worked?’, Project Appraisal 8(3), 188–192. https://doi.org/10.1080/02688867.1993.9726906

Mantzavinos, C., North, D.C. & Shariq, S., 2004, ‘Learning, institutions, and economic performance’, Perspectives on Politics 2(1), 75–84. https://doi.org/10.1017/S1537592704000635

Meier, W., 2003, ‘Results-based management – Towards a common understanding among development cooperation agencies’, CIDA Discussion Paper, 5.0, 0–25, Canadian International Development Agency, Performance Review Branch.

Nelson, C.A., 2014, ‘Towards improved development effectiveness: Engaging the power of foresight thinking in development investment processes’, Development 56(4), 475–483. https://doi.org/10.1057/dev.2014.55

Nicolini, D., 2011, ‘Practice as the site of knowing: Insights from the field of telemedicine’, Organization Science 22(3), 602–620. https://doi.org/10.1287/orsc.1100.0556

Nonaka, I., Toyama, R. & Konno, N., 2000, ‘SECI, Ba and leadership: A unified model of dynamic knowledge creation’, Long Range Planning 33, 5–34, viewed 15 April 2021, from https://www.ai.wu.ac.at/~kaiser/literatur/nonaka-seci-ba-leadership.pdf.

North, D.C., 1991, ‘Institutions’, Journal of Economic Perspectives 5(1), 97–112. https://doi.org/10.1257/jep.5.1.97

North, D.C., 1994, ‘Economic performance through time’, The American Economic Review 84(3), 1–13.

OECD, 2006, ‘Emerging good practices in managing for development results’, OECD – Sourcebook on Emerging Good Practices 1(1), 186, viewed 12 July 2021, from http://www.oecd.org/dac/effectiveness/36853468.pdf.

Parker, D., 2008, ‘Monitoring and evaluation, and the knowledge function’, in UNICEF (ed.), Bridging the gap: The role of monitoring and evaluation in evidence-based policy making, pp. 73–87, UNICEF.

Quesnel, J.S., 2010, ‘The strategic intent: Understanding strategic intent is the key to successful country-led monitoring and evaluation systems’, in Country-led monitoring and evaluation systems: Better evidence, better policies, better development results, pp. 56–76, UNICEF.

Savall, H. & Zardet, V., 2011, The qualimetrics approach: Observing the complex object, Information Age Publishing, Charlotte, NC.

Schilling, M.S., Oeser, N. & Schaub, C., 2007, ‘How effective are decision analyses? Assessing decision process and group alignment effects’, Decision Analysis 4(4), 227–242. https://doi.org/10.1287/deca.1070.0101

Segone, M., 2010, ‘Enhancing evidence-based policy-making through country-led monitoring and evaluation systems’, in Country-led monitoring and evaluation systems: Better evidence, better policies, better development results, pp. 17–31, UNICEF.

Yin, R.K., 2014, Case study research: Design and methods, 5th edn., Sage, Thousand Oaks, CA.

Young, J., 2005, ‘Research, policy and practice: Why developing countries are different’, Journal of International Development 17(6), 727–734. https://doi.org/10.1002/jid.1235

