About the Author(s)


Enock Warinda
Association for Strengthening Agricultural Research in Eastern and Central Africa, Entebbe, Uganda

Citation


Warinda, E., 2019, ‘Evaluating operationalisation of integrated monitoring and evaluation system in Kisumu County: Implications for policy makers’, African Evaluation Journal 7(1), a385. https://doi.org/10.4102/aej.v7i1.385

Original Research

Evaluating operationalisation of integrated monitoring and evaluation system in Kisumu County: Implications for policy makers

Enock Warinda

Received: 28 Feb. 2019; Accepted: 01 May 2019; Published: 20 June 2019

Copyright: © 2019. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Evaluation findings are increasingly becoming valuable for policy makers in Kenya. The Directorate of Monitoring and Evaluation is responsible for providing reliable data and findings for decision-makers, who are in turn expected to access the data and information through the National Integrated Monitoring and Evaluation System (NIMES). Unfortunately, the directorate hardly receives timely data as required and is thus unable to support timely decision-making within the ministry of agriculture, livestock and irrigation in Kisumu County.

Objectives: The aim of this study was to assess the extent of operationalisation of NIMES through utilisation of the electronic project management information system (e-ProMIS) within the three agricultural departments.

Methods: A mixed-methods approach was applied: through single-point face-to-face interviews using semi-structured questionnaires and a Likert scale, the level of operationalisation of, staff competences in, and satisfaction with NIMES were assessed. Both random and purposive sampling were used. Primary and secondary data were collected on 10 key indicators and fitted in a binary logistic regression model to assess the level of operationalisation of NIMES.

Results: This article shows that operationalisation of NIMES is unsatisfactory and that the data collected are incorrectly formatted. None of the departmental personnel charged with uploading relevant data onto e-ProMIS had accessed or utilised the platform. There were no champions supporting NIMES, and consequently no reports were generated from the system.

Conclusions: Factors hindering operationalisation of NIMES were: dysfunctional monitoring and evaluation (M&E) systems, limited human capacity on M&E, lack of NIMES champions, limited availability of data, unclear information flow to decision makers and inadequate integration of NIMES in planning and budgeting.

Keywords: NIMES; operationalisation; e-ProMIS; evaluation; agriculture; policy; capacity.

Introduction

Globally, monitoring and evaluation (M&E) has increasingly come to the fore over the past three decades, being regarded as key to providing evidence of programme and organisational performance (Christie 2007; Johnson et al. 2009; Mackay 2006; Patton 2001; Picciotto 2003). Empirical research shows that the operationalisation of M&E systems has been significantly influenced by donor demands (Behrens & Kelly 2008; Carman & Fredericks 2008; Hendricks, Plantz & Pritchard 2008; Porter & Goldman 2013), strong internal pressures (Kusek & Rist 2004), the need to show value for money (Hauge 2001), the need to enhance institutional capacity (Bornstein 2003, 2006; Mackay 2006; May et al. 2006; Mosse & Lewis 2005; Newcomer 2004; UNDP 2013) and an increased need for organisational learning (Chen 2005; Samset, Forss & Hauglin 1992).

Colombia, for example, developed its National Results-Based Management and Evaluation System (SINERGIA) to aid in enhancing the country’s reform towards performance-based management, particularly at the central administration through promoting joint planning and budgeting using system-generated data. The system achieved a high level of development and customisation and is held up as an example of best practices by multilateral organisations, donor agencies and other governments (Manuel 2009).

In the United Kingdom, the government’s performance targets, contained in the public sector agreements between the Treasury and each of the 18 main departments, ensured that the M&E system contained each state department’s overall goal, priority objectives and key performance targets, which are also reported on annually (Mackay 2007). In Germany, an M&E system is used by the central government to monitor all the activities within the departments to fight corruption (David 2003). In Australia, the government’s ‘whole-of-government evaluation system’ is managed by the Department of Finance, and all ministries evaluate each of their programmes every 3–5 years (Buse & Vigneri 2008). In India, Japan, South Africa, Nigeria and Kenya, the adoption of M&E systems is gradually taking root (Kremer 2003; Sadoshima 2010; World Bank 2004), with most African governments not showing commitment to ensuring the operationalisation of M&E systems (Fleischer & Christie 2009).

Despite the pivotal role of M&E in providing credible information for continental development and decision making (Bhattacherjee 2011; Carlsson et al. 1999; Coryn et al. 2011; Dearden & Kowalski 2003), most government institutions in Africa lag not only in designing user-friendly M&E systems, but also in operationalising them to generate timely M&E data for decision-making in all sectors (Birckmayer & Weiss 2000; Diabre 2002; Kusek & Rist 2004; Mackay 2006; Rebien 1996; Vestman & Conner 2006). However, studies in the aforementioned states show that countries with operationalised M&E systems enjoy timely and reliable feedback that informs budgeting and ensures synchrony of government programmes (Mackay 2007; Owen 2007; World Bank 2001a).

As in many African countries, the introduction of the National Integrated Monitoring and Evaluation System (NIMES) in Kenya in 2004, coupled with the policy on e-government, created room for M&E to become an integral part of the policy formulation and implementation process at the national level (GOK 2007). Kenya expected to use NIMES to inform national development planning and policy dialogue within government and with the private sector, civil society organisations and development partners (GOK 2016; MDP 2014; MPND 2007). This has not been realised (Andersson et al. 2014). Similarly, through the coordination of the Directorate of M&E (MED), the government expected that all ministries, public sectors and sub-sectors would operationalise NIMES at the national level and the County Integrated Monitoring and Evaluation System (CIMES) at the county level by collecting accurate and up-to-date data. This, too, has not been undertaken (Andersson et al. 2014).

The study is undertaken because the Kisumu County government, its Ministry of Agriculture and Rural Development as well as the three departments of agriculture, fisheries and livestock face challenges in operationalising NIMES. These challenges range from limited capacity of the departmental staff to undertake M&E functions, including regular data collection, analysis and consolidation of periodic reports and other relevant information, to the utilisation of M&E findings to inform departmental decision-making and budgeting. The factors determining the operationalisation of NIMES in the targeted departments are undocumented, while the extent to which available data have been uploaded onto the electronic project management information system (e-ProMIS) platform, and eventually fed into NIMES are not articulated. It is further unclear how these three departments have complied with the NIMES standard protocols and procedures in reporting on key indicators of performance within the three components.

Based on these gaps, the main purpose of the study is to analyse the extent to which NIMES has been operationalised in the three departments. It is anticipated that unless NIMES is operationalised using the required e-ProMIS platform, the MED will not be able to fulfil its mandate of adequately reporting on progress in the implementation of public policies, programmes and projects, including the county’s contribution to the County Integrated Development Plan (CIDP 2013), the Medium-Term Plan (MTP), the Kenya Vision 2030, the Comprehensive African Agricultural Development Programme (CAADP) and the sustainable development goals (SDGs). The findings in this article focus on three specific objectives, namely: (1) the extent to which the agricultural departments have been capacitated to operationalise NIMES through utilisation of e-ProMIS platform, (2) the status of the performance of agricultural departments towards achieving their M&E objectives based on parameters for NIMES and (3) the key drivers contributing to the operationalisation of NIMES in agricultural departments in Kisumu County.

This article shows the M&E capacity gaps that exist at the individual and departmental levels in the efforts towards operationalisation of NIMES through utilisation of the MED-recommended e-ProMIS platform. It provides the heads of these departments and their staff with relevant information and approaches on how to fast-track their programme and project performance in the implementation of the CIDP, the MTP, Vision 2030, the CAADP and the SDGs. It generates vital lessons for the departmental staff to address existing gaps and initiate operationalisation of NIMES, not only in the county, but also in other counties and government departments. It also identifies effective ways of developing M&E capacity within and across the departments, besides indicating how to integrate NIMES in annual departmental planning and budgeting.

Guiding principles of National Integrated Monitoring and Evaluation System

An effective NIMES is focused on enhancing the policy environment for M&E usage, strengthening M&E skills and developing the physical infrastructure necessary to support the demands for M&E data and information management. Kisumu County’s CIMES is designed to ensure regular reporting on the implementation progress of the county’s priority policies, projects and programmes outlined in key policy documents such as the MTP, the CIDP, the devolved funds programmes, the National Accountability Management Framework, the Performance Contracts and the Performance Appraisal System. The national NIMES, which incorporates the CIMES, is designed to report on the government’s commitments to other international frameworks such as the SDGs, the New Partnership for Africa’s Development (NEPAD) and the African Peer Review Mechanism (APRM). Both NIMES and CIMES aim at strengthening governance by improving transparency, strengthening accountability relationships and building a performance culture within the national and county governments to support better policy making, budget decision-making and management.

An effective NIMES incorporates evaluation, hereby defined as the systematic process of analysing information obtained during regular inspection of project and programme activities undertaken by the departments. The M&E officers should upload the evaluation findings and data onto the e-ProMIS platform for use by the departmental personnel and institutional administration. An effective NIMES should also have a functional CIMES, hereby referred to as an integrated observation system for data management at the county level. Kenya has a total of 47 counties, each headed by a county governor and managed by a county executive committee and senior management staff, including the Head of M&E. Through CIMES, the county governments should verify whether the activities of each county’s priority projects and programmes are proceeding according to schedule, and whether resources are correctly spent. A major reference document to help the county government, especially the Ministry of Agriculture and its departments of agriculture, fisheries and livestock, to assess its performance is the 10-year CIDP (2013).

Both NIMES and CIMES are supported by the e-ProMIS. This is a Web-based data collection, tracking, analysis and planning tool that ensures coordination of development efforts, data dissemination, alignment and harmonisation of projects with the county strategy. Thus, effective operationalisation of NIMES refers to the systematic integration of NIMES into the decision-making process and creation of a permanent demand for its use within the three departments. The departments are regarded to have operationalised NIMES when the M&E personnel and departmental heads ensure that they regularly follow the guidelines provided in NIMES standard protocols and procedures, and that their decision-making is guided by the information available in NIMES/CIMES.

The Kenya Vision 2030 sets out the country’s long-term national objective of achieving middle-income status by 2030. It is implemented through a series of 5-year medium-term plans that translate the long-term objective into medium-term priorities, objectives and programmes. The Draft M&E Policy of 2012, which articulates the government’s commitment to managing for development results at all levels, supports Vision 2030. This policy provides a clear framework for strengthening the coverage, quality and utility of the assessment of public policies, programmes and projects by proposing that finances for M&E be clearly allocated within the national budget. It focuses on providing credible and greater evidence to the executive government, the legislature and other actors to make informed policy and programmatic decisions, and to hold the public sector accountable for the utilisation of allocated resources. The policy further sets the basis for a transparent process for the citizenry and development stakeholders to mutually appraise results by outlining the principles for a strong M&E system to track achievement of Vision 2030. All public policies, strategies, programmes and projects managed by ministries, departments and agencies, county governments, parastatals and executing agencies of public programmes are obligated to apply this policy.

Components of operational National Integrated Monitoring and Evaluation System

Generally, an institution or department is regarded to have an operational integrated M&E system when the following elements are in place (Diabre 2002; Khan 2003; Kusek & Rist 2004; Patton 1997, 1998, 2001; Pawson & Tilley 1997; Rebien 1996; Rist 2000):

  1. effective leadership and a champion to coordinate and spearhead alignment of M&E tools and frameworks (GAO 2003; Kusek & Rist 2004; World Bank 2001b);

  2. sufficient readiness and receptiveness for M&E (Kopczynski & Pritchard 2004; Mackay 2007; Madaus, Stufflebeam & Kellaghan 2000);

  3. a critical mass of skilled, full-time personnel to undertake monitoring, process evaluation and information and communication technology (ICT) activities (mainly M&E officers and systems analysts);

  4. a network between departmental and national databases;

  5. appropriate data storage and retrieval infrastructure;

  6. a commensurate budget for M&E activities (at least 2.5% of total sector or departmental budgets);

  7. an articulate logical or results framework;

  8. clear job descriptions for M&E and ICT staff, including who is responsible for accessing and managing the integrated system platform (e.g. e-ProMIS), entering required data sets and troubleshooting the system whenever required;

  9. routine planning, budgeting and decision-making based on M&E findings;

  10. a carefully selected set of performance indicators;

  11. learning through regular project and programme portfolio reviews;

  12. generation of reports based on system-generated results;

  13. periodic review of programme and project performance based on system-generated data sets; and

  14. compliance with the required frequency of data uploads onto the electronic system.

In this article, these elements are clustered into four components across individual, institutional and systemic levels. These components comprise: (1) M&E planning and reporting, (2) M&E structures and human resources, (3) M&E processes and procedures and (4) data and information management. Figure 1 summarises the vital components of any fully operationalised NIMES.

FIGURE 1: Components of a fully operationalised National Integrated Monitoring and Evaluation System.

Conceptual framework

In this article, the operationalisation of NIMES (here taken as the dependent variable) is assumed to be heavily influenced by three independent variables, namely: the extent to which each of the three departments has been capacitated to operationalise NIMES; the level to which each of the departments access and manage data based on the requirements by the Directorate of M&E; and the degree to which NIMES receives support from senior departmental officials and staff. The framework further posits that an operationalised NIMES within the county and its constituent departments is likely to contribute to institutional growth, development and performance against the strategic objectives and targets. On the other hand, two intermediate or extraneous variables are assumed to influence the independent and dependent variables. For example, adherence to M&E policy guidelines (①) is anticipated to influence operationalisation of NIMES, such that the departments that access and implement the policy guidelines as indicated by the MED are expected to be better positioned to operationalise NIMES than those that do not. Similarly, the presence of an enabling institutional environment (②) is assumed to influence operationalisation of NIMES. Figure 2 shows the anticipated relationships between the independent, the intermediate and the dependent variables.

FIGURE 2: Conceptual framework for operationalisation of National Integrated Monitoring and Evaluation System (NIMES).

As indicated above, the level of operationalisation of NIMES, together with the utilisation of the e-ProMIS platform, by each of the departments is fitted as the dependent variable, where a binary response dummy variable of 1 is assigned if NIMES is operationalised by the department, and 0 otherwise. The treatment of operationalising NIMES (❶) in these departments draws on studies by Boudreau (1996), UNDP (2013) and the World Bank (2004), which posit that the availability of sustainable capacity-strengthening initiatives within institutions has great potential to support the operationalisation of any system, including M&E systems.

In this article, the dependent variable (❶) is assumed to heavily depend on three independent variables (see Figure 2). To enhance data capture and analysis, these three independent variables are further clustered into 10 indicators (a hypothetical coding sketch follows the list), namely:

  1. ‘Capacitated departmental staff on e-ProMIS’, such that the larger the number of trained personnel within these departments, the higher the likelihood of operationalising NIMES by ensuring that it is maintained and utilised in generating valuable information for timely decision-making by policy makers, and vice versa.

  2. ‘Frequency of data uploads onto the e-ProMIS platform’ in that the departments with more frequent data and information upload onto the platform are more organised in their routine M&E activities.

  3. ‘Total budget allocated for M&E functions’ in that the departments that allocate significant amounts of budgets (at least 2.5% of the total annual budgets) to support M&E activities are more likely to operationalise NIMES than those with lower budgets.

  4. ‘Capacitated staff capable of uploading data onto e-ProMIS’ (and thus updating NIMES), such that the extent of capacity strengthening of each department is positively correlated with the number of departmental staff or officers able to upload data onto e-ProMIS according to protocols and procedures, especially those that have benefited from training on management of the e-ProMIS platform.

  5. ‘Access to relevant M&E tools and approaches’ in that the departments with quick access to relevant M&E tools, protocols, instruments, approaches and methodologies are more likely to adopt NIMES and operationalise its standards and procedures than those without.

  6. ‘Staff engaged in M&E activities, such as periodic progress reviews, data capture, and work planning and budgeting’ such that enhanced participation of departmental staff in M&E activities according to NIMES standards and procedures is likely to positively influence transparency, accountability and NIMES sustainability within these departments.

  7. ‘Personnel satisfaction with NIMES’ such that the more the departmental staff are satisfied with the benefits derived from the functional and user-friendly NIMES, the higher the probability that they would ensure its sustainable implementation.

  8. ‘Availability of NIMES champions’ and enhanced ‘departmental priority in NIMES maintenance’ in that the departments that prioritise the maintenance of NIMES and ensure that they have champions for the operationalisation of NIMES are better able to institute the system.

  9. ‘Senior staff buy-in of NIMES’ such that those departments with ownership and buy-in by the senior staff members, including heads of department, deputy directors and directors are better positioned to operationalise NIMES than those without such ownership and buy-in by management.

  10. ‘Obligatory use of NIMES in reporting and planning’, in that departments that make it obligatory to produce biannual and annual reports using only data generated by the e-ProMIS platform and NIMES are able to operationalise NIMES more quickly than those that allow reports to be generated through approaches other than NIMES.
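To make these definitions concrete, the sketch below shows how a single department’s indicator set could be coded for analysis. It is a minimal illustration only: the field values loosely echo figures reported later for the agriculture department, but none of them is taken from the study’s data set.

    # Hypothetical coding of the 10 indicators for one department.
    # Dummy variables take 1/0 as defined in the text; all values are invented.
    from dataclasses import dataclass, asdict

    @dataclass
    class IndicatorRecord:
        ST: int    # departmental staff capacitated on e-ProMIS
        FU: int    # 1 = data uploaded regularly, 0 = irregularly
        BG: float  # total budget allocated to M&E activities (US$)
        OT: int    # personnel able to upload data onto e-ProMIS
        ACC: int   # 1 = access to relevant M&E tools and approaches, 0 = none
        NS: int    # staff engaged in M&E activities
        SAT: int   # 1 = personnel satisfied with NIMES, 0 = not
        CHA: int   # 1 = NIMES champion available, 0 = none
        APP: int   # 1 = senior staff buy-in of NIMES, 0 = none
        UGR: int   # 1 = reports generated via NIMES, 0 = otherwise

    agriculture = IndicatorRecord(ST=6, FU=0, BG=67_000, OT=4, ACC=1,
                                  NS=11, SAT=1, CHA=1, APP=1, UGR=0)
    print(asdict(agriculture))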

Method

Sampling technique

In this article, the case study design was adopted to answer questions like ‘how?’ or ‘why?’ with regard to operationalisation of NIMES in the county. Out of the 286 staff within the three departments, 43 staff (26 men; 17 women) are full-time personnel who are responsible for M&E, knowledge management and ICT. To ensure data sufficiency and minimise data saturation, the countywide sampling frame and unit of analysis comprised 31 (19 men; 12 women) out of the 43 responsible personnel (Table 1). The study adopted both random and purposive sampling techniques in collecting primary data from the 31 departmental staff. Random sampling was applied where there were more than two personnel per department.

TABLE 1: Sample size distribution.
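The sampling rule described above (purposive inclusion where a department has at most two eligible officers, random selection otherwise) can be illustrated with a short sketch; the staff lists and per-department quotas here are invented, not the study’s sampling frame.

    # Illustrative mixed sampling: take all staff where a department has <= 2
    # eligible officers, draw a simple random sample otherwise.
    import random

    random.seed(1)
    eligible = {
        "agriculture": [f"officer_A{i}" for i in range(1, 19)],
        "livestock": [f"officer_L{i}" for i in range(1, 15)],
        "fisheries": ["officer_F1", "officer_F2"],
    }
    quota = {"agriculture": 14, "livestock": 11, "fisheries": 6}

    sample = {
        dept: staff if len(staff) <= 2 else random.sample(staff, quota[dept])
        for dept, staff in eligible.items()
    }
    print({dept: len(chosen) for dept, chosen in sample.items()})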
Data collection method

Primary and secondary data were collected using a mixed-methods approach. Primary data were collected from the 31 departmental staff using a semi-structured questionnaire. A pre-tested semi-structured Likert-type questionnaire that assessed the extent of respondents’ satisfaction with NIMES was also administered. The five-point Likert scale was adopted as it yields a distribution resembling a normal distribution curve (Likert 1932). Secondary data were collected through desk reviews of relevant documents from MED and the departmental library. The data sets were collected on the 10 indicators.

Data analysis procedure

Statistical data were analysed using the Statistical Package for the Social Sciences (SPSS), and a binary logistic regression model was fitted to estimate the probability that the selected independent variables influence the operationalisation of NIMES in each department. The probability of each department utilising NIMES standards and procedures was estimated using the following model:

\ln ON_i = \beta_0 + \sum_j \beta_j \ln X_{ij} + \varepsilon_i \quad [Eqn 1]

where ln ONi = the natural log of the dependent variable, here taken as the likelihood of each department operationalising NIMES; Xi = the vector of explanatory variables; β0 and βj = the parameters to be estimated, whose magnitudes show the direction and impact of change; and εi = the random error term. The double-log regression is preferred because all the variables are expressed in natural logs, thus easing interpretation.

Equation 1 was then transformed to estimate the expected likelihood of each department operationalising NIMES based on selected factors and covariates, here taken as the 10 indicators of success (Eqn 2):

\ln ON = \beta_0 + \beta_1 ST + \beta_2 FU + \beta_3 BG + \beta_4 OT + \beta_5 ACC + \beta_6 NS + \beta_7 SAT + \beta_8 CHA + \beta_9 APP + \beta_{10} UGR + \varepsilon_i \quad [Eqn 2]

where ON = the likelihood of each department operationalising NIMES; ST = number of departmental staff capacitated on e-ProMIS; FU = frequency of data uploads (1 = regularly; 0 = irregularly); BG = total budget allocated to M&E activities (US$); OT = total number of personnel able to upload data onto e-ProMIS; ACC = whether personnel have access to relevant M&E tools and approaches (1 = Yes; 0 = No); NS = total number of staff engaged in M&E activities; SAT = personnel satisfaction with NIMES (1 = Yes; 0 = No); CHA = availability of a NIMES champion (1 = Yes; 0 = No); APP = senior staff buy-in of NIMES (1 = Yes; 0 = No); UGR = whether the department updates and uses NIMES to generate reports (1 = Yes; 0 = No); and εi = the error term.
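The article reports fitting this model in SPSS. Purely as an illustration of the same specification, the sketch below fits Eqn 2 in Python with statsmodels on synthetic data; every value and effect size is invented, and the sample is inflated beyond the study’s 31 respondents only to keep the toy fit stable.

    # Illustrative re-expression of Eqn 2; synthetic data, not the study's.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "ST": rng.integers(0, 7, n),     # staff capacitated on e-ProMIS
        "FU": rng.integers(0, 2, n),     # regular (1) vs irregular (0) uploads
        "BG": rng.uniform(1e4, 7e4, n),  # M&E budget (US$)
        "OT": rng.integers(0, 5, n),     # staff able to upload data
        "ACC": rng.integers(0, 2, n),    # access to M&E tools
        "NS": rng.integers(1, 12, n),    # staff engaged in M&E activities
        "SAT": rng.integers(0, 2, n),    # satisfied with NIMES
        "CHA": rng.integers(0, 2, n),    # NIMES champion available
        "APP": rng.integers(0, 2, n),    # senior staff buy-in
        "UGR": rng.integers(0, 2, n),    # reports generated via NIMES
    })
    # Synthetic outcome with invented effects, through the logistic link.
    linpred = -4 + 0.3 * df["ST"] + 1.0 * df["APP"] + 4e-5 * df["BG"] + 0.2 * df["NS"]
    df["ON"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

    X = sm.add_constant(df.drop(columns="ON"))
    result = sm.Logit(df["ON"], X).fit(disp=0)
    print(result.summary())
    print(np.exp(result.params))  # odds ratios per indicator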

Ethical consideration

This article followed all ethical standards for research without direct contact with human or animal subjects.

Results and discussion

Departmental capacity to operationalise National Integrated Monitoring and Evaluation System

Under this independent variable, data sets are generated on five key indicators, namely: (1) number of full-time staff engaged in M&E activities, (2) number of personnel trained on e-ProMIS, (3) total amount of budget allocated to M&E activities, (4) frequency of data uploads on e-ProMIS and (5) number of staff skilled to upload data on e-ProMIS.

The effectiveness of M&E systems and the operationalisation of NIMES depend on the proportion of staff who have accessed M&E training and the proportion of the annual budget dedicated to M&E activities. Results show that the three departments have different staff categories – technical, support and administration. Out of the 286 staff, 43 (26 men; 17 women) are full-time personnel responsible for M&E, knowledge management and ICT (Figure 3). A total of 23 staff (14 men; 9 women) had received training on general M&E (Agriculture = 11; Livestock = 7; Fisheries = 5), while 12 personnel (7 men; 5 women) had gone through training on e-ProMIS (Agriculture = 6; Livestock = 4; Fisheries = 2). Although the tasks of these M&E personnel are clearly defined – developing M&E plans, designing and implementing tools and frameworks, evaluating performance against standard indicators, assessing and maintaining data validity, reliability, integrity and precision, and leading other personnel in uploading data onto e-ProMIS – the majority of these responsibilities are not undertaken. For example, none of the departmental personnel charged with uploading relevant data onto e-ProMIS had accessed or utilised the platform. This poor performance of the departments in utilising the skills and resources availed to them raises fundamental questions regarding the nature of the reports relayed to the MED, which the directorate needs in order to prepare relevant documents for submission to the budget committees and to parliament for approval. It further raises the question of how the county has been preparing its budgetary estimates, which should be guided by validated performance and progress reports on previous allocations and disbursements.

FIGURE 3: Distribution of sampled respondents.

Regarding budgetary allocation for M&E activities, the results show that each of the departments has dedicated an average of 3.6% of the annual budget of US$ 1 865 249 to M&E functions. However, very little progress has been made in ensuring that appropriate M&E tools and protocols are developed and used in data collection. There is also no evidence of utilisation of NIMES through the e-ProMIS platform, because no consistently archived data or time-series data are available.
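In absolute terms, and reading US$ 1 865 249 as the annual budget in question, the reported share works out to roughly 0.036 × 1 865 249 ≈ US$ 67 149 set aside for M&E functions per year.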

Results show an intermittent pattern in the frequency of data uploads onto the e-ProMIS platform. Although 12 personnel (7 men; 5 women) have gone through training on e-ProMIS (Agriculture = 6; Livestock = 4; Fisheries = 2), only seven of them (4 men; 3 women) can partially upload data onto e-ProMIS (Agriculture = 4; Livestock = 2; Fisheries = 1). This reveals capacity gaps in the fisheries and livestock departments, where only half of the trained personnel could partially upload data. Although the personnel have not uploaded data onto the e-ProMIS platform, they are prepared to start using it. The rest of the personnel have neither accessed the ‘Users’ section of e-ProMIS nor seen what the portal looks like; access to the portal requires logging in with a ‘User Name’ and ‘Password’. The seven personnel with some capacity to upload data onto e-ProMIS did so very irregularly, showing some level of disorganisation in the departments’ routine M&E activities. Some of the data are uploaded biannually, others annually, while some have not been uploaded at all. These findings show that the departments have limited ability to operationalise NIMES, which explains the very limited efforts made towards operationalising it.

With regard to data quality, the results show a significant relationship (p = 0.04) between the capacitated personnel within the departments and the credibility and accuracy of some of the data sets they have collected. Results further show that over 60% of the data sets collected by these trained personnel meet the threshold of good-quality data. Data sets from the Department of Agriculture were of higher quality than the rest (Agriculture = 73%; Livestock = 58%; Fisheries = 49%). This is attributed to the fact that personnel from that department have attended assorted trainings and have a fair knowledge of e-ProMIS, compared to the Fisheries Department, where no personnel were able to upload any data onto e-ProMIS.

Data access and management

Under this independent variable, data sets are generated from two indicators, namely: (1) number of personnel accessing relevant M&E tools and approaches, as well as the ease of access to these tools; and (2) number of staff engaged in M&E activities, especially periodic progress reviews, data capture and work planning and budgeting.

The study shows that all the 23 personnel (14 men; 9 women) who have benefited from training on M&E have also accessed relevant M&E tools and approaches (Agriculture = 11; Livestock = 7; Fisheries = 5). The commonest tools they have accessed include: the logical framework template, direct observation checklists, structured questionnaires, the focus group discussion guide, the in-depth interview guide and the theory of change guide. However, they have hardly been involved in the development of new M&E tools. Results also show that although 12 personnel (7 men; 5 women) periodically collect, but infrequently upload, the data, the NIMES regulations that require M&E personnel to co-develop the data collection tools with inputs from the ICT unit are not followed. This explains why data are not frequently uploaded: they are collected or stored in different formats and exclude standard indicators required in specific fields within e-ProMIS. Similarly, variations exist among the departments with respect to the kind of M&E documents and standard operating procedures that guide their M&E functions and processes. The department of fisheries uses the performance measurement plan, while the departments of agriculture and livestock use the M&E plan. None of the departments uses the M&E framework, the M&E strategy, the Handbook of Indicators from MED or the M&E policy, all of which are meant to enhance conformity with NIMES. Results further show that all the personnel are engaged in M&E activities in one way or another, especially in annual progress reviews, work planning and budgeting. However, it is evident that some of the work planning and budgeting sessions hardly use M&E data to inform decision-making and future programming.

Status of National Integrated Monitoring and Evaluation System patronage culture

Under this independent variable, data sets are generated from four indicators, namely: (1) level of personnel satisfaction with NIMES, (2) extent of availability of NIMES champions and enhanced departmental priority in NIMES maintenance, (3) level of senior staff buy-in of NIMES and (4) extent of utilisation of NIMES in reporting and planning.

Results show varied levels of personnel satisfaction with NIMES across the departments. Out of the 31 personnel sampled in this study, 16 (10 men; 6 women) are satisfied with NIMES (Agriculture = 8; Livestock = 5; Fisheries = 3). The satisfied personnel comprise those that have been trained on e-ProMIS (12) and those currently attempting some data uploads (7). Those showing dissatisfaction cite a lack of clarity in the guidelines provided and limited access to the required data. With regard to the availability of champions to support the operationalisation of NIMES, the departments indicate that there is very minimal patronage from high-level policy makers and ministry officials. Although only the department of agriculture indicates that it has a champion, it is unclear how this champion has supported the operationalisation of NIMES. This confirms the limited departmental priority given to NIMES maintenance.

Notwithstanding the limited engagement of champions for NIMES, there is a general observation that the senior staff have demonstrated their buy-in of NIMES (Agriculture = 10; Livestock = 7; Fisheries = 5). This buy-in is exhibited in their support for an annual budgetary allocation of up to 3.6% for M&E. They have also supported the staff in undertaking targeted M&E studies. Regarding utilisation of NIMES in reporting and planning, all the departments indicate that they indirectly use data generated from running projects. However, the extent of actual utilisation of NIMES in this process is very limited, as indicated by 12 respondents from the departments (Agriculture = 6; Livestock = 4; Fisheries = 2). In general, the departments record a very minimal NIMES patronage culture. This should alert policy makers to review the entire process of operationalising NIMES.

Attitudinal assessment

Based on the five-point Likert scale that assessed the level of stakeholder satisfaction with NIMES, an average score of 3.12 (indicating indifference) is generated, with average responses from the departments varying from 2.3 (agreeing) to 3.8 (disagreeing) with posited statements (Figure 4). Most of the respondents (Agriculture = 11; Livestock = 7; Fisheries = 5) agree that some of the M&E reports are distributed to other ministries and devolved units in a timely manner, but not uploaded on e-ProMIS. Only three respondents note that the M&E reports from their department of agriculture are delivered to MED according to schedule. On closer analysis, the M&E reports that the respondents referred to are mainly the annual reports, and not the indicator data sets required for uploads on e-ProMIS. These findings confirm that there is minimal operationalisation of NIMES.

FIGURE 4: Likert scale ratings on attitudinal assessment.
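The scoring behind these figures is straightforward. Assuming the conventional anchors (1 = strongly agree through 5 = strongly disagree, so that low means read as agreement and values near 3 as indifference), the sketch below reproduces the arithmetic on invented responses.

    # Hedged sketch of five-point Likert scoring; anchor labels are assumed
    # and the responses are invented, not the study's data.
    from statistics import mean

    scale = {"strongly agree": 1, "agree": 2, "neutral": 3,
             "disagree": 4, "strongly disagree": 5}

    responses = ["agree", "neutral", "disagree", "neutral",
                 "agree", "disagree", "neutral", "disagree"]
    score = mean(scale[r] for r in responses)
    print(round(score, 2))  # 3.12 for this invented set, the 'indifferent' band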

Use of a model to predict the likelihood of operationalising National Integrated Monitoring and Evaluation System

The data generated from the indicators are used to fit a model to determine the extent of operationalisation of NIMES in the three departments. The logit model (Maddala 1983) is used. In this model, the likelihood that a department operationalises NIMES (Pi) is estimated from a vector (X) of explanatory (independent) variables predicted to influence institutionalisation of NIMES and the e-ProMIS platform. In this article, the vector X is taken as a function of three sets of factors (Figure 2) and the associated 10 indicators.

Thus, the likelihood that a department operationalises NIMES (Pi) is estimated as:

P_i = \Pr(X_i b + \varepsilon > 0) = \frac{e^{X_i b}}{1 + e^{X_i b}} \quad [Eqn 3]

where ε is the error term with a logistic distribution. Based on Eqn 3, a conceptual model is then fitted as follows:

\Pr(y = 1 \mid X) = \frac{e^{Xb}}{1 + e^{Xb}} \quad [Eqn 4]

where the dependent variable, y, takes the value of 1 if the department operationalises NIMES, and 0 otherwise; X is the vector of independent variables, which may include a constant; and b is the corresponding parameter vector. A larger Xb indicates a higher likelihood of the department institutionalising NIMES.

Based on Eqn 4, the likelihood of operationalisation of NIMES by any department is further estimated using the following logit model:

\ln ON_i = \beta_0 + \sum_j \beta_j \ln X_{ij} + \varepsilon_i \quad [Eqn 5]

where ln ONi = the natural log of the dependent variable, here taken as the likelihood of each department operationalising NIMES; Xi = the vector of explanatory variables; β0 and βj = the estimated parameters, whose magnitudes show the direction and impact of change; and εi = the random error term. Equation 5 is further transformed to estimate the likelihood of operationalising NIMES using the selected factors and covariates (indicators) as follows:

\ln ON = \beta_0 + \beta_1 ST + \beta_2 FU + \beta_3 BG + \beta_4 OT + \beta_5 ACC + \beta_6 NS + \beta_7 SAT + \beta_8 CHA + \beta_9 APP + \beta_{10} UGR + \varepsilon_i \quad [Eqn 6]

where ON = the likelihood of each department operationalising NIMES; ST = number of departmental staff capacitated on e-ProMIS; FU = frequency of data uploads (1 = regularly; 0 = irregularly); BG = total budget allocated to M&E activities (US$); OT = total number of personnel able to upload data onto e-ProMIS; ACC = whether personnel have access to relevant M&E tools and approaches (1 = Yes; 0 = No); NS = total number of staff engaged in M&E activities; SAT = personnel satisfaction with NIMES (1 = Yes; 0 = No); CHA = availability of a NIMES champion (1 = Yes; 0 = No); APP = senior staff buy-in of NIMES (1 = Yes; 0 = No); UGR = whether the department updates and uses NIMES to generate reports (1 = Yes; 0 = No); and εi = the error term.

Using Eqn 6, the model is fitted to the departmental data, and the resulting estimates are presented in Table 2.

The model with all predictor variables is statistically significant (p = 0.03). The strongest predictors of the likelihood of operationalisation of NIMES include: the amount of annual budget allocated to M&E activities (BG), the number of M&E staff actively engaged in M&E functions (NS), the number of staff capacitated on e-ProMIS (ST), the frequency of data uploads (FU), the number of staff uploading data (OT), the level of senior staff buy-in of NIMES (APP) and the level to which the departments update and use NIMES to generate reports (UGR) (Table 2). The odds ratios of 8.24, 5.13, 4.83 and 3.19 indicate that departments that annually allocate budgets to M&E functions, have regular M&E staff, have their senior staff supporting NIMES and have some of their staff capable of uploading data onto e-ProMIS are, respectively, over 8, 5, 4.8 and 3 times more likely to operationalise NIMES (controlling for all other factors in the model). This shows that the model can be useful in predicting the likelihood of each department operationalising NIMES. The model explains between 62.2% (Cox and Snell R-squared) and 91.4% (Nagelkerke R-squared) of the variance in readiness to operationalise NIMES, and it correctly classified 93.5% of the cases. Up to 37.8% of the variance may thus be explained by factors not included in this study, while 6.5% of the cases were misclassified.

TABLE 2: Logistic regression predicting likelihood of operationalising National Integrated Monitoring and Evaluation System (NIMES).
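The reported statistics follow from standard definitions: an odds ratio is the exponential of a logit coefficient, Cox and Snell’s R-squared is 1 − exp[(2/n)(LL0 − LL1)], where LL0 and LL1 are the log-likelihoods of the null and fitted models, and Nagelkerke’s R-squared rescales the Cox and Snell value by its attainable maximum. The sketch below computes them from the synthetic `result` fitted in the sketch under Method; the formulas are textbook ones, not taken from the article.

    # Standard odds ratios, pseudo R-squared values and classification rate,
    # computed from the synthetic statsmodels fit `result` shown earlier.
    import numpy as np

    def cox_snell_r2(res):
        return 1 - np.exp((2 / res.nobs) * (res.llnull - res.llf))

    def nagelkerke_r2(res):
        return cox_snell_r2(res) / (1 - np.exp((2 / res.nobs) * res.llnull))

    odds_ratios = np.exp(result.params)  # cf. the article's OR of 8.24 for BG
    correct = ((result.predict() >= 0.5) == result.model.endog).mean()
    print(cox_snell_r2(result), nagelkerke_r2(result), correct)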

From the study findings, it is evident that the factors influencing operationalisation of NIMES in the ministry of agriculture can be summarised in Figures 2 to 5. This is because the article shows that operationalisation of NIMES (❶) within the three departments significantly depends on three interrelated factors, namely: (1) enhanced M&E capacity of the departments (①), such that individuals and departments whose capacities have been enhanced perform better than programme support personnel who have never benefited from any M&E training; (2) an enabling departmental environment that motivates the departmental staff to ensure credible data management (②) and (3) the functionality of the M&E systems and units (③) within these departments, especially with respect to accountability, data generation and sharing, as well as resource mobilisation for further implementation of planned tasks. Figure 5 illustrates these factors in detail.

FIGURE 5: Factors influencing operationalisation of National Integrated Monitoring and Evaluation System (NIMES).

Based on the study findings, it is evident that the three departments can still operationalise NIMES by ensuring that the personnel responsible for M&E, knowledge management and ICT undertake the tasks illustrated in Figure 6.

FIGURE 6: Requisite stages for operationalisation of National Integrated Monitoring and Evaluation System (NIMES).

Conclusions

Although the three departments have been capacitated, through trainings and budgetary allocations for M&E functions, to operationalise NIMES via the e-ProMIS platform, they have made no significant steps towards realising this objective. Only 7 of the 23 departmental personnel trained and charged with uploading relevant data onto NIMES have partially accessed and used the e-ProMIS platform. This poor performance of the departments in utilising the skills and resources availed to them compromises the credibility of reports submitted to the MED, which the directorate needs in order to prepare relevant policy documents for the Parliamentary Budget Committee and Parliament for approval.

Given the non-adherence to NIMES through the MED-approved e-ProMIS, the performance of the departments towards achieving their M&E objectives, measured against the parameters for NIMES, is unsatisfactory. None of the requirements of NIMES is adhered to, including the requisite stages for ensuring credible M&E processes such as data collection, data analysis, development of M&E tools and adherence to NIMES protocols and standards.

Among the key factors that influence the operationalisation of NIMES are dysfunctional M&E systems and units, sub-optimal departmental and human capacity on M&E, and a limited enabling environment for joint planning, budgeting, peer review of performance and mutual accountability. Other factors include: limited departmental buy-in and ownership of the process; a lack of NIMES champions within the departments and the entire county government; pronounced individual and institutional M&E capacity gaps; limited availability of data and information, and unclear information flow to decision-makers on the performance and implementation of strategic plans; partial production of vital lessons for further operationalisation of NIMES; limited integration of NIMES into public management practice and culture; and inadequate integration of NIMES in planning and budgeting.

A major hypothesis postulated in this article is that unless NIMES is operationalised using the required e-ProMIS platform, the Directorate of M&E will not be able to adequately report on the progress made in implementing public policies, programmes and projects, and thus the county’s contribution to the CIDP, the MTP, the Vision 2030, the CAADP and the SDGs will be unclear. This hypothesis is confirmed: the failure to operationalise NIMES further exacerbates the challenges MED faces in providing credible data for decision-making.

This article recommends that: (1) the heads of departments should facilitate capacity strengthening of departmental staff to enable them to meet their M&E objectives and operationalise NIMES; (2) the policy makers should articulate roles and responsibilities for M&E personnel, including data uploads on NIMES; (3) the departments should appoint champions who ensure that the personnel operationalise NIMES; (4) the heads of departments should ensure inter-departmental meetings and exchange of information and lessons learned and (5) policy makers should enforce integration of NIMES in annual planning and budgeting.

Acknowledgements

Competing interests

The author declares that he has no financial or personal relationships that may have inappropriately influenced him in writing this article.

Authors’ contributions

I declare that I am the sole author of this research article.

Funding

The author declares that he did not receive any support in the form of grants, equipment, drugs or other support from any source to undertake this task, or to facilitate the conduct of the work he has described in this article or the writing of the article itself.

Data availability statement

The data sets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Disclaimer

The author declares that the views expressed in the submitted article are his own and not an official position of the institution or funder.

References

Andersson, B., Jensen, R.I., Naitore, H. & Christoplos, I., 2014, Final evaluation of the National Integrated Monitoring and Evaluation System (NIMES) Capacity Development Project (CDP), Citat Publishers, Stockholm.

Behrens, T.R. & Kelly, T., 2008, ‘Paying the piper: Foundation evaluation capacity calls the tune’, in J.G. Carman & K.S. Fredericks (eds.), Nonprofits and evaluation: Volume 119 of new directions in programme evaluation, pp. 37–50, Jossey-Bass Publishers, San Francisco, CA.

Bhattacherjee, A., 2011, Social science research: Principles, methods, and practice, Version 2.0, University of South Florida, viewed 23 February 2019, from Blackboard: my.usf.edu.

Birckmayer, J.D. & Weiss, C.H., 2000, ‘Theory-based evaluation in practice: What do we learn?’, Evaluation Review 24, 407–431. https://doi.org/10.1177/0193841X0002400404

Bornstein, L., 2003, ‘Management standards and development practice in the South African aid chain’, Public Administration and Development 23, 393–404. https://doi.org/10.1002/pad.291

Bornstein, L., 2006, ‘Systems of accountability, webs of deceit? Monitoring and evaluation in South African NGOs’, Development 49(2), 52–61. https://doi.org/10.1057/palgrave.development.1100261

Boudreau, J.W., 1996, Human resources and organisation success: Working paper series, Cornell Centre for Advanced Human Resource Studies (CAHRS), Cornell University, New York.

Buse, K.E. & Vigneri, L.M., 2008, Can project-funded investments in rural development be scaled up? Lessons from the millennium villages project. Natural resource perspectives, vol. 118, Overseas Development Institute (ODI), London.

Carlsson, J., Eriksson-Baaz, M., Fallenius, A.M. & Lövgren, E., 1999, Are evaluations useful? Cases from Swedish Development Cooperation, Sida, Stockholm, Sweden.

Carman, J.C. & Fredericks, K.A., 2008, ‘Non-profits and evaluation: Empirical evidence from the field’, in J.G. Carman & K.A. Fredericks (eds.), Non-profits and evaluation. New directions for evaluation, vol. 119, pp. 51–71, Winter.

Chen, H., 2005, Practical programme evaluation: Assessing and improving planning, implementation and effectiveness, Sage, Newbury Park, CA.

Christie, C.A., 2007, ‘Reported influence of evaluation data on decision makers’ actions’, American Journal of Evaluation 28(1), 8–25. https://doi.org/10.1177/1098214006298065

CIDP, 2013, Kisumu County Integrated Development Plan, 2013–2017, Government Printers, Nairobi.

Coryn, C.L.S., Noakes, L.A., Westine, C.D. & Schröter, D.C., 2011, ‘A systematic review of theory-driven evaluation practice from 1990 to 2009’, American Journal of Evaluation 32(2), 199–226. https://doi.org/10.1177/1098214010389321

David, C., 2003, ‘With help from corporations, German Group fights corruption’, Wall Street Journal, 26 November, pp. 14–15.

Dearden, P.N. & Kowalski, B., 2003, ‘Programme and project cycle management (PPCM): Lessons from south and north’, Development in Practice 13(5), 501–514. https://doi.org/10.1080/0961452032000125875

Diabre, Z., 2002, Forward: The handbook on monitoring and evaluation for results, United Nations Development Programme, New York.

Fleischer, D.N. & Christie, C.A., 2009, ‘Evaluation use: Results from a survey of U.S. American Evaluation Association members’, American Journal of Evaluation 30(2), 158–175. https://doi.org/10.1177/1098214008331009

Government of Kenya (GOK), 2007, Master plan for the implementation of a National Integrated Monitoring and Evaluation System in Kenya, Ministry of State for Planning, National Development and Vision 2030, Nairobi, Kenya.

Government of Kenya (GOK), 2016, Guideline for the development of county integrated monitoring and evaluation system, Ministry of Devolution and Planning, Nairobi.

Hauge, A., 2001, Strengthening capacity for monitoring and evaluation in Uganda: A results-based perspective, World Bank Operations Evaluation Department, ECD Working Paper Series, No. 8, Washington, DC.

Hendricks, M., Plantz, M.C. & Pritchard, K.J., 2008, ‘Measuring outcomes of united way-funded programmes: Expectations and reality’, in J.G. Carman & K.A. Fredericks (eds.), Nonprofits and evaluation. New directions for evaluation, vol. 119, pp. 13–35, Wiley Periodicals, Inc., Washington, DC.

Johnson, K., Greenseid, L.O., Toal, S.A., King, J.A., Lawrenz, F. & Volkov, B., 2009, ‘Research on evaluation use: A review of the empirical literature from 1986 to 2005’, American Journal of Evaluation 30(3), 377–410. https://doi.org/10.1177/1098214009341660

Khan, K., 2003, Strengthening of monitoring and evaluation system, Pakistan Poverty Alleviation Fund, Islamabad.

Kopczynski, M.E. & Pritchard, K., 2004, ‘The use of evaluation by nonprofit organisations’, in J.S. Wholey, H.P. Hatry & K.E. Newcomer (eds.), Handbook of practical programme evaluation, 2nd edn., pp. 649–669, Jossey-Bass, San Francisco, CA.

Kremer, M., 2003, ‘Randomised evaluations of educational programmes in developing countries: Some lessons’, American Economic Review Papers and Proceedings 93(2), 102–115. https://doi.org/10.1257/000282803321946886

Kusek, J.Z. & Rist, R.C., 2004, Ten steps to a results based monitoring and evaluation system: A handbook for development practitioners, The World Bank, Washington, DC.

Likert, R., 1932, ‘A technique for the measurement of attitudes’, Archives of Psychology 140, 1–55.

Mackay, K., 2006, Institutionalisation of monitoring and evaluation system to improve public sector management, ECD Working Paper Series, No. 15, World Bank, Washington, DC.

Mackay, K., 2007, How to build M&E systems to support better government, International Bank for Reconstruction and Development, World Bank, New York.

Madaus, G.F., Stufflebeam, D.L. & Kellaghan, T., 2000, Evaluation models: Viewpoints on educational and human services evaluation, 2nd edn., Kluwer Academic Publishers, Hingham, MA.

Maddala, G.S., 1983, Limited dependent and qualitative variables in econometrics, Cambridge University Press, Cambridge, UK.

Manuel, F.C., 2009, Building a results-based management and evaluation system in Colombia. Revised September 2009, The World Bank, Washington, DC.

May, E., Shand, D., Mackay, K., Rojas, F. & Saavedra, J., 2006, Towards institutionalising monitoring and evaluation systems in Latin America and the Caribbean: Proceedings of a World Bank/Inter-American Development Bank Conference, World Bank, Washington, DC.

Mosse, D. & Lewis, E.D., 2005, The aid effect. Giving and governing in international development, Sage, London.

Ministry of Devolution and Planning (MDP), 2014, M&E framework for Kenya, 2014, Second draft. Government Printers, Nairobi.

Ministry of Planning and National Development (MPND), 2007, Master plan for the implementation of a National Integrated Monitoring and Evaluation System, 2007–2012, Government Printers, Nairobi.

Newcomer, K.E., 2004, ‘How might we strengthen evaluation capacity to manage evaluation contracts?’, American Journal of Evaluation 25, 209. https://doi.org/10.1016/j.ameval.2004.03.006

Owen, J.M., 2007, Programme evaluation: Forms and approaches, 3rd edn., 298 p., Guilford Press, New York.

Patton, M.Q., 1997, Utilisation-focused evaluation. The new century text, 3rd edn., Sage, Thousand Oaks, CA.

Patton, M.Q., 1998, ‘Discovering process use’, Evaluation 4(2), 225–233. https://doi.org/10.1177/13563899822208437

Patton, M.Q., 2001, ‘Evaluation, knowledge management, best practices, and high quality lessons learned’, American Journal of Evaluation 22(3), 329–336. https://doi.org/10.1177/109821400102200307

Pawson, R. & Tilley, N., 1997, Realistic evaluation, Sage, London.

Picciotto, R., 2003, ‘International trends and development evaluation: The need for ideas’, American Journal of Evaluation 24(2), 227–234. https://doi.org/10.1177/109821400302400206

Porter, S. & Goldman, I., 2013, ‘A growing demand for M&E in Africa’, African Evaluation Journal 1(1), Art #25, 1–9. https://doi.org/10.4102/aej.v1i1.25

Rebien, C., 1996, Evaluating development assistance in theory and practice, Avebury, Aldershot.

Rist, R.C., 2000, ‘Evaluation capacity development in the People’s Republic of China: Trends and prospects’, in K. Malik & C. Roth (eds.), Evaluation capacity development in Asia, pp. 39–44, United Nations Development Program Evaluation Office, New York.

Sadoshima, S., 2010, Annual evaluation report on Japan’s economic cooperation, Ministry of Foreign Affairs, Tokyo, Japan.

Samset, K., Forss, K. & Hauglin, O., 1992, Learning from experience – A study of the feedback from the evaluation system in the Norwegian aid administration, Ministry for Foreign Affairs, Oslo.

United Nations Development Programme (UNDP), 2013, MDG progress report, Transaction Publishers, Lusaka.

U.S. General Accounting Office (GAO), 2003, Programme evaluation: An evaluation culture and collaborative partnerships help build agency capacity, Emerald Group Publishing Limited, Washington, DC.

Vestman, O.K. & Conner, R.F., 2006, ‘The relationship between evaluation and politics’, in I.F. Shaw, J.C. Greene & M.M. Mark (eds.), The Sage handbook of evaluation, pp. 225–242, Sage, London, UK.

World Bank, 2001a, Outcomes-based budgeting systems: Experience from developed and developing countries, World Bank Group, Washington, DC.

World Bank, 2001b, Readiness assessment – Toward results-based monitoring and evaluation in Romania, World Bank Group, Washington, DC.

World Bank, 2004, Monitoring and evaluation: Some tools methods and approaches, World Bank Group, Washington, DC.


 
