About the Author(s)


Hesborn Wao
Division of Evidence-Based Medicine and Comparative Effectiveness Research, USF Health Morsani College of Medicine (MCOM), University of South Florida, United States

Rohin Onyango
Africa Capacity Alliance, Nairobi, Kenya

Elizabeth Kisio
Children of God Relief Institute-Lea Toto Program, Karen-Nairobi, Kenya

Moses Njatha
ICF, MEASURE Evaluation PIMA, Nairobi, Kenya

Nelson O. Onyango
School of Mathematics, University of Nairobi, Kenya

Citation


Wao, H., Onyango, R., Kisio, E., Njatha, M. & Onyango, N.O., 2017, ‘Strengthening capacity for monitoring and evaluation through short course training in Kenya’, African Evaluation Journal 5(1), a192. https://doi.org/10.4102/aej.v5i1.192

Original Research

Strengthening capacity for monitoring and evaluation through short course training in Kenya

Hesborn Wao, Rohin Onyango, Elizabeth Kisio, Moses Njatha, Nelson O. Onyango

Received: 18 Nov. 2016; Accepted: 24 Dec. 2016; Published: 13 Apr. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Weak monitoring and evaluation (M&E) systems and limited supply of M&E human resources in Africa signal the need to strengthen M&E capacity.

Objectives: This exploratory study evaluated the effect of short course training on professionals’ knowledge and skills in the areas of mixed methods research, systematic review and meta-analysis and general principles of M&E.

Methods: A partially mixed concurrent dominant status design including quantitative (multilevel modelling and meta-analyses) and qualitative (thematic content analysis) components was employed to evaluate the impact of a 4-day short course training focusing on these areas.

Results: Thirty-five professionals participated in the training. Participants experienced an increase in knowledge in all three areas; however, the average change in knowledge did not differ across participants’ employment settings. Self-stated objectives that were SMART and that belonged to a higher level of Bloom’s taxonomy were associated with change in knowledge. Based on participants’ comments, the majority intended to apply what they learned to their work; clarity of content delivery was the most liked aspect of the training, and more practical sessions were recommended as a way to improve the training.

Conclusions: This study provides preliminary evidence of the potential of short course training as an approach to strengthening M&E capacity in less-developed countries such as Kenya. It underscores the importance of participants’ self-stated objective(s) as an element to consider in enhancing the knowledge, attitudes and skills needed for effective capacity building in M&E.

Introduction

Effective monitoring and evaluation (M&E) is pivotal for good governance and public resource management because it promotes transparency, accountability and a performance culture (AFDB 2009; Mackay 2006). The emphasis on results, effectiveness and impact continues to highlight the weaknesses and the limited supply of M&E capacity in Africa, thus signalling the growing need to strengthen M&E capacity development efforts in the continent (AFDB 2009; Porter & Goldman 2013).

The literature on continuing professional development in M&E identifies practice-based learning activities for enhancing professional competence and lately links them to various strategies for lifelong learning (Kuji-Shikatani 2015; Patton & Patrizi 2005; Stevahn & King 2016; Torres 2016). There is emphasis on the importance of conceptualising organisational evaluation capacity building (ECB) both in terms of the capacity to do [conduct] evaluation and the capacity to use it (Cousins et al. 2008, 2014b; Hueftle Stockdill, Baizerman & Compton 2002). Briefly, ECB refers to the intentional, continuous act of creating and sustaining organisational processes that make doing quality programme evaluation and using it routine within and between organisations (Hueftle Stockdill et al. 2002). With rising demands for accountability and evidence-based practices, ECB continues to be a topic of great interest to many organisations. Although evaluators are engaged in ECB activities in these organisations, little is known about the strategies used and their effectiveness (Preskill & Boyle 2008). Broadly, delivery of training to individuals aspiring to work in the M&E field (pre-service) or those already practising M&E (in-service) has been either via direct approaches (e.g. university-based graduate programmes, achievement-oriented training offered by professional societies in the form of workshops, online programmes or training institutes) or via indirect approaches such as the use of practicum activities (Cousins, Bourgeois & Associates 2014a).

Over the last 10 years, Kenya has made significant progress in strengthening M&E capacity. A number of training programmes have been established by universities or colleges, research institutions and development partners. Nine universities (five public and four private) currently offer master’s level training in M&E. International organisations and communities of practice such as MEASURE Evaluation provide platforms for M&E training, including self-paced courses, webinars and live classes. Although these training programmes contribute to the supply side by preparing M&E professionals, research involving multiple organisations across Africa has shown that the contents of such trainings tend to emphasise theory at the expense of practical application and are perceived to have limited practical utility or relevance to trainees’ work (Cousins et al. 2014b; Tarsilla 2014). These findings are consistent with findings of studies conducted in developed countries (Clinton 2013; Suarez-Balcazar & Taylor-Ritzler 2013). To conduct evidence-based M&E, for example, professionals need skills in how to search for evidence [which often includes systematic review and meta-analysis (SRMA)], how to prepare evidence if none exists and how to undertake mixed methods investigations as increasingly necessitated by the evaluation questions addressed.

For most M&E professionals, who tend to have limited time to attend formal training, short-term courses (hereafter, short courses) provide an opportunity amongst an array of capacity building initiatives to enhance M&E skills. By short courses, we refer to intensive and highly specialised training aimed at refreshing and upgrading the knowledge and skills of professionals so that they can perform their work effectively. In Kenya, a number of institutions (e.g. Kenya Institute of Management, Population Studies & Research Institute at the University of Nairobi, the ADAM Consortium Project, InsideNGO and AMREF International) conduct short courses to strengthen capacity in M&E. Although short courses are increasingly used, a systematic evaluation of their impact on professionals’ knowledge and skills has not been performed, especially in less-developed countries. A recent survey of 35 national evaluation societies in 33 low- and middle-income countries found that face-to-face interaction, which often involves hands-on transfer of tacit knowledge, is associated with enhanced ECB (Dewachter & Holvoet 2016). Based on the results of this multinational study, we surmise that the short course, a training modality that includes face-to-face interaction, has the potential to strengthen participants’ M&E capacity.

Objectives

The primary purpose of this study was to evaluate the effect of short course training on professionals’ knowledge and skills in three areas: mixed methods research (MMR), SRMA and general principles of M&E. A secondary purpose was to explore the extent to which the short course training facilitated application of knowledge in the workplace. This baseline evaluation was intended to inform policies related to the use of short courses as a strategy to strengthen M&E in less-developed countries such as Kenya.

Methods

Description of the short course training

A team of five trainers from the University of South Florida, the University of Nairobi, ICF, MEASURE Evaluation PIMA, Children of God Relief Institute and Africa Capacity Alliance conducted a 4-day short course consisting of three interrelated segments: MMR, SRMA and general principles of M&E. The purpose of the training was to strengthen participants’ practical M&E skills by focusing on MMR, SRMA and statistical data analysis.

The training consisted of 16 modules. Each day (08:30–17:30), four modules were covered, each lasting about 2 h. There was a 30-min morning tea break, a 1-h lunch break and a 15-min evening tea break. A typical session consisted of an interactive PowerPoint presentation with numerous practical examples, questions and answers, practice using different analysis software with trainers present for consultation, requests for verbal feedback from participants, sharing of real-life examples (e.g. having two doctoral students present a summary of their dissertation work, which was based on an MMR design) and reinforcement of key concepts at the end of each session. Moore, Green and Gallis’ conceptual model for planning and assessing continuous learning guided the structuring of the training (Moore, Green & Gallis 2009). For example, in Preparing Evidence, a sample SRMA module, the session’s activities were structured such that the trainer could assess the extent to which a participant knew what to do (e.g. could identify dichotomous, continuous or time-to-event outcomes in a meta-analysis scenario), how and when to do it (e.g. could determine whether relative risk, mean difference or hazard ratio is the appropriate effect measure) and could independently demonstrate to others how to do it (e.g. could show peers how to perform a given meta-analysis using software such as CMA or Stata).

This training was unique in several aspects. Firstly, it incorporated hands-on use of multiple software packages (RevMan, Stata, R, Excel and CMA) to perform different analyses, thus affording participants the opportunity to practise what they learned before returning to their workplace. Secondly, rather than waiting until the end of the training, participants provided feedback that informed readjustment of the training to make it more relevant to their needs. Thirdly, participant diversity in terms of disciplines encouraged interaction, as all shared a common interest in M&E. Fourthly, unlike most trainings in which all trainers come from the same institution, ours was delivered by a multidisciplinary team from different collaborating institutions: a Carnegie African Diaspora Fellowship Program (CADFP) fellow from the University of South Florida in the United States; a CADFP host from the School of Mathematics, University of Nairobi; and three M&E experts from three different capacity development partners in Kenya (ICF, MEASURE Evaluation PIMA, Children of God Relief Institute and Africa Capacity Alliance). The composition of the training team was informed by a study indicating that collaboration between international partners and African institutions, or between in-country institutions and organisations keen on ECB, is a promising strategy for enhancing M&E capacity in Africa (Tarsilla 2014).

Study design

To evaluate the effect of the short course on participants’ knowledge and skills in the areas of MMR, SRMA and principles of M&E, a partially mixed concurrent dominant status design was employed (Leech & Onwuegbuzie 2009). Partially mixed implies that findings from the quantitative and qualitative phases were integrated after completing data analyses; concurrent implies that data for both phases were collected concurrently; and dominant implies that the quantitative phase had more weight in addressing the overarching question, ‘What is the effect of short course training on participants’ knowledge and skills?’ Numeric data quantified participants’ perception of the effect (quantitative phase), whereas qualitative data facilitated explanation of the quantified impact, thus allowing us to gain deeper insight into the impact of the training and to justify the meta-inferences drawn (Greene & Caracelli 1997).

Evaluation questions

Two questions were addressed in the quantitative phase: (1) Do participants experience a change in knowledge following participation in the short course?; (2) What factors affect change in knowledge? Five questions were addressed in the qualitative phase: (1) What objectives do participants cite for attending the training?; (2) What changes in practice do participants intend to make following the training (or perceived barriers to making practice changes, if any)?; (3) What aspects of the training do participants like the most?; (4) What topics do participants suggest for future training?; and (5) How can the training be improved?

Context and participants

Data for this study were collected as part of a larger project, Building Capacity Through Training and Mentoring of Graduate Students and Early-Career Faculty in Systematic Reviews, Meta-Analysis, and Mixed Methods Applications, whose aim was to build capacity in M&E through training and mentoring in MMR and SRMA. The first author was a CADFP fellow, the senior author was the CADFP host and the remaining co-authors were senior M&E professionals in Kenya. The goal of CADFP is to facilitate equitable, effective and mutually beneficial international higher education engagements between African Diaspora academics in the United States and Canada and scholars in Africa (CADFP 2016). Consistent with the Operations Evaluation Department’s three pillars of M&E capacity development (AFDB 2009), this training focused on strengthening individual knowledge and skills in M&E using a short course approach; enhancing institutional capacity through collaboration with the host institution and engaging experts as co-trainers; and creating an enabling environment by engaging the leadership at the host institution to consider offering similar training in the future.

The training targeted both pre-service and in-service M&E professionals, including early-career faculty and graduate students interested in enhancing their programme evaluation skills and practitioners in health, social and behavioural science research from the public and private sectors. Notably, participation in this study was motivated by participants’ desire to enhance their knowledge and skills in M&E in general and specifically in two areas: MMR and SRMA. About one-third of the participants were funded by their institution. The remaining participants were either self-funded (non-students) or received a waiver (students) from the host institution.

Evaluation of the training

At the end of the training, participants were emailed a link to the evaluation survey and requested to complete it using either their laptops or smartphones. The survey, developed in Qualtrics (an online survey platform), contained objective items (quantitative data) and open-ended items (qualitative data). The items were developed according to the first two of Kirkpatrick’s four levels of programme evaluation (Kirkpatrick & Kirkpatrick 2005), that is, reaction (participants’ opinion of the learning experience) and learning (degree to which participants perceived their knowledge changed as a result of participating in the training). Plans are underway to send a post-training survey in 6 months; in the meantime, we attempted to assess improvement in job performance by using a proxy measure, namely intention to make practice changes following the training.

Quantitative data

Change in knowledge, the outcome of interest, was assessed by having participants rate their level of knowledge about the content covered before and after the training using a 5-point rating scale (1 = novice to 5 = expert). Participants’ reaction to the training (e.g. degree of satisfaction with content, extent to which information was effectively conveyed, whether the sessions were well organised and whether there was an opportunity for interaction) was rated on a 5-point scale (1 = disagree strongly to 5 = agree strongly), whereas the degree of achievement of objectives was rated on a 3-point scale (1 = no, 2 = somewhat, 3 = yes).

Qualitative data

The survey included open-ended items requiring participants to state their objective(s) for attending the session, changes in practice they intended to make following the training (or perceived barriers to making the changes), aspects of the training they liked the most, topics they suggested for future training and recommendations on how to improve the training. Responses to these items constituted the qualitative data. Qualitative data were quantitised to aid integration with the quantitative data. For example, self-stated objectives were examined for the degree to which they constituted a SMART objective (Doran 1981), each objective being scored 1 if the following elements were specified: Specific (activity or action of interest), Measurable (amount of change expected in terms of quantity, quality or frequency), Achievable (attainable given the time frame and resources), Relevant (impact of the activity) and Time-bound (time frame for action). A score of 0 was recorded if an element was missing. A maximum score of 5 indicated a SMART objective, whereas a score of less than 5 indicated that the objective was not SMART. Similarly, we used Bloom’s taxonomy (Anderson & Krathwohl 2001; Bloom et al. 1956) to determine the highest category implied in the self-stated objective(s). Knowledge (recall of facts and basic concepts) was scored 1, comprehension (understanding of ideas and concepts) was scored 2, application (use of information in a new setting) was scored 3, analysis (drawing connections amongst ideas) was scored 4, synthesis (assembling elements to form a whole) was scored 5 and evaluation (justification of a position) was scored 6. A score of 3 and above was considered high-level Bloom’s taxonomy, whereas a score of less than 3 was considered low-level Bloom’s taxonomy. Survey items were developed by the first author and independently reviewed by the other trainers for clarity, appropriateness and relevance. Additional feedback on the survey was obtained from two volunteers who reviewed the items for clarity and appropriateness.
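To make the quantitising scheme concrete, the following sketch in R (one of the software packages used in the training, although not necessarily the tool used for this coding step) shows how a single hypothetical self-stated objective might be scored against the five SMART elements and assigned a Bloom’s taxonomy level; the element judgements themselves were made by human coders and are supplied manually here.

# Illustrative sketch only (hypothetical objective and scores): quantitising one
# self-stated objective against the SMART elements and Bloom's taxonomy.
smart_elements <- c(specific = 1,   # names a concrete activity
                    measurable = 1, # amount of change stated
                    achievable = 1, # attainable within the course time frame
                    relevant = 1,   # impact of the activity indicated
                    timebound = 0)  # no time frame given
smart_score <- sum(smart_elements)   # 0-5
is_smart    <- smart_score == 5      # SMART only if all five elements are present

bloom_levels <- c(knowledge = 1, comprehension = 2, application = 3,
                  analysis = 4, synthesis = 5, evaluation = 6)
bloom_score <- bloom_levels[["application"]]  # highest category implied by the objective
high_bloom  <- bloom_score >= 3               # 3 and above = high-level Bloom's taxonomy

data.frame(smart_score, is_smart, bloom_score, high_bloom)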

Data analysis
Quantitative: Multilevel modelling and meta-analysis

To model change in knowledge, we considered the clustering of participants (level-1 unit of analysis) within employment settings (level-2). Acknowledging this nested nature of the data, we did not assume the outcome was invariant across employment settings (academic vs. government agency vs. nongovernmental organisation). Such an assumption may lead to incorrect conclusions being drawn from the resulting inferential statistics (Raudenbush & Bryk 2002). The SAS PROC MIXED routine was used to fit hierarchical models (Singer 1998). We began with the unconditional means model to assess the variation of mean change in knowledge across employment settings. The outcome (Yij) was expressed as a linear combination of the grand mean (γ00), the setting effect (µ0j) and the random error associated with the ith participant in the jth setting (rij): Yij = γ00 + µ0j + rij, where µ0j ~ iid N(0, τ00) and rij ~ iid N(0, σ²). We estimated the fixed effect γ00 (the average change in knowledge) and two random effects, τ00 (variability in means across settings) and σ² (variability within settings). Next, we added the predictors, one at a time, and assessed model fit using different indices (Akaike 1973). Because the content covered in the three segments was different (MMR vs. SRMA vs. M&E), a meta-analysis technique was used to assess whether the impact of the training differed across the segments. For each segment, we calculated the mean difference in change in knowledge, aggregated it across all participants in the segment and then compared these aggregate mean differences across segments. The participant, intervention, comparison and outcome (PICO) characteristics for the meta-analysis were as follows: Participants (each segment included at least one participant engaged in M&E-related work), Intervention (participation in a training segment), Comparison (pre-training and post-training knowledge levels were compared) and Outcome (change in knowledge = ‘post-training level’ – ‘pre-training level’).
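The models above were fitted with SAS PROC MIXED and the segment-level pooling was performed with meta-analysis software; purely as an illustrative sketch (not the code actually used), an equivalent specification in R with the lme4 and metafor packages might look as follows, assuming a hypothetical participant-level data frame dat (columns change, setting and smart) and a per-segment summary table segment_summary (pre- and post-training means, standard deviations, sample sizes and pre–post correlations).

library(lme4)     # mixed-effects models (stands in for SAS PROC MIXED here)
library(metafor)  # meta-analysis

# Unconditional means model: Y_ij = gamma_00 + u_0j + r_ij,
# with a random intercept for employment setting.
m0 <- lmer(change ~ 1 + (1 | setting), data = dat)
vc <- as.data.frame(VarCorr(m0))
tau00  <- vc$vcov[vc$grp == "setting"]   # between-setting variance
sigma2 <- vc$vcov[vc$grp == "Residual"]  # within-setting variance
icc    <- tau00 / (tau00 + sigma2)       # intra-class correlation (rho)

# Add predictors one at a time (here, a SMART-objective indicator) and
# compare fit indices such as the AIC (models refitted by maximum likelihood).
m0_ml <- lmer(change ~ 1 + (1 | setting), data = dat, REML = FALSE)
m1    <- lmer(change ~ smart + (1 | setting), data = dat, REML = FALSE)
AIC(m0_ml, m1)

# Standardised mean change (pre vs. post) per segment, pooled and then
# compared across segments via a moderator term.
es  <- escalc(measure = "SMCC", m1i = post_m, m2i = pre_m,
              sd1i = post_sd, sd2i = pre_sd, ni = n, ri = r_prepost,
              data = segment_summary)
res <- rma(yi, vi, mods = ~ segment, data = es)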

Qualitative: Thematic content analysis using constant comparison technique

Thematic content analysis was accomplished in four steps. Firstly, responses to open-ended items were independently coded by three members of our research team. We employed in vivo coding [i.e. assigning a section of data (word or statement) a label using a word or short phrase taken from that section] (Wao, Dedrick & Ferron 2011). This technique of coding ensures that the concepts remain as close as possible to participants’ own words. Next, we constantly compared each code with the preceding ones to avoid redundancy. The third step involved aggregating codes containing statements similar in content to form themes. Finally, we computed theme frequency (i.e. the number of participants who cited significant statements classified under a particular theme, expressed as a percentage of all participants). The strategy of quantitising qualitative data (i.e. computing theme frequency) allowed us to glean additional information from the qualitative data, thus enhancing the credibility of our findings. Microsoft Excel was used to manage qualitative data, whereas NVivo was used to perform qualitative data analysis.
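As a simple illustration of this quantitising step, the sketch below computes theme frequency in R from a hypothetical long-format table of coded statements (in the study itself, Excel was used to manage the qualitative data and NVivo to analyse them).

library(dplyr)

# Hypothetical coded statements: one row per (participant, theme) occurrence.
codes <- data.frame(
  participant = c("P01", "P01", "P02", "P03", "P03", "P04"),
  theme       = c("clarity of content delivery", "use of software",
                  "clarity of content delivery", "use of software",
                  "more practical sessions", "clarity of content delivery")
)
n_total <- 35  # all training participants

theme_frequency <- codes %>%
  distinct(participant, theme) %>%   # count each participant once per theme
  count(theme, name = "n_citing") %>%
  mutate(pct_of_all_participants = 100 * n_citing / n_total)

theme_frequency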

Results

A total of 35 participants from diverse backgrounds took part in the training (43% women; 31% students, 21% faculty members and 41% in-service M&E professionals; 31% from academic institutions, 31% from public institutions and 31% from NGOs). Participants spent an average of 24 min completing the survey (range: 9–56 min). Although all participants were expected to participate in all modules, a few were about 5–10 min late for some modules.

Quantitative phase
Participants’ knowledge increased in each training segment

The results of the meta-analysis showed an overall statistically significant increase in mean change in knowledge following the training [standardised mean difference (SMD) = 1.60, 95% CI: 1.03, 2.17] (Figure 1). For each segment, the mean change in knowledge was as follows: MMR (SMD = 1.92, CI: 1.28, 2.57), SRMA (SMD = 1.86, CI: 1.19, 2.54) and M&E (SMD = 1.60, CI: 1.03, 2.17). There was moderate heterogeneity amongst training segments (I² = 56.7%).

FIGURE 1: Effect of short course training on knowledge (N = 35 participants).

Change in knowledge did not differ across employment settings

The results of the unconditional means model showed that for each segment, the average change in knowledge did not differ across employment settings (τ00); however, there was significant variation (σ²) amongst participants within employment settings (Table 1, top panel). Estimates of the intra-class correlation [ρ = τ00/(τ00 + σ²)], the proportion of the total variance attributable to differences between employment settings, suggested clustering of knowledge change within settings. Overall, the multilevel modelling results showed a significant increase in knowledge across employment settings.

TABLE 1: Parameter estimates and standard errors for modelling change in knowledge following short course training (N = 35 unique participants).
Factors associated with change in knowledge

When other factors were added to the unconditional model (Table 1, bottom panel), two factors were associated with change in knowledge for the MMR segment: whether the objective was considered SMART and whether it belonged to a higher level of Bloom’s taxonomy. For the M&E segment, the extent to which the session’s content was perceived as applicable to work was associated with change in knowledge. For the SRMA segment, we did not find evidence of any factor associated with change in knowledge (Table 1, bottom panel).

Qualitative phase

The majority of participants who consented to participate in the three sections of the evaluation survey responded to the open-ended questions (MMR 85%, SRMA 96% and M&E 100%).

Participants’ self-stated objectives for attending the training

We assumed that the content of the training was relevant and time-bound within the training lifespan. Thus, determination of whether an objective was SMART depended on evidence of its being specific, measurable and attainable. Thematic analysis revealed that only a few participants stated objectives which were classified as SMART (MMR 35%, SRMA 26% and M&E 30%). For example, ‘… to integrate quantitative and qualitative methods of analyses’ (MMR), ‘… to collect and summarise all empirical evidence’ (SRMA) and ‘… to conduct M&E quantitative analysis’ (M&E) were classified as SMART objectives because action verbs (italicised) were used to describe the activity. The vast majority of participants used less precise action verbs (e.g. ‘know’, ‘understand’, ‘appreciate’ and ‘be aware of’), which are open to multiple interpretations and difficult to measure. Similarly, participants tended to use vague phrases (italicised): ‘To be able to get a lot wiser in the programmes or software used to analyse data’ and ‘To be informed and have a broader view on the available research methods and their applications’.

Participants’ self-stated objectives classified as belonging to a higher level of Bloom’s taxonomy varied across segments (MMR 43%, SRMA 30% and M&E 55%). Examples included ‘To learn MMR skills which in turn will assist me in the supervision of undergraduate and postgraduate academic projects’ (MMR), ‘Gain more skills in SR and MA especially as it applies to M&E’ (SRMA) and ‘To equip myself with the M&E skills that I can use in my career today and in future’ (M&E).

Changes in practice participants intended to make following the training

The majority of participants made statements suggesting that they intended to apply what they learned in their work (MMR 67%, SRMA 57% and M&E 83%). For example, MMR (‘I plan to use MMR in conducting programme evaluations and during my PHD,…I’ll apply MMR in my literature review’ and ‘Better equipped to conduct technical reviews of evaluation proposal, reviewing academic pieces of work (theses, abstracts)…better equipped to facilitate technical evaluation methods training..’), SRMA (‘Encourage more students to consider conducting a SR and MA as their thesis or dissertation if this sparks interest or appropriate for their chosen topic’ and ‘I am going to use results from existing systematic reviews more effectively at work … I plan to conduct a more structured literature review for my master’s thesis which is ongoing based on the skills I acquired’) and M&E (‘Use of work plans in my daily office tasks and projects’, ‘I am now in a better position of writing good frameworks for proposals’ and ‘Endeavour to use M&E tools to structure all the M&E activities within the projects I am in charge in the organisation’). Only six participants cited potential barriers, which we broadly categorised as institutional (‘A key barrier is a lack of buy in or mandate limitations – I will have to negotiate with my boss to go slightly beyond the scope of my mandate since most evaluations are outsourced’) and availability of software (‘Software availability especially qualitative analysis software such as NVivo that is commercial’ and ‘Access to the various statistical packages could be a hindrance’).

What participants ‘liked most’ about the training

Although ‘clarity of content delivery’ and ‘applicability of knowledge acquired to work’ emerged as the most liked aspects of the training (i.e. highest average values in Table 2, last column), there were significant differences in the frequencies of themes describing most liked aspects of the training across training segments. For MMR, the most liked aspect was ‘clarity of content delivery’ (‘The clear explanations as to the difference in undertaking MMR as opposed to not’). For SRMA, it was the ‘use of technology (software) or tools’ (‘The use of a software to do SR & MA’, ‘Use of CMA for meta-analysis’, and ‘The use of software in SR’), whereas for M&E, it was ‘intriguing nature of the content presented’ (‘I particularly liked the new or emerging issues in M&E as it brought me up to speed with what is happening in M&E especially in LDCs’).

TABLE 2: Frequency of themes describing most liked aspects of the short course training.
Topics suggested for future trainings

For the MMR segment, participants suggested qualitative data analysis (‘More practice in analysing qualitative data’ and ‘Practical on qualitative analysis’) including the use of analysis software (‘software for creating qualitative research themes’ and ‘The analysis of QUAL data’). For the SRMA, more practicals using different software and emphasising data acquisition (‘How to easily identify the variable to pick for use in SR and MA’ and ‘Critical appraisal of studies’) and interpretation (‘Interpretation of the resultant findings of the two processes’) were cited. For the M&E, participants cited ‘Developing M&E work log frame and work plan’, advance M