About the Author(s)


Candice Morkel
Centre for Learning on Evaluation and Results (CLEAR), Wits School of Governance, University of the Witwatersrand, South Africa

Mokgophana Ramasobama
Centre for Learning on Evaluation and Results (CLEAR), Wits School of Governance, University of the Witwatersrand, South Africa

Citation


Morkel, C. & Ramasobama, M., 2017, ‘Measuring the effect of Evaluation Capacity Building Initiatives in Africa: A review’, African Evaluation Journal 5(1), a187. https://doi.org/10.4102/aej.v5i1.187

Original Research

Measuring the effect of Evaluation Capacity Building Initiatives in Africa: A review

Candice Morkel, Mokgophana Ramasobama

Received: 16 Nov. 2016; Accepted: 27 Feb. 2017; Published: 26 Apr. 2017

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: The growing demand for evidence to support policy decisions, guide resource allocation and demonstrate results has elevated the need for expertise in monitoring and evaluation (M&E). Despite the mushrooming of short courses in M&E, their impact on improving the capacity to meet the demand has not been adequately and comprehensively measured or evaluated. The purpose of this article was to highlight the need for improving the measurement of evaluation capacity building (ECB) to better understand what works in building M&E capacity in Africa.

Objectives: This article provides important insights into the need for empirical and rigorous measurement of ECB interventions and their role in strengthening evaluation practice.

Method: The study was primarily a desktop review of existing literature, corroborated by a survey of a few senior representatives of organisations responsible for capacity building across the African continent.

Results: The review found that there remains little empirical evidence that indicates whether ECB processes, activities and outcomes are ultimately effective. There is also very little empirical evidence that helps to interpret how change happens, and how this may shape ECB efforts. Training is acknowledged as only one element of ECB, and there is a need for a multi-pronged approach to ECB.

Conclusion: Much more empirical and rigorous research is needed to build a clear understanding of what conditions are needed in ECB in Africa to strengthen evaluation practice. This article is useful for guiding further research into measuring the effect of ECB, as well as implementing more effective models of ECB towards strengthening evaluation practice in Africa.

Introduction

Problem statement

Evaluation capacity building (ECB) has been recognised as a critical issue for development by international agreements such as the Accra Agenda for Action, the Cairo Consensus on Capacity Development as well as the Busan High-Level Forum (Lucas 2013). The realisation that development interventions need to be informed by good evidence of what works – and why – has meant that capacity building in monitoring and evaluation (M&E) has grown in leaps and bounds, largely due to the growth in the demand for evaluation by national as well as international donors, government agencies and others. Building effective M&E systems has long been recognised as a critical support mechanism for resource allocation, accountability, decision-making and programme design in the development sector (The World Bank in Ross & Hopson 2006). The growing demand for functional M&E systems in Africa has led to a rising need for skilled evaluators who are able to produce the kind of evidence that can reliably inform decision-making in policy and development programmes (Porter & Goldman 2013). There is also an emerging demand across the African continent (e.g. South Africa, Benin and Uganda) for evaluation in particular, apart from the demand for monitoring systems (Porter & Goldman 2013). The first decade of the 21st century has therefore seen the rise (and rise) of capacity building initiatives and an increasing commitment and enthusiasm for undertaking and measuring the impact of ECB (Preskill & Boyle 2008). What is yet insufficiently known is the precise relationship between ECB interventions and strengthening evaluation systems and practice, and the best methods of measuring this.

An oft-quoted definition of ECB by Stockdill, Baizerman and Compton (in Ross & Hopson 2006:124) points to the intended outcome of ECB, defining it as ‘the intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its uses routine’. Preskill and Boyle (2008) have also designed a comprehensive ECB model, which incorporates this aspect of ECB and its intents in the following definition:

ECB involves the design and implementation of teaching and learning strategies to help individuals, groups, and organisations, learn about what constitutes effective, useful, and professional evaluation practice. The ultimate goal of ECB is sustainable evaluation practice – where members continuously ask questions that matter, collect, analyze and interpret data, and use evaluation findings for decision-making and action. For evaluation practice to be sustained, participants must be provided with leadership support, incentives, resources, and opportunities to transfer their learning about evaluation to their everyday work. Sustainable evaluation practice also requires the development of systems, processes, policies, and plans that help embed evaluation work into the way the organization accomplishes its mission and strategic goals. (p. 444)

The authors designate 10 different strategies, of which training is one, in their taxonomy of factors to consider when embarking on any ECB intervention (Preskill & Boyle 2008). ECB is therefore multidimensional, and training is only one component thereof. Nonetheless, the mushrooming of M&E training courses in academic institutions, private consultancies and other institutions across the African continent has been significant, and training forms the cornerstone of any ECB intervention, such that ‘capacity building’ is often used synonymously with the concept of ‘training’.

The challenge in the evaluation community is that there is still no globally ratified consensus on guidelines for desired evaluator practice and essential evaluator competencies that are used consistently in ECB in Africa (despite the existence of legitimate competencies and standards produced by, for example, the International Development Evaluation Association, the African Evaluation Association, the Joint Committee on Standards for Educational Evaluation and others). There is also no professional body overseeing the content of M&E training material on the African continent. The quality, content and composition of training courses vary between institutions. Of the many training courses that are undertaken every year, it is not known whether these assist individuals to become better evaluators, or whether they strengthen organisational evaluation practice. There is little empirical evidence available that tests whether ECB processes, activities and outcomes in general are ultimately effective (Preskill & Boyle 2008; Tarsilla 2014). Post-training evaluation forms are often completed; however, these focus on the quality of the course and perceptions of the facilitators rather than on assessing changes in knowledge, attitudes, practice or behaviour. In recognition of this challenge, this article seeks to contribute to the research agenda by providing newer reflections on moving beyond the challenges in measuring the effectiveness of ECB interventions (in particular training) in strengthening evaluation practice, towards the enhanced achievement of development outcomes on the African continent.

The problem that this review sought to unpack was the absence of adequate attempts to measure the effectiveness of ECB in strengthening M&E capacity on the African continent. The problem is compounded by the lack of a single, agreed-upon definition of what constitutes M&E capacity. This further problematises the measurement of capacity: which indicators to use, which tools can be used to measure it, the level (individual or organisational) at which capacity is measured, as well as the absence of baseline indicators that tell us what capacity existed before training began.

This review therefore explores the challenges facing ECB and measuring the effect of ECB in Africa through a document study and a survey of regional institutions (including the Southern African Development Community, the Economic Community of West African States, the African Union Commission and the Economic Community of Central African States, among others) from parts of the central, east and west African ECB community. It also makes recommendations for moving beyond the status quo in the design and measurement of ECB initiatives on the African continent.

Purpose of the review

The primary purpose of this review is to explore the extent to which the effectiveness of ECB (including its contribution to strengthening evaluation practice) is adequately measured on the African continent.

Background

Capacity building initiatives in Africa have traditionally been driven by donor agencies since the inception of development aid, and take various forms. Most commonly, they include the training of individuals as a basic tenet, supported by the provision of technical support. ECB has experienced rapid growth within public sector organisations on an international scale (Naccarella et al. 2007; Tarsilla 2014). As with capacity building in general, a large part of ECB comprises training. Tarsilla (2014) further argues that the short-term evaluation training programmes funded by international agencies in developing countries rarely respond to local trainees’ and organisations’ interests and needs. He also emphasises the complementarity – and yet distinction – between the terms ECB and evaluation capacity development (ECD). It appears that short course provision has become the common mode of training in ECB, regardless of which institution is the provider. Despite recent attempts to validate and develop common measures of ECB (Labin 2014; Nielsen, Lamier & Skov 2011), the impact of short courses and training within the ECB suite of interventions has not been adequately and comprehensively measured or evaluated. There is broad agreement that more research is needed to develop an empirical basis to support efforts to embark on such measurement (Nielsen et al. 2011; Wandersman 2014).

It is important to interrogate the effects of capacity building interventions in Africa, especially in light of the mushrooming of M&E training offerings across the continent. The questions that remain unanswered include:

  • What role does training play in ECB in Africa?
  • How do we ascertain that the training to build evaluation capacity is yielding the intended results?
  • What are the challenges to measuring the effect of training on building evaluation capacity and how might they be addressed?

In response to the widespread agreement in the evaluation sector that there is a research gap in what constitutes evaluation ‘capacity’, and how the various factors that constitute ECB inter-relate, Taylor-Ritzler et al. (2013) developed the Evaluation Capacity Assessment Instrument (ECAI). It was developed in recognition of the need for a ‘validated instrument’ to enable practitioners and scholars to measure the results of ECB efforts, and to use these results to shape future ECB efforts (Suarez-Balcazar & Taylor-Ritzler 2013:190). The instrument, consisting of 68 items, measures participants’ perceptions of their ability to mainstream and use evaluation findings (Suarez-Balcazar & Taylor-Ritzler 2013). A number of other tools have also been developed, for example, the Capacity and Organizational Readiness for Evaluation (CORE) tool (Morariu, Reed & Brennan 2011) and the Building Organisational Evaluation Capacity checklist (Volkov & King 2007); however, these do not address the question of whether or how ECB efforts strengthen evaluation capacity. There is often an underlying assumption that capacity building interventions have a direct link with the performance of organisations or individuals. Thus far, the research is nascent: very little empirical evidence has been established to correlate the two, or even to confirm the effects of capacity building in general (Sobeck & Agius 2007:237; Suarez-Balcazar & Taylor-Ritzler 2013).

The growing demand for good governance and accountability on the African continent demands that we embark on a path to adequately measure the effect of capacity building interventions. Considering the protracted global economic downturn and its impact on the availability of financial resources within developing countries (in particular in Africa) to address the myriad development challenges they face, there is a need to ensure that any expenditure on interventions results in improvements in development results.

The specific objectives of the review were to explore:

  • The various approaches to defining and measuring ECB.
  • The extent of, and ways in which, selected key institutions on the African continent measure the effect of training on strengthening evaluation capacity.
  • Challenges experienced in the measurement of the effect of training on ECB in Africa, and recommendations for how these may be addressed.

Literature review

The literature review focuses on the following key concepts: evaluation capacity, ECB and the measurement of both. It further explores the aspect of competencies in evaluation, and how this interfaces with the concepts of capacity and capacity building, which form the backdrop to an exploration of the efforts at, and complexities of, measuring ECB interventions.

Defining evaluation capacity and evaluation capacity building

Significant resources have been invested in building evaluation capacity (particularly by international donor agencies) in Africa; however, it is not clear whether these efforts are yielding results (Tarsilla 2014). One possible source of the problem is what Nielsen et al. (2011:326) refer to as ‘conceptual ambiguity’ around evaluation capacity. The authors therefore embarked on a study of a Danish model and measurement tool used to map evaluation capacity in the public sector, in order to address the plurality of the concept of evaluation capacity. The following elements of a definition of evaluation capacity (at an organisational level) emerged from this study (Nielsen et al. 2011):

  • The ability to sustain the capacity to use evaluative knowledge in decision-making.
  • Aligning the organisation’s functional elements in a way that is coherent, and supports effectiveness.
  • Allowing evaluative knowledge in its varied forms to be used in all phases of the policy cycle.
  • The ability to use evaluation knowledge at several levels of practice and decision-making to improve organisational effectiveness.

This study challenged the notion that evaluation capacity is limited to the ability to conduct evaluations and use the findings appropriately (Nielsen et al. 2011).

Brinkerhoff and Morgan (2010:3) define capacity at a systems level as ‘the evolving combination of attributes, capabilities and relationships that enables a system to exist, adapt and perform’. Horton (1999:156) similarly describes capacity as ‘the ability of individuals and organizations or organizational units to perform functions effectively, efficiently and sustainably’. In all of these definitions, capacity is not confined to the individual, while the ability to perform is identified as an important characteristic. An assumption is often made that capacity leads to an improvement in performance; however, LaFond, Brown and Macintyre (2002) posit that this is not always the case. Morgan (in Brinkerhoff & Morgan 2010:2) notes that ‘the concept of capacity seems to exist somewhere in a nether world between individual training and national development’. The authors, in describing the European Centre for Development Policy Management’s ‘5-C’ approach, note five distinct individual capabilities which, when developed adequately, converge to form capacity at an organisational level. The five capabilities are:

the capability to commit and engage; the capability to carry out technical, service delivery and logistical tasks; the capability to relate and attract support; the capability to adapt and self-renew; the capability to balance diversity and coherence. (Brinkerhoff & Morgan 2010:2)

The authors further posit that capacity is a ‘latent phenomenon’, which only becomes explicit once it is exercised in order to attain a certain outcome. Evaluation capacity is therefore complex and multifaceted to define, encompassing the provision of financial as well as other resources (Crisp, Swerissen & Duckett 2000).

Building and measuring capacity is equally complex: it can take many forms and may occur at an individual, organisational or systems level. LaFond et al. (2002:5) describe capacity building as a ‘process or activity that improves the ability of a person or entity to carry out stated objectives’. Horton (1999), on the other hand, argues that capacity building is something akin to ‘social experimentation’. A study conducted by Tarsilla (2014), involving a sample of 21 international development organisations that perform ECB activities across the African continent, measured whether their interventions were yielding the desired results. Among the findings were the following: 92% of the respondents (n = 33) could not provide any evidence of the suitability of their ECB programmes, and 89% of the respondents (n = 31) cited a need to apply practical evaluation theories and principles as opposed to the current sporadic evaluation training initiatives. These results are concerning in light of the assumption that ECB is good and leads to positive outcomes.

The lack of consensus in defining evaluation capacity and capacity building poses challenges in the measurement of ECB interventions.

Training, evaluation capacity building and the building of individual competencies

The study conducted by Tarsilla (2014) points out that international development partners have invested significant financial resources in ECB in Africa, without satisfactory results. One of the challenges highlighted by the author is a continued focus on short-term courses despite evidence of their deficiencies in building capacity (Tarsilla 2014). Preskill and Boyle (2008) identify training as only one of 10 strategies within the ambit of ECB, and it is largely agreed that training, on its own, is insufficient as a comprehensive ECB strategy. Tarsilla, in support of the Organisation for Economic Cooperation and Development (OECD) definition of evaluation capacity ‘development’, sees it as ‘a process whereby individuals, organisations and society as a whole embarks on a path to strengthen, create and maintain evaluation capacities over a period of time’. This definition supports the notion that training is only one aspect of a much larger process of building evaluation capacity, one focused on the much broader goal of effective development results.

Davies and Mackay (2014:419) assert that ‘quality training opportunities for evaluators will always be important to the evaluation profession’. While this is true, there is very little consensus on what evaluators need to know and understand in order to be considered ‘trained’. The literature is clear that training programmes are limited in what they can achieve in strengthening evaluation practice: although short-term courses may be able to upgrade individual capacities in specific areas of learning, significant and lasting change is not possible with training alone (Dillman 2012; Horton 1999:182; Preskill & Boyle 2008; Suarez-Balcazar & Taylor-Ritzler 2013). Case studies of training provision may provide some insight here and help to answer the need for more evidence; the challenge, however, is that the learning may be very specific to the cases and context. A useful case study by Cohen (2006) illuminates how an embedded approach, in which capacity building took on multiple forms (facilitation, coaching and mentorship) throughout the ECB intervention, produced positive results in strengthening evaluation capacity. The intensity of this strategy may not be feasible given the multiplying demand for ECB across the African continent, although some of the lessons regarding the ‘ingredients’ of effective capacity building are useful. These include, for example, the need to assess organisational readiness, the importance of teaching in context, providing opportunities for reflection and maintaining a partnership stance (Cohen 2006).

Dillman (2012:280) argues that there is a need to employ a ‘deliberate and intentional attitude’ in how education and training interventions are designed. A combination of classroom-based and field-based work must be employed in order to build evaluation competencies. Coursework remains important in the transfer of knowledge (theory); however, to build on this knowledge and develop competencies that may strengthen evaluation practice, an eclectic mix of professional development, fieldwork and mentorship is needed (Dillman 2012). This can be supplemented by innovative methods such as the evaluation learning circles developed by Carolyn Cohen (2006), a long-term immersion experience involving participant-led learning sessions, which resulted in very positive learning outcomes.

Training therefore remains important, but not in isolation. A study conducted on ECB interventions in 13 community-based organisations revealed that ECB does increase organisational evaluation capacity (Stevenson et al. 2002). It is significant, however, that the intervention consisted of more than just training, and included a needs assessment, the identification of ‘exemplary’ evaluations that would be used as models for other programmes, as well as on-site and telephonic technical assistance (Stevenson et al. 2002). Dillman (2012) concurs that multiple and varied training experiences are needed to develop essential evaluator competencies. Although training will always be a very significant part of ECB efforts, research trends indicate that capacity is a multidimensional concept, and ECB therefore needs to incorporate multiple approaches to be successful (cf. Naccarella et al. 2007; Preskill & Boyle 2008; Suarez-Balcazar & Taylor-Ritzler 2013).

Podems (2014) contributes to the discourse by positing that a list of competencies is a useful guide for those vested with the responsibility to design, provide guidance on and teach evaluation courses. The author (Podems 2014:132) states that ‘the assumption is that a person having those core skills would be a competent evaluator who would have the abilities to implement feasible and credible evaluations’. It is further asserted that a competencies guide will provide direction on the areas of knowledge and skills to be improved, supporting practitioners to use the list as a guideline to nurture the evaluation profession rather than as a prerequisite for entry into the profession. The Department of Planning, Monitoring and Evaluation (DPME) in the Presidency in South Africa currently uses such a competency guideline to develop evaluation capacity short courses (Podems 2014). There are ongoing debates between ‘pro-’ and ‘anti-competency’ scholars: those in favour of competency guidelines are of the view that such an initiative will eliminate unqualified evaluators who currently parade as consultants with the sole objective of increasing their bottom line. Dillman (2012) offers the caution that, despite the literature available on evaluator competencies and training programmes, there is very little evidence of how these evaluation competencies are gained. Podems and Dillman both acknowledge the danger that such guidelines may have the unintended consequence of being perceived as a barrier for aspiring new professional evaluation entrants, because they can be used to determine who is and is not an evaluator.

Training, and by extension efforts at building individual competencies, can only go as far as imparting skills and facilitating knowledge transfer; it cannot guarantee the capacity to implement these. This has implications for measurement and attribution, and for how the evaluation community may determine whether a single training intervention can build capacity in a comprehensive sense.

Measuring evaluation capacity building and the use of findings

Challenges in defining capacity and ECB invariably translate into challenges in measuring ECB interventions and their effects. For example, Lucas (2013:5) lists the following variables that influence the measurement of capacity for M&E: attribution, timeframes, multiple types of change, multiple actors, the identification and interpretation of change and boundary-setting. Research around ECB has, over the last decade and more, attempted to address the challenges and gaps of ECB measurement (cf. LaFond et al. 2002; Lucas 2013; Sobeck & Agius 2007; Wandersman 2014); however, the problem in practice is that what is measured is not helpful and does not contribute to advancing the discourse on ECB. Taylor-Ritzler et al. (2013) acknowledge that there is a growing interest among practitioners and scholars in measuring the impact of ECB in organisations. The success of training interventions is usually measured using self-reporting tools that answer questions about participant satisfaction, knowledge gained and the utilisation of the knowledge and skills acquired (Medina et al. 2015). However, these are inadequate, as they do not answer the question of impact (Medina et al. 2015). In a review of evaluator skill acquisition, Dillman (2012) identified ‘striking differences’ between the acquisition of evaluation skills and competencies (e.g. effective communication skills, project management) and the acquisition of evaluation knowledge. One of these differences is that fieldwork and mentorship were perceived as having had the greatest effect on skill acquisition, while coursework was rated as most impactful in contributing to the acquisition of theoretical knowledge (Dillman 2012). This was also confirmed in a tracer review of 15 in-service training courses conducted for the South African DPME in 2015 (CLEAR-AA 2015). The tracer study revealed that ‘participants gained most in the transfer of knowledge. … The courses were found to be less effective on [transferring knowledge on] managing evaluations’ (CLEAR-AA 2015:5).

Tracer studies, if conducted empirically and designed well, could shed light on the impact of training (in particular) on knowledge acquisition, skills and practices. As very little research could be found on tracer studies in the ECB sector, ECB practitioners could learn from research conducted in other sectors, such as education, health and behavioural sciences. Studies in these sectors have shown that tracer studies ‘can be useful for gathering information that positively impacts on training and policy’ (Mubuuke, Businge & Kiguli-Malwadde 2014:55). One such example is research that was conducted on the impact of multicultural counselling training using a self-administered written pre-post-test (D’Andrea, Daniels & Heck 1991). The instrument tested participants’ own perceptions of their post-training levels of awareness, knowledge and skills. Using quantitative data analysis, the findings suggested that the training may have substantially improved post-training levels of awareness, knowledge and skills (D’Andrea et al. 1991). More empirical research is needed in the ECB sector to reliably test whether training can show improvements in post-training levels of evaluation capacity.
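To make the idea of a pre-post measurement concrete, the sketch below shows one minimal way such an analysis could be run. It is illustrative only: the scores, the rating scale and the use of a paired t-test are assumptions made for the example, and do not reproduce the D’Andrea et al. (1991) instrument, the CLEAR-AA tracer study or any dataset cited in this article.

```python
# Minimal sketch of a pre-post analysis of self-rated evaluation capacity.
# The scores, scale and variable names are hypothetical illustrations only.
from scipy import stats

# Hypothetical self-ratings (1-5 scale) from ten trainees,
# collected before and after a short M&E course.
pre_scores  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]
post_scores = [3, 4, 3, 3, 4, 3, 2, 4, 3, 3]

# Paired t-test: are post-training ratings systematically higher?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

mean_gain = sum(p - q for p, q in zip(post_scores, pre_scores)) / len(pre_scores)
print(f"Mean self-rated gain: {mean_gain:.2f} points")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```

A design of this kind, without a comparison group, can show whether self-ratings rose after training but cannot attribute that change to the training itself, which is one reason why more rigorous, validated instruments are called for in this article.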

Crisp et al. (2000) make the observation that establishing the effect of capacity building requires a range of strategies, including quantitative measures of involvement (i.e. to what extent and how individuals were engaged in the capacity building process), as well as more subjective qualitative measures (how this was experienced). These studies further highlight the complexity of studying and measuring the effect of ECB on strengthening evaluation practice, as even single interventions (such as training) may be further de-constructed into modalities and their specific effects. This also supports the need for validated, rigorous and empirical instruments to measure the effect of ECB interventions, and is currently a critical research gap.

Beyond the literature review

The review was conducted through a literature study of ECB and its measurement, focusing specifically on the adequacy of training as one aspect of ECB, supported by a survey of key strategic institutions operating on the African continent and linked to ECB activities. The survey used a small convenience sample of key stakeholders, including organisations responsible for capacity building or specialising specifically in ECB, who were invited to complete a self-administered questionnaire in the context of their responsibility for, or linkages to, ECB interventions across the African continent. The organisations were participants in a workshop, hosted by the African Capacity Building Foundation (ACBF) in October 2016, to identify the potential content, design and implementation modalities for an Africa-wide M&E capacity building programme. CLEAR-AA had also been invited to the workshop. It was a closed meeting, attended by a select number of organisations actively involved in regional integration, capacity building or M&E activities across the African continent. Permission was requested from the ACBF to distribute the survey instrument, and all of the organisations present were invited to complete the survey.

The survey instrument was developed using the following thematic areas as the point of departure: (1) the extent of, and ways in which, selected key institutions on the African continent measure the effect of training on strengthening evaluation capacity and (2) challenges and recommendations in the measurement of the effect of training on ECB. The items on the survey included biographical information on the organisation represented by each respondent, including the countries within which it operated. This provided insight into the ‘footprint’ of the organisations on the African continent. In addition to the biographical information, the survey consisted of eight closed-ended items and three open-ended questions covering the following issues: the extent to which organisations were involved in M&E training, whether, as well as how, organisations measured the effect of their training (knowledge, behaviours, attitudes, practices) and challenges and recommendations to measuring the effect of ECB efforts on the African continent.

In total, 13 surveys were returned by senior representatives of 12 regional organisations, listed in Table 1 (numbered 1–11, as one respondent did not provide the name of the organisation they represented).

TABLE 1: Survey respondents.

The snap survey provided perspectives of senior staff members of key regional organisations who are extensively involved in the coordination or implementation of capacity building programmes in Africa in general and ECB specifically. Their views or responses, however, might not be representative of their entire respective organisations. Perspectives were sought around training and its effect on strengthening evaluation capacity. The results of the survey were analysed using simple descriptive statistics (tabulated in the results section of this article), due to the sample size being too small to draw any inferences to a larger population. Qualitative data were analysed using content analysis, as no preconceptions about the categories of findings were articulated ahead of the analysis. The findings emerging from the survey results were corroborated with the findings from the literature review, and the results categorised into themes and discussed in response to the stated objectives of the review.
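As an illustration of the simple descriptive tabulation described above, the sketch below counts Likert-type responses per survey item. The data frame, item names and response categories are hypothetical placeholders and do not reproduce the actual survey instrument or its responses.

```python
# Minimal sketch of the simple descriptive tabulation described above.
# The responses below are hypothetical; they do not reproduce the survey data.
import pandas as pd

responses = pd.DataFrame({
    "conducts_training": ["Always", "Often", "Sometimes", "Rarely", "Often",
                          "Sometimes", "Always", "Sometimes", "Rarely", "Often",
                          "Sometimes", "Never", "Often"],
    "measures_effect":   ["Sometimes", "Rarely", "Rarely", "Sometimes", "Often",
                          "Rarely", "Sometimes", "Rarely", "Sometimes", "Always",
                          "Rarely", "Never", "Sometimes"],
})

order = ["Always", "Often", "Sometimes", "Rarely", "Never"]
for item in responses.columns:
    # Frequency count per response category, in a fixed order.
    counts = responses[item].value_counts().reindex(order, fill_value=0)
    print(f"\n{item} (n = {len(responses)})")
    print(counts.to_string())
```

Frequency counts of this kind underpin simple descriptive tables of the sort reported in the results section.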

Trustworthiness

Representativeness

There was no expectation to generalise or transfer the results of this review beyond the organisations represented in the sample. Convenience sampling, as a non-probability sampling method, precludes the drawing of inferences to a larger population. However, corroborating the survey results with the findings emerging from the literature studied allowed for the drawing of some general conclusions about the measurement of the effect of training on evaluation practice, and for extrapolating these (in broad terms) across the African continent.

Survey results

The extent to which the effect of training is measured

The survey results indicated that training remains an important component of ECB in Africa. According to Table 2, almost all the respondents surveyed (12 out of 13) conduct some kind of training in M&E on the African continent, and 5 out of 13 always or often conduct training.

TABLE 2: Number of respondents who conduct training in M&E.

Eight of the 12 respondents surveyed do M&E-related work, while one has M&E functions that may not include a training component as a major function. Those who do not specialise in M&E capacity building (5 out of 13) listed the following as their main functions:

  • Regional coordination and integration
  • Economic integration of member states
  • Capacity building in policy management
  • M&E related services
  • Project monitoring

The survey did not request respondents to explain whether their training interventions were part of broader capacity building initiatives.

The survey results also revealed that very few of the respondents who conduct M&E training consistently measure the effect of their efforts. Table 3 illustrates that most of the respondents surveyed (10 out of 13) reported that they only rarely or sometimes measure the effect of their training.

TABLE 3: Number of organisations who measure the effect of training.

Approaches to measurement and the use of findings

Table 4 illustrates that, of those who do measure the effect of their training in some way, most (7 to 8 out of 13) reported that they:

  • Only sometimes or rarely measure the effect of training on behavioural change (7 out of 13).
  • Only sometimes or rarely measure the effect of training on knowledge (8 out of 13).
  • Rarely measure the effect of training on attitudes (8 out of 13).
  • Rarely measure whether participants use the training in their day-to-day work (8 out of 13).

Respondents listed the following as the various approaches and methods used to measure the effect of their training interventions:

  • Questionnaires and evaluation forms
  • Interviews (individual and group)
  • Simple chi-squared test
  • Treatment and control (experimental design)
  • Pre-post-test (before-after comparison)
  • Most-significant change (MSC) technique
  • Individual follow-up
  • Site visits

TABLE 4: What gets measured.
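Two of the methods listed above, a treatment-and-control comparison and a simple chi-squared test, can be combined in a basic analysis of whether trained participants apply M&E skills more often than an untrained comparison group. The sketch below is a hypothetical illustration; the group sizes, counts and outcome definition are assumptions and are not drawn from any respondent’s data.

```python
# Minimal sketch combining two of the listed methods: a treatment-and-control
# comparison analysed with a simple chi-squared test. All counts are
# hypothetical; they are not drawn from the survey or any cited study.
from scipy.stats import chi2_contingency

# Rows: trained group vs untrained comparison group.
# Columns: participants observed applying M&E skills at work vs not.
contingency = [
    [18, 12],   # trained:   18 applying, 12 not applying
    [ 9, 21],   # untrained:  9 applying, 21 not applying
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```

Even this simple design presupposes a credible comparison group and follow-up data on actual practice, which respondents identified as a key data collection challenge.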

In general, very few respondents regularly use the findings of their abovementioned pre-training or post-training measurement efforts. Only four respondents reported that they always or often use the findings of the studies they conduct to adjust their training interventions (Table 5). More than half of the respondents (7 out of 13) only sometimes or rarely do so.

TABLE 5: Whether findings are used to adjust training interventions.

This could be related to the challenges highlighted by respondents around the measurement of their training programmes, which included inadequate data, limited resources as well as poor attention paid to evaluating the impact of ECB interventions. These challenges are further discussed below.

Challenges and recommendations in measuring the effect of training

The challenges raised by survey respondents can be categorised into four broad areas: challenges related to planning and design, weaknesses in data collection and information management, resource challenges and challenges in the capacity to measure the effect of training programmes. In terms of planning and design, survey respondents highlighted the challenge of weak planning of ECB impact assessments in particular. It was emphasised that not enough attention is paid to measuring the effect or impact of ECB interventions. One reason that was provided for this is that donor interests are in what was termed ‘programmatic initiatives’, and therefore activities related to measuring the impact of the training programmes are not prioritised. At least one respondent pointed out that the selection of appropriate indicators to measure the effect of training was a challenge in measuring the effect or impact of ECB.

In relation to data collection and information management, respondents highlighted that tracing beneficiaries (i.e. training participants) in order to measure the effect of the training intervention posed a challenge. For those who managed to collect information from participants, the low response rate as well as what were held to be ‘poor/inaccurate responses to questions’ were added to the list of challenges. Inadequate monitoring data, the loss of institutional memory as well as inadequate knowledge management were also listed as areas of concern. The third category of challenges revolves around the availability of resources to support the measurement of training effects. This included both human and financial resource constraints for some respondents, where one respondent noted ‘resources in terms of skills and financial to monitor impact studies of any kind’ as a challenge. One respondent also noted that the capacity to measure the effect of training posed a challenge to their ability to do any measurement of this kind.

The survey respondents also made a number of suggestions to address these challenges. These included improving the availability of proper monitoring data as well as improved knowledge management. At least one respondent suggested that tracer studies and some form of post-training follow-up must be introduced. More rigorous methods of measuring the effect of training were also recommended, including the use of impact evaluations.

Discussion

The primary objectives of this article were to: (1) explore the extent of, and ways in which, selected key institutions on the African continent measure the effect of training on strengthening evaluation capacity and (2) discuss the challenges of and make recommendations for improving the measurement of the effect of training on ECB in Africa. Each of these is discussed separately below.

The extent of, and ways in which, selected key institutions on the African continent measure the effect of training on strengthening evaluation capacity

The findings of the review, supported by literature (Tarsilla 2014), revealed that the measurement of the impact of training as an element of ECB is not commonplace, and that it is difficult to find empirical research on how the acquisition of skills and competencies takes place, or how to measure it. The survey results indicated that there are very limited attempts by the institutions sampled, which have quite a significant training ‘footprint’ in Africa, to measure the effect of training on behavioural change, knowledge, attitudes and practices. Some of the approaches and methods listed by a small number of respondents included questionnaires, interviews, experimental designs (i.e. treatment vs control groups), pre-post-tests (before-after comparison), focus groups as well as individual follow-up. These methods are mostly used to determine effects at the individual level. However, there is widespread agreement that capacity is acquired at multiple levels: individual, organisational and systems (LaFond et al. 2002; Taylor-Ritzler et al. 2013). Preskill and Boyle’s (2008) model of ECB is instructive in this sense, as it provides an overarching approach that incorporates all three levels of learning. This has implications for how the effects of any kind of ECB intervention are actually measured. If measurement is confined to the individual level (as is the case with the respondents sampled in this review), important findings around organisational and systemic change may be missed. Strengthening evaluation practice in Africa should not be confined to building the capacity of individuals to become better M&E practitioners, but should be linked to a broader programme of transformation of organisations and systems to ensure better use of evidence for better development results. Perhaps the issue is not so much improving the measurement of training as improving the design of ECB interventions, of which training should form one component.

Capacity, in all its complexity, is in fact hard to measure (Tarsilla 2014). The findings also revealed that it may be easier to measure the acquisition of technical knowledge by individuals, but much more difficult to measure the effect of evaluation training on attitudes, behaviours and practices. This may explain why the institutions surveyed in this review were more likely to measure knowledge acquisition than changes in attitudes and behaviours. According to Dillman (2012), coursework has been rated as the most impactful in contributing to the acquisition of theoretical knowledge. Both the measurement and acquisition of knowledge are easier than the more complex measurement of changes in behaviour and practices, as illustrated by Kirkpatrick’s four-level model of assessing training effectiveness (Bates 2004). In this model, the measurement of skills, knowledge and attitudes is located at the second level (the evaluation of learning), which is less complex and difficult to measure than the subsequent levels of the transfer of learning and the achievement of results (Bates 2004).

The intersecting variables (such as incentives to learn, institutional support, etc.) that play a role in the extent to which evaluation practice is strengthened in individuals and organisations contribute to this complexity. The review found that there is little empirical evidence available that tests whether ECB processes, activities and outcomes in general are ultimately effective (Preskill & Boyle 2008). There is also very little empirical evidence that helps to interpret how change happens, and how this may shape ECB efforts in the future. The critical pathways that lead from ECB to strengthened evaluation practice are often hidden (at best), or not measured at all (at worst). There is broad consensus that training is only one element of ECB, and that there is a need for a multi-pronged approach to ECB, in line with what some authors (Tarsilla 2014) define as ECD.

Even within the formal academic education sector, there are hardly any evaluations of changes in knowledge, attitudes and behaviour as a result of teaching and learning that the ECB sector can learn from (Burgess & Carpenter 2008). The measurement of short-term effects has been the focus of most donor agencies involved in ECB in Africa, while the measurement of long-term results has been less forthcoming (Tarsilla 2014). Much more needs to be done around pedagogic research and ECB, as well as the development of methodologies to evaluate strengthened evaluation capacities and capabilities, which are not well developed (LaFond et al. 2002).

The need for more empirical research in this area of ECB could perhaps be met through the use of randomised control trials (RCTs) (Suarez-Balcazar & Taylor-Ritzler 2013). The argument against the use of RCTs in ECB evaluation is that the variables are too vast to control or to offer predictions, while the costs are prohibitive (Tarsilla 2014). The use and comparison of multiple case studies may be far better suited to a deeper understanding of the specific conditions that facilitate improved evaluation capacity in individuals, organisations and systems (Tarsilla 2014). Some successful research has been done on the success case method, which demonstrated that evaluating a small number of cases for impact can provide justification for the future resourcing of successful training programmes (Medina et al. 2015). Overall, there is little consensus on which approaches work, and work best, in assessing the effectiveness of capacity building efforts (Burgess & Carpenter 2008), and more research is needed in this area.

The tracer study conducted by CLEAR-AA in 2015 on the perceptions of participants about the effect of in-service evaluation training conducted on behalf of the South African DPME, revealed that the highest perceived gains were made in the area of knowledge transfer and less in the management of evaluations (CLEAR-AA 2015). The survey findings give an indication that some institutions are embarking on post-training evaluation that goes beyond the testing of perceptions about the course, its content and the facilitator; however, these studies are seemingly not used internally, and are not available widely for learning to be shared. In general, very few respondents regularly use the findings of their aforementioned pre-training or post-training measurement efforts.

Challenges and recommendations in the measurement of the effect of training on evaluation capacity building

ECB initiatives in Africa are often poorly designed and planned (Tarsilla 2014), and continue to focus primarily on training, despite its known shortcomings as a standalone offering (Dillman 2012; Tarsilla 2014). When training is offered as a standalone intervention to individuals, outside of any other supporting mechanisms of ECB, it is not effective, and it has been lamented that more practical efforts are required to enable participants to apply what has been learnt in practice (Tarsilla 2014). The findings of the study conducted by Taylor-Ritzler et al. (2013) confirmed, statistically, that individual factors (knowledge and motivation) are inextricably linked to organisational factors (leadership, support, resources and a learning climate), providing further empirical evidence that capacity building at an individual level is an insufficient condition for building organisational or system-wide evaluation capacity.

Suarez-Balcazar & Taylor-Ritzler (2013) posit that the ‘science-practice model’ of ECB is needed in order to move the field forward towards this outcome. In other words, there needs to be a virtuous cycle of learning from theory, research and practice. It is held that not enough is currently being done to ensure that research around the measurement of ECB, the practice thereof and efforts to empirically test the impact of ECB interventions inform each other in a continuous and dynamic process (Suarez-Balcazar & Taylor-Ritzler 2013).

Poor planning and design hamper the measurement of ECB efforts as well, as confirmed in the survey findings, wherein respondents highlighted the challenge of weak planning of ECB impact assessments and the lack of attention paid to measuring the effect or impact of ECB interventions. Even though some authors (Taylor-Ritzler et al. 2013; Volkov & King 2007) have developed tools to guide organisational ECB, there are not enough validated instruments to measure evaluation capacity in general, and those that exist are seemingly not known or used by survey respondents. Respondents to the survey lamented the absence of adequate indicators to measure the effect of their ECB efforts. No mention was made of instruments such as the ECAI or the checklist for Building Organisational Evaluation Capacity (Taylor-Ritzler et al. 2013; Volkov & King 2007). Survey respondents agreed that the solution partly lies in the availability of proper monitoring data, improved knowledge management and more rigorous methods of measuring the effect of training (such as the use of impact evaluations).

LaFond et al. (2002) have pointed to three challenges to measuring capacity: differing views on what constitutes the link between capacity and performance, what determines ‘adequate’ performance and how external conditions impact on capacity and performance. There is a need to collect data that will assess how a number of variables could impinge on ECB outcomes, including participant selection, motivation and incentive to learn, organisational support, the presence or absence of institutional structures to support the implementation of skills acquired, individual uptake of training material, and so forth. Instruments that could test a variety of variables and correlate these with the effect of the training on the immediate, proximal and distal outcomes of the ECB intervention could improve the curriculum as well as contribute towards the improved achievement of development outcomes. However, this will require resources (human and financial), which was raised by survey respondents as one of the challenges they face in undertaking any efforts towards the measurement of training effects.

Limitations of the review

The small sample size and non-probability sampling method in this review do not allow for a generalisation of the results to the entire ECB community across the African continent. Furthermore, the views of the individual respondents cannot be held to be representative of their respective organisations. Some of the institutions sampled are also hosts to other institutions that specialise in training provision and other ECB interventions, and this distinction was not made in the instrument. The survey instrument also did not assess whether institutions supplemented their training in M&E with other ECB interventions, which should form part of future research of this nature.

The findings of this review indicate some of the research gaps in ECB as well as the kinds of empirical evidence that are required to adequately measure the effect of training on changing knowledge, behaviours, attitudes and practices. Future studies of this nature would need to adopt a more robust probability sampling method and possibly employ more extensive methods of both quantitative and qualitative data collection (such as interviews and focus groups) in order to emerge with a deeper and more nuanced analysis of the conditions under which ECB (and training in particular) is most suited to delivering on development outcomes.

Recommendations

A number of questions remain unanswered around what is considered a ‘capacitated’ individual or institution as it pertains to evaluation, how this is measured and what interventions are required to build such capacity. In response to the objectives of this review, the following recommendations are a reflection of what institutions (especially those whose mandate involves ECB) need to focus on going forward if ECB is to have any impact on strengthening evaluation practice on the continent in a meaningful way.

What can we learn about the role of training in ECB and its effect on improving knowledge, attitudes, behaviour and practices in evaluation practice, and how this can be better measured?

As emphasised above, training forms one part of a package of interventions in M&E capacity building. The findings of the review have revealed that M&E training is often delivered as a standalone intervention in ECB, but that an integrated set of services (which may include technical assistance, coaching and other means of support) is more effective in strengthening evaluation practice. The design of ECB interventions for institutions should, ideally, include a multiplicity of strategies and activities and should (as far as possible) not consist of training as a standalone event.

What we can learn from the extent of, and ways in which, selected key institutions on the African continent measure the effect of training on strengthening evaluation capacity

The findings have shown that capacity building is a multidimensional, dynamic and complex phenomenon, which is influenced by many elements, including individuals’ existing capacity, their knowledge, behaviours and attitudes. It is also influenced by the context within which the individual or institution operates. M&E capacity is not measured in a standardised manner on the African continent, and it is therefore difficult to build an empirical case for why there is a need to continue providing short course training to an ever-growing clientele. There is a critical research gap in evaluating the impact of training on changes in attitudes, behaviours and practices in the evaluation sector (as well as, to some degree, knowledge), and more research is therefore required in this area. It is also recommended that all training interventions include, as a matter of course, a plan for evaluating the effect of the training on the acquisition of evaluation competencies, which needs to be agreed upon at the design phase of the ECB intervention.

There is scant use of tracer studies to track M&E training participants’ acquisition of knowledge and skills through coursework programmes in the ECB sector. One is even less likely to find studies that track whether training has improved participants’ ability to use their newfound skills in a way that improves evaluation practice. Evaluative thinking is a fundamental tenet of M&E, and it goes without saying that any interventions of this nature need to be monitored and evaluated to test the soundness, efficiency and effectiveness of the intervention logic. All institutions embarking on ECB interventions should therefore ensure that tracer studies are built into all training of this nature, so that empirical evidence about the effect of training on strengthening evaluation practice may be built up over time.

The write-up and publication of the results by those who do conduct such studies would be very useful, and are critical for the evaluation community in Africa to learn more about the strategies and methods of ECB (in particular training) that work well, and those that do not. There is also an opportunity to develop a database of findings on the evaluation of ECB efforts, which would be valuable to the evaluation community in general, and to ECB individuals and institutions in particular, in informing the trajectory of future ECB interventions.

Much more empirical and rigorous research is needed to build a clear understanding of what conditions are ideal for the transfer of evaluation skills, competencies and knowledge, and the strengthening of evaluation practice at large.

What are some of the main challenges that need to be addressed in the measurement of ECB efforts, and how can these be addressed?

The multiplicity of challenges in ECB in Africa hampers the measurement of capacity building interventions. These include the lack of clarity and consensus on the meaning of evaluation capacity, the various dimensions that constitute capacity, the levels at which capacity may be identifiable (e.g. individual and organisational), as well as the absence of indicators and baseline information on evaluation capacity on the continent. The absence of a variety of robust measures of evaluation capacity, and in some cases the lack of knowledge about those that do exist, means that (1) more empirical research is needed on measuring the attribution of various dimensions of ECB to strengthening evaluation practice and (2) more work is needed to mainstream the use of instruments that do exist (e.g. the ECAI; Taylor-Ritzler et al. 2013). It is also important to isolate training as a specific capacity building intervention in such research, owing to its ubiquitous nature in ECB across the continent.

ECB interventions must be better designed to achieve the results they set out to achieve, and a longer-term view of capacity building needs to be adopted by institutions who are actively involved in ECB.

Conclusion

This article presents the findings of a desktop review of the measurement of ECB in strengthening evaluation practice, combined with a snapshot survey of institutions from parts of the African continent. The review explored the challenges facing the ECB sector in general, as well as the difficulties in measuring the effect of ECB interventions, in particular training, as only one aspect of ECB. The findings revealed that the complexity of defining ECB renders it almost impossible to test whether ECB processes, activities and outcomes in general are ultimately effective. The many intersecting variables that play a role in the extent to which evaluation practice is strengthened in individuals and organisations would best be measured through empirical studies on the continent, of which there are currently very few. The review also found that there is a general gap in research in Africa on the impact of training on improving the competencies required for quality evaluations. More work needs to be done to determine, through research, what conditions are ideal for the transfer of evaluation skills, competencies (which a number of evaluation associations are currently working on) and knowledge, and for the strengthening of evaluation practice at large.

ECB has become a ‘hot topic’ within the evaluation field (Preskill & Boyle 2008:443). This desktop review is important because it begins to highlight the research gaps in measuring the effect of ECB, and of training in particular, on strengthening evaluation practice in Africa. There is a need for more empirical evidence about ‘what works’ to inform the ways in which capacity building is defined, and how capacity building interventions are developed, implemented and measured. Sobeck and Agius (2007:237) posit that information on the processes that lead to successful capacity building, and the strategies that lead to better outcomes in this regard, can help capacity building funders decide where to invest. This article will be useful in guiding future empirical research into more rigorous methods of measuring the effect of training within broader ECB efforts and, more importantly, in shedding light on the quest to strengthen evaluation practice.

Acknowledgements

This article contributes to addressing the gap in research on measuring the effect of evaluation capacity building interventions in general, and across the African continent in particular. It offers a snapshot of the extent to which the effect of training on strengthening evaluation competencies and practice is measured on the continent, and identifies areas of future research needed to enhance evidence-based policy and development practice.

Competing interests

The authors declare that they have no financial or personal relationship that may have inappropriately influenced them in writing this article.

Authors’ contributions

C.M. was the project leader and was responsible for the problem identification and project design. Both C.M. and M.R. reviewed relevant literature on the topic and jointly did the write-up.

References

Bates, R., 2004, ‘A critical analysis of evaluation practice: The Kirkpatrick model and the principle of beneficence’, Evaluation and Program Planning 27, 341–347. https://doi.org/10.1016/j.evalprogplan.2004.04.011

Brinkerhoff, D. & Morgan, P.J., 2010, ‘Capacity and capacity development: Coping with complexity’, Public Administration and Development 30, 2–10. https://doi.org/10.1002/pad.559

Burgess, H. & Carpenter, J., 2008, ‘Building capacity and capability for evaluating the Outcomes of Social Work Education (the OSWE Project): Creating a culture change’, Social Work Education 27(8), 898–912. https://doi.org/10.1080/02615470701844308

CLEAR-AA, 2015, Report on the results of a tracer study of DPME’s evaluation capacity development in-service training courses, Sept 2012 to March 2014, unpublished government report.

Cohen, C., 2006, ‘Evaluation learning circles: A sole proprietor’s evaluation capacity-building strategy’, New Directions for Evaluation Fall, 85–93. https://doi.org/10.1002/ev.200

Crisp, B.R., Swerissen, H. & Duckett, S.J., 2000, ‘Four approaches to capacity building in health: Consequences for measurement and accountability’, Health Promotion International 15(2), 99–107. https://doi.org/10.1093/heapro/15.2.99

D’Andrea, M., Daniels, J. & Heck, R., 1991, ‘Evaluating the impact of multicultural counselling training’, Journal of Counselling and Development 70(1), 143–150. https://doi.org/10.1002/j.1556-6676.1991.tb01576.x

Davies, R. & MacKay, K., 2014, ‘Evaluator training: Content and topic valuation in university evaluation courses’, American Journal of Evaluation 35(3), 419–429. https://doi.org/10.1177/1098214013520066

Dillman, L.M., 2012, ‘Evaluator skill acquisition: Linking educational experiences to competencies’, American Journal of Evaluation 34(2), 270–285. https://doi.org/10.1177/1098214012464512

Horton, D., 1999, ‘Building capacity in planning, monitoring and evaluation: Lessons from the field’, Knowledge, Technology and Policy 11(4), 152–188. https://doi.org/10.1007/s12130-999-1008-2

Labin, S.N., 2014, ‘Developing common measures in evaluation capacity building: An iterative science and practice process’, American Journal of Evaluation 35(1), 107–115. https://doi.org/10.1177/1098214013499965

LaFond, A.K., Brown, L. & Macintyre, K., 2002, ‘Mapping capacity in the health sector: A conceptual framework’, International Journal of Health Planning and Management 17, 3–22. https://doi.org/10.1002/hpm.649

Lucas, B., 2013, Current thinking on capacity development, Applied Knowledge Services Helpdesk Research Report, Web-published report, viewed 26 October 2016, from www.gsdrc.org

Medina, L., Acosta-Perez, E., Velez, C., Martinez, G., Rivera, M., Sardinas, L. et al., 2015, ‘Training and capacity building evaluation: Maximising resources and results with Success Case Method’, Evaluation and Program Planning 52, 126–132. https://doi.org/10.1016/j.evalprogplan.2015.03.008

Morariu, J., Reed, M. & Brennan, K., 2011, Capacity and Organizational Readiness for Evaluation (CORE) tool, viewed 08 February 2017, from http://www.pointk.org/resources/node/593

Mubuuke, A.G., Businge, F. & Kiguli-Malwadde, E., 2014, ‘Using graduates as key stakeholders to inform training and policy in health professions: The hidden potential of tracer studies’, African Journal of Health Professions Education 6(1), 52–55.

Naccarella, L., Pirkis, J., Kohn, F., Morley, B., Burgess, P. & Blashki, G., 2007, ‘Building evaluation capacity: Definitional and practical implications from an Australian case study’, Evaluation and Program Planning 30, 231–236. https://doi.org/10.1016/j.evalprogplan.2007.05.001

Nielsen, S.B., Lemire, S. & Skov, M., 2011, ‘Measuring evaluation capacity – Results and implications of a Danish study’, American Journal of Evaluation 32(3), 324–344. https://doi.org/10.1177/1098214010396075

Podems, D., 2014, ‘Evaluator competencies and professionalising the field: Where are we now?’, The Canadian Journal of Program Evaluation 28(3), 127–136.

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), Art. #25, 9 pages. https://doi.org/10.4102/aej.v1i1.25

Preskill, H. & Boyle, S., 2008, ‘A multidisciplinary model of evaluation capacity building’, American Journal of Evaluation 29(4), 443–459. https://doi.org/10.1177/1098214008324182

Ross, S. & Hopson, R., 2006, ‘Book review: Building evaluation capacity: 72 activities for teaching and training by Hallie Preskill and Darlene Russ-Eft’, American Journal of Evaluation 27(1), 124–127. https://doi.org/10.1177/1098214005284982

Sobeck, J. & Agius, E., 2007, ‘Organisational capacity building: Addressing a research and practice gap’, Evaluation and Program Planning 30, 237–246. https://doi.org/10.1016/j.evalprogplan.2007.04.003

Stevenson, F., Florin, P., Scott Mills, D. & Andrade, M., 2002, ‘Building evaluation capacity in human service organisations’, Evaluation and Program Planning 25, 233–243. https://doi.org/10.1016/S0149-7189(02)00018-6

Suarez-Balcazar, Y. & Taylor-Ritzler, T., 2013, ‘Moving from science to practice in evaluation capacity building’, American Journal of Evaluation 35(1), 95–99. https://doi.org/10.1177/1098214013499440

Tarsilla, M., 2014, ‘Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward’, African Evaluation Journal 2(1), Art. #89, 13 pages. https://doi.org/10.4102/aej.v2i1.89

Taylor-Ritzler, T., Suarez-Balcazar, Y., Garcia-Iriarte, E., Henry, D.B. & Balcazar, F.E., 2013, ‘Understanding and measuring evaluation capacity: A model and instrument validation study’, American Journal of Evaluation 34(2), 190–206. https://doi.org/10.1177/1098214012471421

Volkov, B.B. & King, J.A., 2007, A checklist for building organisational evaluation capacity, viewed 08 February 2017, from http://dmeforpeace.org/sites/default/files/Volkov%20and%20King_Checklist%20for%20Building%20Organizational%20Evaluation%20Capacity.pdf

Wandersman, A., 2014, ‘Getting to outcomes: An evaluation capacity building example of rationale, science, and practice’, American Journal of Evaluation 35(1), 100–106.

Footnote

1. The 12th organisation remained anonymous.


 
