Article Information

Authors:
Michele Tarsilla1

Affiliations:
1Evaluation Capacity Development Group, United States

Correspondence to:
Michele Tarsilla.

Email:
mitarsi@hotmail.com

Postal address:
2070 Belmont Road NW, Apt 607, Washington DC 20009, United States

Dates:
Received: 14 Jul. 2014
Accepted: 10 Sep. 2014
Published: 18 Dec. 2014

How to cite this article:
Tarsilla, M., 2014, ‘Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward’, African Evaluation Journal 2(1), Art. #89, 13 pages. http://dx.doi.org/10.4102/aej.v2i1.89

Note: The paper was presented at the 7th African Evaluation Association (AfrEA) conference held in Yaoundé, Cameroon, 1–5 March 2014.

Copyright Notice:
© 2014. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward
Abstract

Despite the copious resources allocated by international development partners to enhance African countries’ capacity to evaluate the performance and impact of development programmes and policies, most evaluation capacity building (ECB) efforts have not yielded the expected results. Time and energy have been focused on the measurement of short-term effects whilst long-term results have largely remained elusive. As a result, a variety of actors across the continent are calling for more innovative strategies. In particular, more efforts are currently being made to revitalise the evaluation function in international development at the global level and to encourage a shift from short-term training towards more contextually relevant efforts centred on systemic learning, equity and sustainability. This article aims to provide a critical overview of ECB initiatives undertaken by international development partners in Africa over five years (2009–2014), to identify what worked well and to investigate how such initiatives could be improved. The common issues that emerged stress the need for harmonisation and collaboration between international partners and African institutions, and for more effective collaboration with in-country institutions and organisations committed to evaluation capacity development (ECD). The analysis in this article is timely and relevant both for the strengthening of so-called made-in-Africa evaluation methods and approaches and for the roll-out of systemic and organic ECD strategies. The debate spurred by this article is likely to contribute to the current global discussion on what strategies ought to be adopted as part of the post-2015 agenda, a discussion in which ECD will undoubtedly grow in both importance and intensity.

Résumé

Malgré les ressources abondantes allouées par les partenaires internationaux de développement pour renforcer les capacités des pays africains à évaluer les résultats et l'impact des programmes et politiques de développement, la plupart des efforts de Renforcement des capacités en évaluation (RCE) n'ont pas donné les résultats escomptés. Le temps et l’énergie ont été concentrés sur la mesure des effets à court terme tandis que les résultats à long terme sont largement restés absents. En conséquence, une variété d'acteurs à travers le continent appelle à des stratégies plus innovantes. En particulier, des efforts supplémentaires sont actuellement déployés pour revitaliser la fonction d’évaluation dans le développement international au niveau mondial et pour améliorer le passage de la formation à court terme à des efforts plus pertinents selon le contexte, d'apprentissage systémique, d'équité et de durabilité. Cet article vise à donner un aperçu critique des initiatives de RCE entreprises par les partenaires internationaux de développement en Afrique sur cinq ans (2009–2014), à identifier celles qui ont bien fonctionné et à approfondir la façon dont elles pourraient être améliorées. Les questions communes soulignent la nécessité d'une harmonisation et d'une collaboration entre les partenaires internationaux et les institutions africaines, et d'une collaboration plus efficace avec les institutions et organisations nationales impliquées dans le RCE. L'analyse dans cet article est opportune et pertinente à la fois pour le renforcement des soi-disant méthodes et approches d’évaluation « made in Africa » (fabriquées en Afrique) et le déploiement de stratégies de RCE systémiques et organiques. Le débat suscité par le présent article est susceptible de contribuer au débat mondial actuel sur les stratégies devant être prises dans le cadre de l'ordre du jour post-2015. De même, ceci stimulera le débat sur le RCE, qui gagnera sans aucun doute en importance et en intensité.

Introduction

According to the Organisation for Economic Co-operation and Development (OECD 2006), evaluation capacity development (ECD) is ‘understood as the process whereby people, organisations and society as a whole unleash, strengthen, create, adapt and maintain evaluation capacities over time. Strengthening evaluation capacities is not an end goal in itself, but, should be seen, rather, as a means to support more effective policies and programmes to achieve development results’. Despite this definition, most ECD work funded by international development partners in Africa to date has been an unsystematic ‘donor-centric’ endeavour. Driven by noble intentions but even more so by their special interest in ensuring the success and leveraging the impact of their respective programmes in a variety of countries, international ECD funders have been committing most of their resources to one-off, short-term activities (e.g. training aimed at individuals, technical assistance provided to selected ministries, study visits amongst grantees). As a result, the outcome of ECD initiatives funded by international partners in Africa to date has not been the promotion of a stronger evaluation culture within systems, as suggested by the earlier definition, but rather the enhancement (modest in the majority of cases) of technical evaluation capacity amongst a few local staff members working for those very same international partners who fund such initiatives. Consequently, most ECD strategies – as they are conceptualised today – still promote the smooth implementation of ‘aid’ processes and, by discouraging context-specific learning and ownership of the evaluation function, hamper ‘development’ effectiveness.

Such a conceptualisation of ECD initiatives has certainly served international partners’ accountability interests (Picciotto 2009a). However, the perpetuation of a ‘culture of compliance’ shared by many African countries and governments in relation to the stringent reporting requirements imposed by international ECD funders has placed most of the continent (South Africa remains one of the few exceptions) in an aggravated state of dependency and hiatus. Neglecting the fact that local evaluation capacity already exists across the continent, international ECD funders have often been perceived as claiming to be the ones ‘building’ it from scratch. As a corollary, the priorities set by the funders, on which the conceptualisation of such ECD programmes is based, have not adequately reflected the endogenous learning needs and aspirations characterising the systems where the ECD funders operate.

Unfortunately, the scenario depicted above, as bleak as it may seem, is predominant, as attested by a number of recent studies on this topic. According to the recent multilateral aid review conducted by the Department for International Development (DFID 2011), for instance, only six donors (out of the total sample of 30 included in the study) use partner country systems for at least two-thirds of their bilateral aid. Similarly, only one out of 13 ECD-friendly targets set by donors in relation to their efforts to strengthen capacity through coordinated donor support had been met. The same study also concluded that most of the donor support for capacity development both within and outside of the evaluation arena (accounting for $25 billion per year) remains supply driven and that technical cooperation initiatives appear more tied than other forms of bilateral assistance. Such figures are discouraging.
However, ECD practices amongst international partners have evolved over the last decade, and new opportunities for enhancing current African ECD initiatives – and harmonising them at various levels – are available today, as advocated in the Paris Declaration (2005), the Accra Agenda for Action (2008) and the Busan High-Level Forum on Aid Effectiveness (2011). Highlighting such opportunities, and reflecting upon them towards the formulation of possible ‘exit strategies’ from the current ECD hiatus in Africa, is one of the objectives of this article.

Study objective

The article, which builds upon broader unpublished research on ECD conducted by the same author between December 2013 and April 2014, has three main objectives:

  • Provide a description of the current landscape of international partners’ ECD initiatives in Africa through a review of the ECD strategies and programmes managed by multilateral organisations, bilateral development agencies and foundations across the continent.

  • Identify a series of lessons learned and good ECD strategies that larger ECD funders in Africa might want to take into account to promote contextually relevant evaluation theories and practice more effectively in their future work.

  • Outline opportunities for more fruitful ECD collaborations amongst development partners as well as between international ECD funders and Africa-based institutions, towards more holistic and comprehensive national and regional ECD strategies to be pursued as part of the post-2015 agenda.

Key questions

For a better understanding of, and reflection on, the current ECD landscape in Africa, the article addresses four main questions:

  1. What is the current level of ECD recognition amongst development partners operating in Africa?

  2. What are the main modalities of ECD implementation amongst international funders in Africa today?

  3. What is the content of the ECD initiatives implemented by international funders in Africa today?

  4. How do international funders evaluate the effectiveness of their ECD strategies in Africa?

Based on the international ECD funders’ feedback as well as a thorough literature review on the topic, the paper also aims to explore two additional questions:

  • What has been learned to date about what works and what does not in ECD in Africa?

  • What are the possible synergies that could be put in place between ECD funders and African institutions to achieve more effective ECD programming in Africa in the future?

Methodology
Design

In general terms, the study described in this article follows a descriptive, cross-sectional design featuring both the simultaneous and sequential use of different qualitative methods for exploratory purposes (Bamberger, Rugh & Mabry 2012; Greene & Caracelli 1997; Mertens & Tarsilla 2014; Stake 1995; Yin 2009).

Case selection

The 21 organisations included in the sample were selected in collaboration with a number of African evaluators and ECD funders before the study began, based on two main inclusion criteria: organisations needed to (1) source most of their resources from outside Africa and (2) demonstrate a financial commitment to ECB in Africa. Organisations also needed to fulfil one or both of the following additional criteria: (3) possess prior experience of direct support to the African Evaluation Association (AfrEA) or (4) contribute to the current discourse on the future of ECB across Africa, through presentations at international fora or ad hoc research on the topic.

Methods

The study presented in this article includes the use of different methods during its four phases:

  1. A review of ECB and ECD resources available on the websites of a purposive sample of international organisations funding ECD initiatives in Africa (Appendix I). Such resources included project documents, evaluation reports, official policies and videos on ECD-related issues. The findings of the literature review conducted during the first phase informed the development of the interview protocol used during the second phase.

  2. A series of semi-structured telephone interviews (between 60 and 90 minutes in duration), each consisting of 15 questions grouped according to the six key study questions. Interviews were held with a sample of evaluation officers and heads of evaluation offices (n = 36) representing 80% of the organisations included in the original sample (n = 21) used during the literature review phase. Occasional online follow-up interviews and offline discussions with ECD funders and specialists were organised at the end of this second phase.

  3. A comparative analysis of the literature review and semi-structured interview findings. Upon the conclusion of the semi-structured interviews and the offline discussions, all responses, previously transcribed in a word processor format, were grouped together by question and sub-question (each question generally consisted of three sub-questions) and analysed. The major themes and categories identified in the course of this comparative analysis expanded upon those that had emerged during the content analysis of the resources included in the review of the organisations’ websites. In order to enhance the rigour of the analysis, the emerging themes were discussed and agreed upon with two researchers who had contributed to the initial website review as well as with a senior researcher tasked with the study’s quality assurance. (A schematic illustration of the tallying logic behind this step is provided at the end of this subsection.)

  4. A critical review of preliminary findings by international ECD funders and African practitioners. The preliminary findings of this article were presented during the keynote speech delivered at the AfrEA biennial conference in Yaoundé, Cameroon, on 06 March 2014. On that occasion, a panel of ECD funders (representing all those interviewed as part of this article) commented on the findings of the article. Similarly, a purposive sample of African practitioners (n = 14) representing different national evaluation associations in Africa commented on the key preliminary findings of the article, shared with them a few days before the conference. This article incorporates all such comments whilst protecting respondents’ confidentiality.

Furthermore, a systematic review of both ECB and ECD literature (n = 54) produced after June 2010 was conducted to triangulate the findings from the semi-structured interviews, primarily to shed some light on: (1) the main lessons learned on what works and what does not in the way international ECD funders plan, implement and evaluate ECD initiatives across Africa and (2) possible synergies that could be put in place between ECD funders and African institutions to achieve more effective ECD programming in Africa in the future.
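For readers interested in the mechanics behind the ‘x% of respondents (n = y)’ figures reported throughout the Results section, the sketch below shows one way in which coded interview responses can be grouped by study question and converted into respondent counts and percentages. This is a minimal, hypothetical illustration rather than the tooling actually used in the study (whose analysis was performed manually on transcribed responses); the sample data, function and variable names are invented.

```python
from collections import defaultdict

# Hypothetical coded responses: (respondent_id, question_id, theme).
# In the study, responses were transcribed, grouped by question and
# sub-question, and the emerging themes were agreed upon with two other
# researchers and a senior quality-assurance researcher.
coded_responses = [
    (1, "Q3", "training_is_main_ecd_content"),
    (2, "Q3", "training_is_main_ecd_content"),
    (3, "Q3", "content_developed_outside_africa"),
    # ...one tuple per coded statement
]

TOTAL_RESPONDENTS = 36  # evaluation officers interviewed in the study

def theme_frequencies(responses, question_id, total=TOTAL_RESPONDENTS):
    """Count distinct respondents per theme for one question and express
    each count as a percentage of all interviewees."""
    respondents_by_theme = defaultdict(set)
    for respondent, question, theme in responses:
        if question == question_id:
            respondents_by_theme[theme].add(respondent)
    return {theme: (len(ids), round(100 * len(ids) / total))
            for theme, ids in respondents_by_theme.items()}

for theme, (n, pct) in theme_frequencies(coded_responses, "Q3").items():
    print(f"{pct}% of respondents (n = {n}): {theme}")
```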

Ethical considerations

Given the low sensitivity of the study questions, simple informed consent was sought either verbally or in writing from each individual respondent prior to the interview. No vulnerable groups were interviewed as part of this study. Although a few quotations from respondents have been included in this study, anonymity was guaranteed.

Study limitations

This study has three main limitations:

  1. The study findings are based on a review of ECD practices amongst some of the larger international development agencies and organisations working in Africa. As a result, some other relevant African ECD initiatives (either funded by smaller international partners or supported by African organisations and governments) might have been missed, and the applicability of the study conclusions beyond larger international funders of ECD initiatives should therefore be assumed with caution.

  2. The peer-reviewed and grey literature reviewed as part of this study was selected based on an electronic search for words that were explicitly related to evaluation (e.g. ‘ECD’ and ‘evaluation capacity building’). However, as some agencies might fund programmes that are similar in scope to ECD programmes but have different names, some information may be missing.

  3. A series of semi-structured interviews were conducted with representatives of different development partners. However, as respondents did not always have an in-depth knowledge of all the ECD initiatives sponsored by their respective organisations and their respective organisations’ websites often lacked thorough ECD-related information, this study might not have been able to cover the entirety of ECD initiatives funded by each of the interviewed partners.

Results
Question 1: What is the current level of recognition of evaluation capacity development amongst international development partners in Africa?

Three different ‘evaluation capacity development recognition scenarios’ emerged

Scenario 1: Most international partners who have a formal evaluation policy (DFID, World Bank) do not explicitly recognise ECD in it as a priority area of intervention.

Scenario 2: Only a few of the international partners who have a formal evaluation policy have included an ECD section in it. However, in doing so, they have often failed to provide sufficient guidelines on how to operationalise ECD in the field, and some have even started contracting out the ECD function to private firms in a variety of countries (e.g. the United States Agency for International Development [USAID]).

Scenario 3: The majority of the international partners lacking a formal evaluation policy do not have an ECD strategy or any other document available that could attest to a well-articulated position on ECD-related issues. Interestingly, they still fund piecemeal ECD initiatives (often contributing resources to multi-partner ECD programmes) (Swedish International Development Agency [SIDA], Swiss Agency for Development and Cooperation [SDC], MasterCard Foundation, Irish Aid).

Question 2: What are the main modalities of evaluation capacity development implementation amongst international funders in Africa today?

ECD funders do not seem to follow a common ECD pathway. Although most international ECD funders implemented a similar range of activities (training was everybody's favourite activity to support despite its limitations), none of them relied upon a predetermined package of interventions (e.g. a sort of menu of pre-arranged activities to choose from). Overall, nine main trends in ECD implementation were observed.

Firstly, the tendency amongst larger ECD funders (DFID, German Development Agency [GIZ], World Bank) to implement projects and programmes that are entirely devoted to the strengthening of evaluation capacity, either short-term (e.g. the International Program for Development Evaluation Training [IPDET] organised in Canada, China and the Czech Republic) or long-term (the DFID Capacity Support to Uganda's Government Project or the GIZ Supporting ECD Project in Government and Civil Society in Uganda).

Secondly, the increasing practice amongst smaller ECD funders to mainstream a discrete number of ECD activities in projects and programmes with a different sectorial or thematic focus (e.g. MasterCard Foundation, SIDA, Gates Foundation).

Thirdly, the increasingly common provision of funding for ECD activities to multi-donor initiatives featuring a specific ECD mandate and a variable primary target, such as the Centre for Learning on Evaluation and Results (CLEAR) and EvalPartners. This is particularly common amongst relatively younger organisations that have had less exposure to ECD-related issues in the past (MasterCard Foundation) and some of the bilateral development agencies (SIDA, SDC, Finnish Ministry of Foreign Affairs).

Fourthly, the continued prominence of short-term training despite the growing realisation of its limitations.

Fifthly, the realisation amongst several bilateral development agencies that, for ECD initiatives to be successful in Africa, a stronger internal evaluation capacity within their respective organisations is first needed. Agencies like the UN Food and Agriculture Organization, the Irish Ministry of Foreign Affairs, SIDA and USAID are currently making an effort to first build their internal evaluation capacity and then to convey more coherent and specialised messages on evaluation to implementing partners and clients in Africa.

Sixthly, the strengthening of international partners’ learning agendas and the subsequent increased focus on mutual and context-specific learning beyond the traditional accountability focus of past ECD strategies. Due to their strong focus on results-based management (RBM), most ECD funders are experiencing a tension between their learning agenda (still at an incipient stage) and their more deeply ingrained accountability-driven practices.

Seventhly, the proliferation of online platforms for the generation and dissemination of evaluation knowledge in the African region. Such is the case of the EvalPartners webinars, the AfrEA list-serve, the IPDET list-serve, the IDEAS list-serve, the World Bank Independent Evaluation Group (IEG) online library and the USAID Learning Lab platform.

Eighthly, the decentralised funding and management of ECD initiatives within international development partners, whereby the central evaluation office might formulate some generic strategies on ECD whilst country offices, functional and programme bureaus, as well as consulting firms recruited for a three-to-five-year period, engage more directly with in-country partners to implement ECD initiatives in the field. Such is the case of the 5-Year Technical Assistance Plans, consisting of ‘Monitoring and Evaluation Platforms’, signed by four USAID country offices in Africa (Ethiopia, Rwanda, South Sudan and Uganda) with evaluation consulting firms.

Ninthly, the majority of ECD funders are still targeting either the supply or the demand of evaluation but not both at the same time. ECD targeting is often dictated by an organisation's mandate: UN agencies tend to primarily work with national governments (demand) whilst foundations traditionally privilege interactions with civil society and private sector organisations (supply).

Question 3: What is the content of the evaluation capacity development strategies implemented by international funders in Africa today?

Four main findings emerged with respect to the content of ECD strategies currently funded by international partners in Africa.

Firstly, ECD strategies mostly consist of short-term training initiatives. In particular, 83% of the evaluation officers in the sample (n = 30) stated that evaluation topics are generally presented as part of monitoring and evaluation (M&E) training courses without an adequate differentiation between the two concepts, and that the related objectives are usually determined according to funders’ priorities.

Secondly, the content of evaluation training modules is more theoretical than practical. When asked to provide feedback on the relevance of evaluation workshops to their professional needs, 80% of the African practitioners commenting on the preliminary findings of this study (n = 11) responded that, despite the facilitators’ expertise, the ‘practical utility’ of what they learned in class was quite minimal. In particular, respondents, consistent with the literature (Clinton 2014; Suarez-Balcazar & Taylor-Ritzler 2013), lamented the fact that most ECD programmes provided them with no indirect ECB support (Cousins et al. 2014), that is, no opportunities to put into practice what they had learned in training.

Thirdly, an increasing number of ECD strategies stress the critical link existing between performance management and evaluation. In order to mitigate the risk of promoting evaluation as a tool of compliance over learning, 53% of respondents (n = 19) have put in place a sort of blended approach. Such is the case with USAID, which recently issued a call for the combined delivery of evaluation and performance management training and technical assistance activities aimed at both its staff in Washington, DC, and overseas. Similarly, the Bill and Melinda Gates Foundation offers evaluation training and other types of evaluation technical assistance that are not very different from the efforts conducted by other funders under the RBM agenda and evidence-based policymaking support budgets.

Fourthly, most training content is developed outside of Africa. As attested by 67% of respondents (n = 24), and as confirmed by a thorough review of the evaluation resources made available on funders’ websites and used during the development of their curricula, most of the knowledge disseminated through ECD programmes in Africa today is generated in North America, Europe or Australia, and no real made-in-Africa evaluation methodologies or approaches (AfrEA 2007; OECD 2005) are being included in training curricula or other forms of ECD support.

Question 4: How do international funders evaluate the effectiveness of their evaluation capacity development strategies in Africa?

When asked to what extent their ECD programmes had ‘left something behind’ in the targeted countries (e.g. in terms of enhanced evaluation capacity amongst the organisations and individuals involved in their ECD programmes or activities), 92% of respondents (n = 33) could not provide any evidence of programme sustainability. Of all the different ECD initiatives, the four-week IPDET programme in Canada – despite the relatively high cost of individual enrolment – was cited by several ECD funders as one of the most effective, thanks also to the vibrant community of practice that it has fostered since its creation 13 years ago. Overall, with respect to the evaluation of ECD initiatives sponsored by the international organisations included in this study, four main scenarios were identified.

Firstly, the dearth of evaluations of ECD initiatives: 86% of respondents (n = 31) did not assess the results of the funding allocated by their agencies for ECD purposes.

Secondly, the tendency to contract private firms to evaluate the effectiveness of ECD initiatives. Such is the case of two recent evaluations: the EvalPartners formative evaluation announced in a variety of online fora in April 2014 and the CLEAR mid-term evaluation funded by the DFID Evaluation Office on behalf of all the CLEAR partners (findings are available at: http://www.theclearinitiative.org/PDFs/CLEAR%20Midterm%20Evaluation%20-%20Final%20Report%20Oct2014.pdf). The methodology of this second evaluation (the methodology of the first one was not known at the time the study was conducted), whose questions were developed in accordance with the five DAC criteria (relevance, effectiveness, efficiency, impact and sustainability), included the use of network analysis, stakeholder interviews, opinion-leader surveys, interviews with service providers and innovative ways of seeking performance feedback, such as crowdsourcing.

Thirdly, a combination of internal and external evaluations of the same ECD initiatives (especially those targeting individuals through short-term and medium-term training) in those rare cases when an evaluation was conducted. Such is the case of the IPDET programme, whose effectiveness has been evaluated over the years through two primary methods: (1) pre-tests and post-tests administered to programme participants, including their feedback for improvements, and (2) alumni tracer studies in 2004 and 2010 (Buchanam 2004; Cousins, Elliott & Gilbert 2010, 2011).

Fourthly, and less common than the other three scenarios, the conduct of a joint evaluation of ECD programmes funded by multiple partners in the same country. Such is the case of the 2010 evaluation of the ‘DFID and EC capacity building support project’ aimed at Uganda's Office of the Prime Minister, the Ministry of Finance, Planning and Economic Development, and the Uganda Bureau of Statistics.

Question 5: What are the lessons learned on what is working and what is not working in evaluation capacity development in Africa?

Five main findings emerged with respect to the strategies perceived by respondents, and confirmed by specialised literature, as the most effective to promote ECD in Africa.

Firstly, short-term training initiatives targeting individuals are no longer effective unless they are combined with other activities and are implemented as part of systemic processes. In agreement with the diagnostic conducted by the OECD DAC ECD Task Team between 2012 and 2013, 89% of respondents (n = 31) stressed the need for a shift from sporadic evaluation training initiatives of questionable scope and effectiveness to longer-term programmes consisting of initiatives with a real practical application of evaluation principles and theories. In their opinion, piecemeal and ‘generic’ training aimed at a scattered number of individuals within in-country organisations and institutions is no longer effective. However, according to 80% of respondents (n = 29), it is not yet clear how to implement an alternative and cost-effective ECD strategy.

Secondly, parachuting international consultants – of unverified experience – in from outside of Africa does not enhance the development of national evaluation capacities across the continent. In the course of interviews with several African practitioners, including former AfrEA board members, the lack of a professional certification of evaluators’ competencies, and the subsequent lack of comparable qualifications on which to base the selection of evaluation trainers and other professionals employed in the design and delivery of ECD programmes, were also identified as factors contributing to ECD initiatives’ modest results.

Thirdly, engaging both the executive and legislative powers is critical to the success of a national ECD initiative. According to a recent CLEAR report on the supply and demand of evaluation in Africa, the executive power (e.g. the president's office in South Africa or the prime minister's office in Uganda) is the privileged interlocutor of ECD funders, as it is from this branch of the political and administrative system that most directives and policies on evaluation are created and disseminated. However, strengthening evaluation capacity within national legislative bodies (e.g. parliaments) is becoming an increasingly popular strategy, too. An illustration of this is the new strategy implemented by the Department of Performance Monitoring and Evaluation (DPME) in South Africa, aimed at building the understanding and awareness of evaluation within the parliamentary committee to which the Department reports – the Standing Committee on Appropriations – through workshops, international study tours and regular reporting. Fora convening members of parliament (MPs) from across Africa are already taking place (e.g. the one organised on the margins of the latest AfrEA conference on 03–04 March 2014) and new entities representing MPs’ voices are being established.

Fourthly, blended targeting is key to ECD effectiveness. 53% of respondents (n = 19) suggested that developing ECD programmes that target more than one person with different responsibilities (e.g. managerial and operational) within the same organisation, combined with online follow-up sessions (a sort of follow-up mentoring), is a particularly effective strategy. In particular, 42% of respondents (n = 15) stressed that, whilst workshops have privileged the enhancement of individual evaluation capacity, and larger initiatives aimed at enhancing evaluation capacity in parliaments and governments have aimed to strengthen the ECD-enabling environment, organisational-level ECD has long been neglected and more ought to be done to address this gap (Boesen 2007). As a result, gauging organisational readiness to learn about and practise evaluation was identified as key during the ECD planning phase: this consists of measuring the so-called intangibles (often related to capacity development categories), some of which are universal to all organisations (Stockdill, Baizerman & Compton 2002).

Fifthly, bad evaluation terms of reference (ToR) discourage ECD. According to 71% of the African practitioners commenting on the preliminary findings of this article (n = 10), the imposition of stringent conditions in evaluation ToR, with respect to both the composition of evaluation teams and the time allowed for the development of an evaluation proposal, is detrimental to the ECD cause in Africa. As stated by one respondent: ‘we once received a ToR asking to mobilise evaluation professionals with more than 15 years of evaluation experience in eastern Democratic Republic of Congo within two weeks from receipt. This does not really make business sense as very good African evaluators are fully booked, often six or more months in advance, and those who are at the start of their career might not have access to timely information on evaluation vacancies. There is a growing number of very qualified evaluators in Africa who no longer work for international donors anymore because of the frustration involved in working on evaluation in such professionally questionable conditions’. Likewise, the limited duration of in-country evaluations (often less than two weeks) was indicated as a reason for relatively weak ECD effectiveness. In the words of a different African respondent: ‘in a recent evaluation conducted in my country, we as a team could not even engage with some of the key programme stakeholders in the government as the limited time available for conducting the evaluation was not sufficient to get those same very officials to block a specific timeslot and venue to hold the interview’.

Question 6: What possible synergies could be put in place between evaluation capacity development funders and African institutions in the future to achieve more effective evaluation capacity development programming?

As some of the multilateral ECD programmes that attract a large share of international funding for ECD (e.g. EvalPartners and CLEAR) are currently undergoing an evaluation, 53% of the ECD funders (n = 19) stated that the modalities of their future ECD engagement in Africa will be informed by the corresponding findings. Meanwhile, ECD funders currently seem to espouse one of the three following perspectives on future ECD engagement in Africa.

Firstly, some ECD funders are interested in establishing synergies with other like-minded organisations and partners in Africa, such as AfrEA. Of all the ECD funders interviewed as part of this landscape study, the MasterCard Foundation seems to be one of the most inclined to consider an ECD partnership with AfrEA and other institutions. Likewise, the representative from the Finnish Ministry of Foreign Affairs confirmed their strong commitment to ECD and anticipated that the future strategy will certainly build on prior efforts. The African Capacity Building Foundation, too, plans to keep supporting AfrEA in the near future, and one of its officers suggested that the association's future work should focus on: (1) enhancing evaluation capacity in its member countries, (2) stimulating the demand for and use of M&E for evidence-based decision-making across the continent and (3) creating space for exchange amongst African evaluation professionals. The CLEAR Centre for Anglophone Africa, too, will keep partnering with the South African M&E Association and will continue to provide workshops free of charge on innovative methodologies, such as rapid impact evaluations and made-in-Africa evaluation, to a variety of African stakeholders in both the public and private sectors. DFID, too, will keep supporting ECD in Africa in the future and, in compliance with its evaluation policy (sections 4.1 and 4.2), will strive towards the strengthening of partnerships with regional and national evaluation institutions across the continent, such as recognised academic and political institutions, which are more effective at building awareness and transforming potential and latent demand within the public sector. Furthermore, the Bill and Melinda Gates Foundation will keep getting involved in ECD, but more on the supply side. Therefore, it will partner with other international ECD funders who are specifically interested in the development of networks in specific thematic areas (e.g. health, agriculture).

Secondly, some partners are interested in establishing synergies with other ECD funders and partners, but not with AfrEA, at least in the short term. The future investment in ECD made by two of the interviewed funders (mostly consisting of support to CLEAR to date) might shift depending on the findings of the current CLEAR mid-term review, but it also includes activities strengthening the link between their countries’ national evaluation societies and AfrEA.

Thirdly, some partners are currently not planning any specific ECD partnership to pursue in the near future. On the one hand, some justified their caution by mentioning that the organisational challenges experienced by AfrEA in the past had severely affected the association's performance. On the other hand, three other ECD funders felt that, should the DAC Evalnet coordinate an ECD initiative, it would be more legitimate for them to get involved than if their own evaluation unit were to start such an initiative independently.

In response to the three engagement scenarios described above, AfrEA's orientation is quite clear: the association certainly intends to become ‘the partner of choice’ for national governments and international partners alike and has a clear interest in playing a leadership role in the generation and dissemination of true African evaluation knowledge across the continent. According to some former AfrEA board members, desirable topics around which the association would be interested in developing future ECD partnerships in Africa, and which international partners might want to support, include the development of an evaluators’ and evaluation trainers’ accreditation system as well as the dissemination of evaluation capability assessments (e.g. the Evaluation Skills Matrix developed by the DPME within the Presidency of South Africa).

Discussion
The lack of a clear evaluation capacity development systemic vision in Africa and the implementation of erratic evaluation capacity development tactics rather than strategies

As attested by the findings of the interviews with international ECD funders in Africa as well as the review of their organisations’ websites, the current level of formal ECD recognition amongst international development partners with a vested interest in Africa still appears to be too low. Such a lack of systematic recognition of ECD, although less pronounced amongst the largest donors, is quite surprising considering the plethora of international agreements that are intended to enhance development effectiveness through the promotion of evaluation, country ownership and institutional strengthening. In an effort to better understand the reasons behind the paucity of ECD-friendly policies or coherent ECD strategies put in place by international development partners in Africa, four main factors were identified. Firstly, the lack of an adequate understanding of what ECD is all about and what it entails: despite their declared interest in the topic, only a few respondents seemed cognisant of the fact that ECD is a long-term and multi-stakeholder process, and most were unclear on how their respective organisations could support ECD in the future.

Secondly, the low level of appreciation for ECD (and related lessons learned), due in part to the lack of continued dialogue and joint learning on ECD issues between central evaluation units and management teams at headquarters on the one hand, and operational units and country offices on the other.

Thirdly, the unavailability of dedicated budgets for ECD as ECD has not been considered a programmatic area in and of itself but an add-on activity. The invisibility of ECD in programme budgets is all the more relevant as it has prevented many partners from (1) feeling accountable for ECD issues, (2) taking a formal stance on ECD and (3) assessing the efficiency and effectiveness of any activity aimed at enhancing evaluation capacity amongst their African partners.

Fourthly, a lack of staff with specific ECD responsibilities who could strategically conceptualise, manage and evaluate ECD strategies within funding organisations. The general lack of clear ECD strategies amongst international development partners has fostered the development of ad hoc tactics (rather than thought-through strategies) addressing a wide range of funders’ political and diplomatic interests.

From local evaluation capacity development to global evaluation capacity development: Opportunities and risks

International ECD funders seem to be currently oscillating between supporting short-term evaluation training initiatives – despite acknowledging the limitations thereof – and endorsing global flagship evaluation initiatives. The current effort to foster a transnational discourse on evaluation and harmonise intents amongst ECD funders is certainly commendable. However, it is critical to be vigilant so as to prevent or mitigate three possible risks associated with such global endeavours. Firstly, the standardisation of ECD approaches due to the mobilisation of vast financial and intellectual resources to only two or three large international initiatives. Secondly, the risk of low innovation in the ECD field due to the crowding out of smaller ECD organisations that, because of their limited funding, might see their voice lost on the global scene. Thirdly, a growing apathy towards ECD, characterised by an increased delegation of ECD conceptualisation responsibilities from development partners to multi-donor initiatives. This latter phenomenon, which could be referred to provocatively as ‘ECD outsourcing’ (Tarsilla 2014b), would risk discouraging the development of a checks-and-balances system (a sort of global ‘ECD peer review’ mechanism) and is likely to disenfranchise practitioners in all countries in their efforts to contribute original and innovative ideas to the ongoing ECD discourse.

Furthermore, ECD targeting has been quite erratic at the national level, with the exception of a few organisations (e.g. SDC and GIZ) whose ECD targeting strategy is centred not on the concepts of evaluation demand or supply but rather on field-building (Hay 2010). Depending on the specific national context in which they operate, both agencies aim to target poles within the broader national system that might vary over time based on their needs but that equally need to be strengthened in a parallel and progressive fashion. This type of ECD strategy, classified as organic ECD (Tarsilla 2014a), is capable of cutting across sectors and succeeds in fostering productive exchanges and close interactions amongst those who gravitate around both sides. In doing so, as stated by one respondent: ‘it is important to be opportunistic every time an ECD programme is being designed so as to build upon whatever existing programme is contributing to the generation of better evidence’. Such normalisation of the evaluation function (Tarsilla 2014a) is particularly desirable as it is expected to maximise development results by creating better synergies and avoiding duplication.

A shift from functional evaluation capacity development to transformative evaluation capacity development in Africa is needed

The tendency of international ECD initiatives in Africa to emphasise the principles of teaching (intended as imparting knowledge) and accountability – referred to earlier as ‘functional ECB’ (Tarsilla 2014a) – over those of learning and social transformation has gradually become the norm. This is also confirmed by the fact that the content of ECD initiatives implemented in Africa is developed outside of Africa. As the ECD Locus of Production (Tarsilla 2014b) matters, reparatory actions are needed. If no mitigation strategy is implemented, current ECD programmes will continue to perpetuate the donor-centric features of the current development system (Carman 2007; Fowler 1997; Hwang & Powell 2009) by injecting external knowledge into the circles of evaluation networks in Africa and discouraging the production of authentic African knowledge. In particular, what seems to be missing across many of the current ECD initiatives implemented in Africa is an in-depth knowledge of African approaches to ECD, as well as of strategies to modify and adapt to the African context the methods and tools used in other countries. To this end, the recent conversations and presentations on made-in-Africa evaluation approaches held during the AfrEA evaluation conference represent an important milestone in this unprecedented pan-African knowledge-building effort for evaluation.

All such design and operational deficiencies discourage the development of a more genuine form of ECD, classified as transformative ECD or T-ECD (Tarsilla 2014a). T-ECD sets itself apart from past experiences in that it does not exclusively cater to funders’ decision-making but is more genuinely geared towards the fulfilment of an organisation's internal information needs and social aspirations, consistent with staff interests and the organisational vision expressed in its strategic plans. Transformative ECD is longer in duration than past ECD efforts as it is aimed at both sustainability and empowerment (at the individual, organisational and systemic levels). Characterised by diffused championing and resulting from the harmonisation of efforts amongst ECD funders (both international and African), transformative ECD is not a merely technical endeavour aimed at the delivery of products (e.g. training initiatives facilitating the transfer of technical knowledge and skills). Transformative ECD is adaptive and goal oriented and therefore not fully consistent with the currently common practice of using linear planning (e.g. logic models) in international development. Whilst some critics might label this emerging approach as too costly or idealistic, the current effort to provide a global platform for exchange on ECD might enhance the conversation around it amongst the larger public and, therefore, make it more grounded and logistically feasible.

Why are there so few evaluations of evaluation capacity development initiatives funded by international partners in Africa?

As the planning and implementation of ECD have been so volatile, the dearth of ECD evaluations, although striking, does not come as a surprise, for four reasons. Firstly, capacity outcomes are hard to measure and are often considered intangible (several partners claimed that robust metrics to measure improved capacity do not exist) (Wing 2004). Secondly, country programmes funded by international development partners include projects that are quite diverse in terms of both target populations (supply and demand) and approaches (ideally context-specific), and their respective ECD activities are therefore harder to compare through the use of common evaluative tools (Taylor-Ritzler et al. 2013). Thirdly, there is an excessive reliance on assumptions and on the perception that ECD in and of itself is good and that, as such, it does not deserve to be questioned or scrutinised (see the following section on the need for a more robust theory of change on ECD). Fourthly, the low sense of accountability over ECD programmes’ effectiveness (Benjamin 2012), due to the relatively marginal recognition of ECD in international partners’ evaluation policies, organisational strategies, theories of change and budgets, as well as the delegation of ECD implementation responsibilities to larger global initiatives.

Despite the relatively hopeless current scenario, evaluating ECD is indeed feasible. For instance, despite ECD not being a simple construct but rather a combination of multiple constructs (Clinton 2014), there are some variables that could be measured to evaluate the effectiveness of ECD initiatives. These may include: (1) the ‘dosage’ of an intervention (measured by the amount of invested resources as well as the number of people, organisations and communities targeted through such initiatives), (2) other mediating variables, such as the degree of evaluation use towards programme adaptation, the level of stakeholder engagement, and the degree of stakeholders’ willingness and capacity to engage in future evaluations, and (3) process variables, such as organisational development, leadership and collaboration. Although the practice of evaluating ECD initiatives is quite scant to date, it is worth spending a few words on the designs that might be considered for such efforts in the future. As randomised controlled trials (RCTs) have become quite popular in Africa over the last decade, it is relevant to stress that applying such a design to an ECD initiative (none of this kind has been planned in Africa to date) does not appear to be feasible or useful, for two main reasons. Firstly, ECD is embedded within organisational systems and societal dynamics and is affected by a number of factors in continued evolution (type of actors, needs and interests, political environment) that are difficult to control and base future predictions on. Secondly, given that RCTs are normally conducted whilst an initiative is still underway (Manderville 2007), the associated costs would be difficult to justify, especially if the construct being measured is capacity and not the use made of it. On the contrary, the use of a case study design (Morra-Imas & Rist 2009; Yin 2009) is recommended, as this would enhance an in-depth understanding of actual ECD processes in well-defined contexts. For those lamenting the limited generalisability of findings generated through such a design, one could also argue that the conduct (and comparison of respective findings) of multiple ECD evaluations adopting a case study design would contribute to a richer appreciation of what works and what does not in ECD.
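Purely as an illustration of how the three categories of variables listed above might be operationalised, the sketch below organises them into a simple record structure that a case-study evaluation of an ECD initiative could use to compare cases. The field names and the 0–4 rating scale are hypothetical, not a validated instrument drawn from this study or the cited literature.

```python
from dataclasses import dataclass, field

@dataclass
class EcdDosage:
    """'Dosage' of the intervention: invested resources and reach."""
    invested_resources_usd: float
    people_targeted: int
    organisations_targeted: int
    communities_targeted: int

@dataclass
class EcdMediatingVariables:
    """Mediating variables, each rated 0-4 by case reviewers."""
    evaluation_use_for_programme_adaptation: int
    stakeholder_engagement: int
    willingness_for_future_evaluations: int

@dataclass
class EcdProcessVariables:
    """Process variables, each rated 0-4 by case reviewers."""
    organisational_development: int
    leadership: int
    collaboration: int

@dataclass
class EcdCaseStudy:
    """One case record; comparing several supports multi-case analysis."""
    country: str
    initiative: str
    dosage: EcdDosage
    mediating: EcdMediatingVariables
    process: EcdProcessVariables
    contextual_notes: list = field(default_factory=list)

# Hypothetical example record:
case = EcdCaseStudy(
    country="Uganda",
    initiative="Multi-donor ECD programme (illustrative)",
    dosage=EcdDosage(1_500_000, 120, 8, 3),
    mediating=EcdMediatingVariables(2, 3, 3),
    process=EcdProcessVariables(2, 3, 2),
)
```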

Future engagement opportunities with African actors and institutions towards more effective evaluation capacity development

With respect to the possible synergies that could be put in place between ECD funders and African institutions to achieve more effective ECD programming in the future, it is necessary that international partners interested in liaising with AfrEA take the association's engagement strategy, currently under development, into account when planning their future ECD interventions. It would also be worthwhile to consider what type of support to provide to regional evaluation associations (that is, associations coordinating evaluation work in several countries within the same region). As stated by the officer of a European ECD funder in the course of an interview conducted as part of this study:

‘Besides avoiding the risk of political interference (associated with the support of activities promoted by national evaluation association), funding an ECD programme with a regional scope would allow addressing the relative lack of unity and coordination witnessed by many ECD funders and in-country practitioners with respect to the multi-donor ECD initiatives currently underway.’

Conclusion

These final conclusions (organised in the same order as the different phases of ECD strategy development) aim to inform the ECD strategies of larger international ECD funders in Africa in the near future. The key assumption underlying this final section is that the feasibility and effectiveness of the ECD improvements suggested in this article will depend on the level of harmonisation and collaboration put in place between current international ECD funders and a host of African practitioners, decision-makers and funders.

Evaluation capacity development conceptualisation: A new evaluation capacity development definition and evaluation capacity development theory of change are needed

As this article calls for a radical shift in the conceptualisation and implementation of ECD initiatives, there is a need for a new definition of the terms ECB and ECD, which have often been used interchangeably in the past. This appears even more relevant in light of the distinction emphasised in the article between functional capacity building, often regarded as an externally driven organisational function responding to funders’ interests (e.g. ensuring the submission of grantees’ progress reports on a quarterly basis for accountability purposes), and transformative ECD. Far from being a purely semantic issue, a more intentional and thought-through use of words, which would otherwise remain unquestioned technical jargon, is likely to enhance the clarity of ECD outcomes and implementation modalities in the future. Similarly, the harmonisation of the most recent evaluation and ECD terminology amongst some of the most widely spoken languages across Africa (e.g. Arabic, English, French, Hausa, Portuguese, Spanish, Swahili and Yoruba) would be quite helpful.

The value added by a common ECD language would be quite limited, though, without the parallel introduction of a more robust ECD theory of change (ToC). What is particularly needed is an explicit, updated theory about how ECD is intended to produce outputs, outcomes and impacts, along with a critical description of the factors affecting or determining the success of any ECD endeavour. By clarifying the central processes or drivers by which change comes about for individuals, groups or communities, a ToC would contribute to making a number of ECD assumptions explicit, including the commonly held one that ‘any evaluation training’ is good anyway and that whatever could be done to promote the knowledge and conduct of evaluation (regardless of how thoroughly it has been conceptualised, planned and evaluated) is sound. Once the ECD language has been revised and a new, improved ToC developed, ECD funders might want to introduce them in an official ECD policy or strategy, with clear indications on how to operationalise it both at headquarters level and in overseas missions and offices.
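As a concrete, if hypothetical, illustration of what making such a ToC explicit could look like, the sketch below represents an ECD results chain in which each link carries the assumptions under which the next stage is expected to materialise, so that none of them remains implicit. The stages and assumptions shown are examples assembled for illustration, not a model proposed by the study.

```python
# Hypothetical explicit ECD theory of change: each link in the results
# chain records the assumptions that must hold for the next stage to
# materialise, making them available for scrutiny and evaluation.
ECD_THEORY_OF_CHANGE = [
    ("inputs: dedicated multi-year ECD funding and staff",
     "activities: training, mentoring, technical assistance",
     ["ECD is budgeted explicitly, not as an add-on activity"]),
    ("activities: training, mentoring, technical assistance",
     "outputs: individuals and organisations apply evaluation skills",
     ["participants have opportunities to practise after training",
      "content is adapted to the national context"]),
    ("outputs: individuals and organisations apply evaluation skills",
     "outcomes: evaluation is demanded and used in decision-making",
     ["executive and legislative bodies are both engaged",
      "evaluation is framed as learning, not only compliance"]),
    ("outcomes: evaluation is demanded and used in decision-making",
     "impact: more effective, country-owned policies and programmes",
     ["capacity is sustained beyond the funding period"]),
]

def print_theory_of_change(chain):
    """Walk the chain and surface every assumption for discussion."""
    for source, target, assumptions in chain:
        print(f"{source}\n  -> {target}")
        for assumption in assumptions:
            print(f"     assumes: {assumption}")

print_theory_of_change(ECD_THEORY_OF_CHANGE)
```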

Evaluation capacity development planning: Evaluation capacity development joint funding should be established at the national level and related goals ought to be defined in a more participatory fashion

ECD requires greater prominence in budgets, both for fully dedicated ECD programmes and for ECD processes mainstreamed into regular programming. To this end, an evaluation capacity development consolidated appeal process (ECD-CAP), a sort of basket fund for ECD, could be put in place at the national level. Whilst the management of an ECD-CAP might initially be entrusted to a board made up of funders’ representatives or staff for a period of five years, with some counterpart funding made available by the governments of ECD-CAP countries, the responsibility for such a mechanism could be gradually transferred to African institutions, with this ‘relay’ accelerating as African organisations and institutions make greater counterpart funding available.

Overall, regardless of the specific amount of ECD resources allocated by international development partners to African institutions, it is important that future ECD partnerships and collaborative efforts go beyond the funding of ad hoc activities, such as those implemented in conjunction with the biennial AfrEA conference. Some future ECD partnerships between international partners and African institutions will certainly feature financial support. However, it is more critical that such collaborative efforts focus on the creation of new knowledge-sharing platforms (e.g. Web-based) and innovative ECD implementation modalities by African universities and research and training institutions across the continent.

In pursuing this new ECD agenda, it is also critical to recognise that any ECD endeavour funded by international development partners carries a dual set of objectives: one linked to the ECD funders’ needs and interests and another (often overlooked) related to the aspirations and ambitions of the African organisations receiving ECD support. This reiterates the argument that more should be done to promote transformative evaluation across the continent. Moreover, differences in ECD goals and outcomes are observed not only across actors (e.g. ECD funders and partners receiving international ECD funding) but also within them (e.g. the board of a non-governmental organisation receiving international ECD funding might pursue a different ECD outcome than its in-house evaluation officer or implementation supervisor). It is therefore important to acknowledge the variety of such outcomes, both across and within actors. Such a tailored ECD strategy could be quite challenging to implement. However, ECD programmes will be more successful if international ECD funders prove sufficiently opportunistic in their design (e.g. building on case studies and terminology with which audiences are already familiar). For instance, given the risk of the RBM-isation of the evaluation function (Tarsilla 2014a), that is, the risk of RBM initiatives missing a ‘fuller and more exhaustive picture’ of programme effects, it is important to invest in a new and more effective ECD strategy promoting the link between RBM and evaluation. Consistent with this new strategy, evaluation ought to be seen as an ideal complement to RBM and not as a distinct approach based on diametrically opposite epistemological assumptions. For that to happen, a blended or ‘normalised’ approach will need to be embedded within the specific organisational settings in which the actors targeted by such ECD initiatives operate.

Evaluation capacity development implementation: A further contextualisation of evaluation capacity development strategies’ and programmes’ content is needed

Evaluation practices that are truly Africa-based and locally contextualised should be disseminated through ECD initiatives in the future. To this end, the inclusion of actual evaluation findings (along with information on the way they have been used) in ECD curricula and materials is recommended, as it would enhance the practicality of the topics discussed and, as a result, increase the effectiveness of the ECD initiative in question. As the provision of theoretically sound curricula has done very little for the ECD cause in Africa, a systematic effort should be made in the future to enhance the practicum opportunities associated with all types of ECD support. The increasing number of evaluation ToRs issued by large international development partners that call for the inclusion of local consultants in evaluation teams is encouraging in this sense, but not sufficient. Furthermore, in order to prevent formal compliance with this requirement from turning into a purely cosmetic ECD action, it will be important to ensure that local team members are assigned not only data collection tasks (as in the majority of field evaluations today) but also design and analytical responsibilities.

Evaluation of evaluation capacity development

Evaluating ECD processes (and not only the effects of a small number of training initiatives) and making the related findings public ought to become a systematic practice amongst international ECD funders in Africa. To this end, future ECD evaluations should build upon the lessons learned from the rare ECD evaluations conducted to date. Similarly, African universities, often ignored by international development partners’ funding in the past, should strengthen their offering of evaluation courses and research opportunities at all levels, also by introducing specific ECD workshops and evaluation field projects for their students.

Dissemination of evaluation capacity development-related information

Each partner should have a well-functioning ECD knowledge management system in place, both to monitor ECD processes and to disseminate the related highlights to the public on a more regular basis. In addition, the establishment of an inter-agency ECD knowledge management system is needed. Such a mechanism would help track the different ECD initiatives funded by the numerous funders across Africa and would enhance the dissemination of critical information (including who is being targeted, when, where and at what level), thereby promoting greater complementarity and lessening duplication in the planning, coordination and implementation of ECD across the continent. Similarly, a common repository of ECD-related information (including reviews of consultants’ work and a compilation of the individuals and entities trained and mentored as part of past ECD initiatives) should be established to foster a shared understanding of the ECD ecology of any given country amongst the international development partners working there.

Acknowledgements

A special note of appreciation goes to the Agricultural Learning and Impacts Network at Firetail for the research assistance and quality assurance provided during the earlier development of the report on which this article builds. A special appreciation also goes to the Bill and Melinda Gates Foundation for providing critical inputs throughout the conceptualisation and dissemination phases of this article. The ideas expressed in this article are the author’s alone and in no way reflect those of either organisation.

Competing interests

I declare that I have no financial or personal relationship(s) that may have inappropriately influenced me in writing this article.

References

African Evaluation Association, 2007, ‘Making evaluation our own: Strengthening the foundations for Africa-rooted and Africa led M&E: Summary of a special conference stream and recommendations to AfrEA’, in A. Abandoh-Sam et al. (eds.), Evaluate development, develop evaluation: A pathway to Africa's future, pp. 1–3, AfrEA, Niamey.

Bamberger, M., Rugh, J. & Mabry, L., 2012, Real world evaluation, Sage, Thousand Oaks, CA.

Bemelmans-Videc, M.L., Rist, R.C. & Vedung, E., 2003, Carrots, sticks, and sermons: Policy instruments and their evaluation, Transaction, New Brunswick, NJ.

Benjamin, L., 2012, ‘The potential of outcome measurement for strengthening nonprofits’ accountability to beneficiaries’, Nonprofit and Voluntary Sector Quarterly 42, 1224. http://dx.doi.org/10.1177/0899764012454684

Boesen, N., 2007, ‘Governance and accountability: How do the formal and the informal interplay and change?’, in J. Jütting, D. Drechsler, S. Bartsch & I. de Soysa (eds.), Informal institutions: How social norms help or hinder development, OECD, Paris, France.

Boyle, R. & Lemaire, D., 1999, Building effective evaluation capacity: Lessons from practice, Transaction Publishers, New Brunswick, NJ.

Buchanan, H., 2004, IPDET evaluation, Jua Consulting, Ottawa.

Carman, J.G., 2007, ‘Evaluation practice among community based organizations: Research into the reality’, American Journal of Evaluation 28(1), 60–75. http://dx.doi.org/10.1177/1098214006296245

Clinton, J., 2014, ‘The true impact of evaluation: Motivation for ECB’, American Journal of Evaluation 35(1), 120–127. http://dx.doi.org/10.1177/1098214013499602

Cousins, J.B., Elliott, C. & Gilbert, N., 2010, IPDET evaluation of program impact: Volume 1 main report, University of Ottawa, Ottawa, Canada.

Cousins, J.B., Elliott, C. & Gilbert, N., 2011, IPDET evaluation of program impact: Volume 2 case studies, University of Ottawa, Ottawa, Canada.

Cousins, J.B., Goh, S.C., Elliott, C.J. & Bourgeois, I., 2014, ‘Framing the capacity to do and use evaluation’, New Directions for Evaluation 141, 7–23. http://dx.doi.org/10.1002/ev.20076

Department for International Development (DFID), 2011, Multilateral Aid Review, viewed 23 April 2014, from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67583/multilateral_aid_review.pdf

Department for International Development (DFID), 2012, Evaluation policy, viewed 29 June 2014, from https://www.gov.uk/government/publications/dfid-evaluation-policy-2013

Fowler, A., 1997, Striking a balance: A guide to enhancing the effectiveness of non-governmental organizations in international development, Earthscan, London.

Goldman, I., 2014, In South Africa, using evaluation to improve government effectiveness, viewed 01 September 2014, from http://ieg.worldbank.org/blog/in-south-africa-using-evaluation-improve-government-effectiveness

Greene, J.C. & Caracelli, V.J., 1997, ‘Defining and describing the paradigm issue in mixed-method evaluation’, New Directions for Evaluation 74, 5–17. http://dx.doi.org/10.1002/ev.1068

Hay, K., 2010, ‘Evaluation field building in south Asia: Reflections, anecdotes, and questions’, American Journal of Evaluation 31, 222–231. http://dx.doi.org/10.1177/1098214010366175

Hwang, H. & Powell, W.W., 2009, ‘The rationalization of charity: The influences of professionalism in the non-profit sector’, Administrative Science Quarterly 54, 268–298. http://dx.doi.org/10.2189/asqu.2009.54.2.268

Labin, S., Duffy, J., Meyers, D.C., Wandersman, A. & Lesesne, C.A., 2012, ‘A research synthesis of the evaluation capacity building literature’, American Journal of Evaluation 33, 307–338. http://dx.doi.org/10.1177/1098214011434608

Lopes, C. & Theisohn, T., 2003, Ownership, leadership and transformation: Can we do better for capacity development?, Earthscan/James & James, New York, NY.

Mackay, K., 2007, How to build M&E systems to better support government, World Bank, Washington, DC. http://dx.doi.org/10.1596/978-0-8213-7191-6

Manderville, J., 2007, ‘Public policy grant making: Building organizational capacity among nonprofit grantees’, Nonprofit and Voluntary Sector Quarterly 36, 282. http://dx.doi.org/10.1177/0899764006297668

Mertens, D. & Tarsilla, M., 2014, ‘Mixed methods in evaluation’ in S. Hesse-Biber (ed.), Mixed methods handbook, pp. 120–142, Oxford University Press, Oxford.

Minzner, A., Klerman, J., Markovitz, C. & Fink, B., 2013, ‘The impact of capacity-building programs on nonprofits: A random assignment evaluation’, Nonprofit and Voluntary Sector Quarterly 2013, 1–23.

Morra-Imas, I. & Rist, R., 2009, The road to results: Designing and conducting effective development evaluations, World Bank, Washington, DC.

National Development and Planning Commission, 2011, Resources spent on M&E and statistics final report, NDPC, Accra, Ghana.

Organization for Economic Co-operation and Development (OECD), 2005, Paris declaration on aid effectiveness and the Accra agenda for action, OECD, Paris, France.

OECD, 2006, The challenge of capacity development: Working towards good practice, OECD, Paris, France.

Parliamentarians Forum for Development Evaluation, 2014, Yaounde declaration by parliamentarians, viewed 14 June 2014, from http://www.pfde.net/index.php/news/34-yaounde-declaration-by-parliamentarians

Picciotto, R., 2009a, The country led evaluation (CLE) paradox, presentation delivered at the IDEAS Global Assembly, Cairo, Egypt, 18–20 March, viewed 17 April 2014, from http://www.mymande.org/content/country-led-evaluation-cle-paradox

Picciotto, R., 2009b, ‘Evaluating development: Is the country the right unit of account?’, in M. Segone (ed.), Country-led monitoring and evaluation systems, pp. 32–55, UNICEF, New York, NY.

Picciotto, R., 2012, Country-led M&E systems - Robert Picciotto, Part 2, video, viewed 20 April 2012, from http://www.youtube.com/watch?v=UfNakPTODBs

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), Art. #25, 9 pages. http://dx.doi.org/10.4102/aej.v1i1.25

Stake, R., 1995, The art of case study research, Sage, Thousand Oaks, CA.

Stockdill, S.H., Baizerman, M. & Compton, D.W., 2002, ‘New directions for ECB’, New Directions for Evaluation 93, 109–119.

Suarez-Balcazar, Y. & Taylor-Ritzler, T., 2013, ‘Moving from science to practice in ECB: ECB Forum’, American Journal of Evaluation 35(1), 95–99. http://dx.doi.org/10.1177/1098214013499440

Tarsilla, M., 2014a, From RBM-ization to normalization: A field practitioner's reflection on ECD current trends, viewed 28 August 2014, from http://www.oecd.org/dac/evaluation/ecdnewsletter.htm-Rbmization

Tarsilla, M., 2014b, ‘Evaluation capacity development in Africa: Current landscape, lessons learned and the way forward’, keynote speech presented at the African Evaluation Association Conference, Yaoundé, Cameroon, 03–07 March 2014.

Taylor-Ritzler, T., Suarez-Balcazar, Y., Garcia-Iriarte, E., Henry, D. & Balcazar, T., 2013, ‘Understanding and measuring evaluation capacity: A model and instrument validation study’, American Journal of Evaluation 34, 190–207. http://dx.doi.org/10.1177/1098214012471421

Toulemonde, J., 1999, ‘Incentives, constraints and culture-building as instruments for the development of evaluation demand’, in R. Boyle & D. Lemaire (eds.), Building effective evaluation capacity: Lessons from practice, pp. 153–176, Transaction Publishers, New Brunswick, NJ.

Vedung, E., 2003, ‘Policy instruments: Typologies and theories’, in M.L. Bemelmans-Videc, R.C. Rist & E. Vedung (eds.), Carrots, sticks, and sermons: Policy instruments and their evaluation, pp. 21–59, Transaction, New Brunswick, NJ.

Wiesner, E., 2011, ‘The evaluation of macroeconomic institutional arrangements in Latin America’, in R.C. Rist, M. Boily & F. Martin (eds.), Influencing change: Evaluation and capacity building driving good practice in development and evaluation, pp. 23–40, World Bank, Washington, DC.

Wing, K.T., 2004, ‘Assessing the effectiveness of capacity building initiatives: Seven issues for the field’, Nonprofit and Voluntary Sector Quarterly 33, 153–160. http://dx.doi.org/10.1177/0899764003261518

Yin, R.K., 2009, Case study research: Designs and methods, Sage Publications, Thousand Oaks, CA.

Appendix I
Organisations included in the study

African Evaluation Association (AfrEA)

Africa Capacity Building Foundation (ACBF)

Japan International Cooperation Agency (JICA)

African Development Bank (AfDB)

Department for International Development (DFID)

Centre for Learning on Evaluation and Results in Anglophone Africa (CLEAR)

Development Assistance Committee at the Organization for Economic Cooperation and Development (OECD/DAC)

Finnish Ministry of Foreign Affairs

French Development Agency

German Development Agency (GIZ)

Irish Ministry of Foreign Affairs

Mastercard Foundation (MCF)

Rockefeller Foundation

Swedish International Development Agency (SIDA)

Swiss Agency for Development and Cooperation (SDC)

United Nations Development Programme (UNDP)

United Nations Food and Agriculture Organization (FAO)

United Nations Evaluation Group (UNEG)

USAID Office of Learning Evaluation and Research (PPL/LER)

World Bank (WB)

Footnotes

1. Evaluation capacity building (ECB) and evaluation capacity development (ECD) are often used interchangeably, but a growing number of practitioners use them differently. For instance, Tarsilla (2014a; 2014b) defines ECB as the combination of ad hoc activities (e.g. evaluation training, mentoring and coaching) implemented in one specific and often narrow setting. Furthermore, Tarsilla qualifies ECD as a broader and longer-term process aimed at increasing not only individual knowledge, skills and attitudes but also organisations’ capabilities and system readiness.

2. A number of relevant articles (n = 54) published on ECB and ECD since 2010 were identified through an electronic search for the terms ECB, ECD, ‘ECB evaluation’ and ‘ECD evaluation’ in the following: (1) three peer-reviewed evaluation journals (the American Journal of Evaluation, New Directions for Evaluation and the Canadian Journal of Program Evaluation) and (2) one database containing a large variety of social science journals (the Social Science Citation Index).

3. For more details on three of such initiatives, see http://www.theclearinitiative.org, http://www.mymande.org/evalpartners and http://ipdet.org.

4. OECD's Development Assistance Committee's (DAC) good practice paper on capacity development from 2006 thus stressed that ‘good understanding of context is fundamental’ (OECD 2006).

5. Whilst the use of a paradigm dating back to the 1980s (heavily influenced by the writings of political economists and European policymakers) might appear obsolete these days, qualifying what is meant by evaluation demand and supply might facilitate future discussions on ECD. Put simply, demand is the request to use evaluation evidence for decision-making within a political or bureaucratic system (Bemelmans-Videc, Rist & Vedung 2003; Boyle & Lemaire 1999; Lopes & Theisohn 2003; Mackay 2007; Toulemonde 1999; Vedung 2003; Wiesner 2011). Supply is the provision of evaluation services by professional practitioners (Porter & Goldman 2013).

6. In reality, especially in countries where the demand for evaluation is low, the monitoring function takes over and ‘masquerades’ as evaluation (Picciotto 2009a; 2009b; 2012).

7. The evaluation (approximate cost: $420 000) pursued the following three objectives: (1) learning, to improve the rationale, design, management, implementation and governance of the CLEAR Global Initiative; (2) accountability to current funders for the funds invested in CLEAR; and (3) contributing knowledge, as a public good, on approaches to strengthening capacity in developing countries and to designing and managing global initiatives.

8. As stated in a recent study on the state of the demand for evaluation in Africa: ‘M&E Parliaments are locations of latent demand for evaluation, where there is space for contestation around evidence’ (Porter & Goldman 2013:9).

9. The DPME has also initiated a training programme aimed at members of parliament, as well as parliamentary researchers, so that they are able to use the currently available M&E evidence. For more information on this initiative, see Goldman (2014).

10. Parliamentarians from seven countries (Cameroon, Ethiopia, Ghana, Kenya, Republic of Tanzania, Togo and Uganda) signed the Yaoundé Declaration of African Parliamentarians on Evaluation and committed to the creation of the Network of African Parliamentarians for Development Evaluation. For more details, see Parliamentarians Forum for Development Evaluation (2014).

11. This is the case of Makerere University in Uganda, supported by GIZ, and of the Tegemeo Institute of Agricultural Policy and Development in Kenya and the Institute of Statistical, Social and Economic Research in Ghana, both supported by the Gates Foundation in 2013 and 2014 (grant value assigned to the two institutions: approximately $3 million).

12. Interestingly, the low salaries paid to consultants who conduct evaluations in their own countries (compared to those of their international counterparts on the same team) discourage the proliferation of context-savvy evaluations. In reality, a growing number of African evaluation practitioners seek evaluation assignments outside their countries of origin, where they frequently earn much more than at home. As a result, evaluations conducted in Africa miss the opportunity to benefit from the inputs of those in-country evaluators who possess the best understanding of local dynamics and contexts.

13. In response to such beliefs, an increasing number of researchers are calling for closer scrutiny of the impact that ECD initiatives have on organisations’ capacity to use evaluation findings for programmatic improvements (Labin et al. 2012). Likewise, the idea is spreading that ECB and ECD should be turned into more scientific endeavours and that validated instruments should be developed and used to measure ECD (Suarez-Balcazar & Taylor-Ritzler 2013).

14. Interestingly, only one RCT of a capacity-building programme (aimed at nonprofit organisations) has been conducted to date (Minzner et al. 2013).

15. As per the draft of the AfrEA constitution (December 2013), the future objectives of the association include the following: (1) facilitating capacity building, networking and sharing of evaluation theories, techniques and tools amongst evaluators, policy makers, researchers and development specialists, (2) promoting Africa-rooted and Africa-led evaluation through sharing African evaluation perspectives (e.g. the Yahoo group List-Serve), (3) encouraging the development and documentation of high-quality evaluation practice and theory (e.g. through the African Evaluation Journal) and (4) supporting the establishment and growth of the national evaluation associations and special evaluation interest groups.

16. Ghana, Kenya and Senegal are a good illustration of this. Despite the presence of evaluation capacity in all three countries, none of them had a national evaluation system in place. In Ghana, evaluation accounted for less than 3% of the overall spending on M&E in 2010 and 2011 (National Development and Planning Commission 2011:4). In Kenya, 36 randomised control trials were carried out by an international research network, the Jameel Poverty Action Lab. In Senegal, evaluations are undertaken in alignment with donor project cycles (Porter & Goldman 2013).

17. The consolidated appeal process is a popular tool used in the field of humanitarian assistance to coordinate international and local partners’ action in response to crises. Through a participatory mechanism involving UN agencies, non-governmental organisations and sectoral clusters, an appeal for funding (accompanied by a thorough strategic plan agreed upon by the same actors involved in the appeal’s formulation) is submitted to a variety of institutional partners.

18. The duration of this ‘transitory’ mechanism could be five years (arguably a long enough period to raise evaluation awareness amongst a critical mass of individuals who could then mobilise their own resources for more genuine ownership of in-country evaluation processes).


 
