About the Author(s)


Mark Abrahams
Southern Hemisphere Consulting and Development Services, Cape Town, South Africa

Florence Etta
TY Danjuma Foundation, Abuja, Nigeria

Kambidima Wotela
Monitoring and Evaluation, Wits School of Governance, South Africa

Citation


Abrahams, M., Etta, F., & Wotela, K., 2017, ‘Editorial’, African Evaluation Journal 5(1), a242. https://doi.org/10.4102/aej.v5i1.242

Note: This edition was sponsored, supported and co-developed by the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR AA). This partnership allowed for a focus on evaluation capacity development.

Editorial

Mark Abrahams, Florence Etta, Kambidima Wotela

Copyright: © 2017. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The world can be described as occupying a developmental space. On one side are the arrangements that make development possible (leadership and governance); on the other are the technical ingredients (development interventions). Typically, the former should demand monitoring and evaluation (M&E) and the latter should supply this function. The Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR AA) aims to enhance the use of evaluation on both sides, demand and supply. Demand is strengthened by reinforcing the need for evidence-based decision-making, accountability, transparency and oversight; supply implies deepening and widening M&E systems. Individuals on both ends need relevant M&E skills: data and information collection, processing, analysis and reporting on the supply side, and integration of findings into planning, implementation and management on the demand side. On the demand side, CLEAR AA is providing and strengthening skills in the use of evaluation in the legislature and civil society. On the supply side, it is enhancing the supply of evaluation practitioners by ensuring that they are equipped with appropriate, high-quality evaluation procedures and methods. To accomplish this, CLEAR AA intervenes at three levels. Firstly, it influences individual behaviour by providing adequate evaluation skills and resources. Secondly, it encourages organisations in key countries that can contribute to improved evidence-based decision-making to adopt a culture of learning and leadership in the use of evaluation. Thirdly, at a systems level, it works to create an enabling policy environment that supports individuals and organisations in applying good M&E practice.

Human capacity is at the heart of performance, whether for individuals, institutions and organisations, countries or indeed continents. All human and even non-human (read machine) endeavours depend on capacity. Africa's capacity for growth and development has long been seen as weak and in need of support. Evaluation capacity development (ECD) is critical for the success of development anywhere in the world, especially since the Paris Declaration of 2005 enthroned development effectiveness and accountability as driving principles for development action. ECD is imperative for achieving these two principles, particularly in Africa, which has remained a candidate for development action for centuries. The launch of this special edition of the African Evaluation Journal is significant for many reasons. It provides full-length space for discussion of the problems surrounding ECD and their potential solutions, which can serve as a basis for debate, engagement and the development of new and improved approaches and strategies for ongoing ECD.

In this edition, Wotela outlines three tiers of M&E. These are development interventions and public policy (top tier); M&E concepts, terminologies and logic (middle tier); and data collection and storage, data processing and analysis, reporting and some aspects of integrating the findings into planning, implementation and management (bottom tier). He contends that participants in M&E training programmes do not internalise what they have learned and hardly ever put their newly acquired knowledge into practice. The article discusses an approach to developing an M&E curriculum that institutionalises M&E within the implementation and management of development interventions. Crawley introduces a six-sphere framework (SSF) for assessing M&E systems. He argues that successful ECD demands sound knowledge and understanding of the opportunities and constraints in establishing and sustaining an M&E system. Diagnostics are one of the tools that ECD agents can use to better understand the nature of the ECD environment. Conventional diagnostics have typically focused on issues of technical capacity and the 'bridging of the gap' between evaluation supply and demand; in doing so, they risk overlooking the more subtle organisational and environmental factors that lie outside the conventional diagnostic lens. The author has developed a modified diagnostic tool that extends the scope of conventional analysis. The article outlines how the SSF can be used to extend such diagnostics to include the political environment, trust and collaboration between key stakeholders, and the principles and values that underpin the whole system.

Wao et al. present the findings of an exploratory study on the use of short course training to strengthen capacity for M&E. The study evaluated the effect of short course training on professionals' knowledge and skills in mixed methods research, systematic review and meta-analysis, and the general principles of M&E. It provides preliminary evidence of the potential of short course training as an approach to strengthening M&E capacity in less-developed countries such as Kenya, and underscores the importance of participants' self-stated objectives as an element to consider when enhancing the knowledge, attitudes and skills needed for acceptable capacity building in M&E.

Jansen van Rensburg and Blaser-Mapitsa introduce the element of gender responsiveness in the national M&E systems of Benin, South Africa and Uganda. The article shares reflections from a gender diagnostic study to enable more appropriate capacity building for gender responsiveness in national M&E systems. The study found that the gender responsiveness of the three national M&E systems was unequal. The authors argue that a stronger understanding of the linkages between M&E and gender is an important starting place for bringing them together holistically. Morkel and Ramasobana share the findings of a review of M&E capacity development in Africa. The study was primarily a desktop review of the existing literature, corroborated by a survey of a few senior representatives of organisations responsible for capacity building across the African continent. The review found that there remains little empirical evidence of whether evaluation capacity building (ECB) processes, activities and outcomes are ultimately effective, and very little empirical evidence to help interpret how change happens and how this may shape ECB efforts. Training is acknowledged as only one element of ECB, and a multi-pronged approach to ECB is needed.

Blaser-Mapitsa and Korth also focus on diagnostics but introduce the element of complexity. The article looks at what divergent purposes of M&E mean for how M&E systems are assessed and how context-appropriate diagnostic studies can be designed. The mixed methods approach to diagnostics proposed in this article contributes to the call in the 'Made in Africa' debate for more contextualised methods and tools for the practice and assessment of M&E. The article proposes the development of a synthetic tool that covers M&E technical components and capacity on the one hand, and an analysis of how these are embedded in a political and organisational context on the other. Bless, Tsotsotso and Gebremichael describe the findings of their study using the SSF diagnostic tool discussed earlier. The article locates the SSF within the current ECD literature and argues that existing evaluation capacity assessment tools are inadequate for understanding the pertinent issues affecting the use of evidence in the transport sector in South Africa. The framework is therefore recommended as an innovative tool to help evaluation practitioners and scholars better understand evaluation capacity constraints within a broader context involving logistical, technical, contextual, social and political dimensions.

Kabuye and Basheka examine the relationship between institutional design (procedural rules, evaluation processes and institutional capacity) and the utilisation of evaluation results at Kyambogo University in Uganda. They found that procedural rules, evaluation processes and evaluation capacity had a positive and statistically significant effect on the utilisation of evaluation results, meaning that the dimensions of institutional design were important predictors of the utilisation of evaluation results by a public sector agency. The issue of complexity is revisited by Ndhlovu, Smith and Narsoo, whose focus is a case study of the City of Johannesburg in South Africa. A five-dimension complexity model was used as an organising framework for effectively evaluating the City's M&E capacity. It allowed consideration of a wider set of interventions for managing organisational change in rapidly shifting urban systems such as the City of Johannesburg. The major findings of the study revealed weaknesses, notably the weak integration of M&E practices in planning, budgeting, service delivery and policy development oversight.

Jobin and Lawal outline the processes involved in establishing a national Voluntary Organisation for Professional Evaluators (VOPE) in Nigeria. Applying Game Theory and New Institutional Economics to break down existing barriers allowed the 'rules of the game' to be reshaped. As a result, all leaders came together under an umbrella organisation to celebrate the evaluation year in 2015 and, under the Abuja Declaration on Evaluation, committed to registering and establishing an association with an elected board, a written constitution and election bylaws.

This edition spans investigations in many countries across Africa. It highlights challenges related to diagnostic tools, complexity in evaluation and systems issues as they relate to institutions such as universities and public entities. ECD is discussed in relation to appropriate curricula, the need to infuse a focus on gender, and the establishment of VOPEs that can promote ECD in African contexts.


