About the Author(s)


Hlali K. Kgaphola
National Department of Social Development (DSD), Pretoria, South Africa

Christel Jacob
MERL, Pact South Africa, Pretoria, South Africa

Citation


Kgaphola, H.K. & Jacob, C., 2020, ‘Monitoring and evaluation lessons from the design and implementation evaluation of the ‘You Only Live Once’ social behaviour change programme for adolescents: A partnership between the United States Agency for International Development, Department of Social Development, South African National AIDS Council, Pact SA and Mott MacDonald’, African Evaluation Journal 8(1), a468. https://doi.org/10.4102/aej.v8i1.468

Note: Special Collection: SAMEA 7th Biennial Conference 2019.

Original Research

Monitoring and evaluation lessons from the design and implementation evaluation of the ‘You Only Live Once’ social behaviour change programme for adolescents: A partnership between the United States Agency for International Development, Department of Social Development, South African National AIDS Council, Pact SA and Mott MacDonald

Hlali K. Kgaphola, Christel Jacob

Received: 18 Feb. 2020; Accepted: 29 June 2020; Published: 23 Oct. 2020

Copyright: © 2020. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Conducting evaluations in South Africa has become a common government practice because of the rise in demand for evidence-based policymaking. However, evaluation is often seen as an exercise to be undertaken at the end of a programme – summative – instead of playing a distinct role at all stages of the programme cycle – formative and process evaluation. Consequently, programmes are often designed without the help of monitoring and evaluation (M&E) specialists to ensure robust and testable theories of change (TOCs) and implementation modalities, or monitoring systems that assess performance to enable adaptive management.

Objectives: This article presents findings from a case study regarding what the public sector can learn from formative evaluation to improve public sector programmes.

Method: The case study focuses on implementing and utilising the results of a formative evaluation of the ‘You Only Live Once’ (YOLO) programme to highlight frequently experienced limitations and potential solutions to utilise M&E as a form of effective programme management in the public service. It is aimed at the public sector to provide evidence that other forms of evaluation and monitoring systems are critical to enable effective public programming.

Results: Key lessons learnt include the significance of developing a clear and comprehensive M&E system at the programme planning and design stage, embedding the culture of M&E in programme implementation, evaluating potential modalities of implementation rather than simply assuming modality robustness, and capacitating implementing agencies to internalise and implement M&E requirements.

Conclusion: These lessons present the critical role of formative evaluation in ensuring that big-budget public sector programmes are designed and implemented effectively.

Keywords: Monitoring and evaluation; Design and implementation evaluation; Programme lifecycle; YOLO; Public sector programme evaluation.

Introduction

The conduct of evaluations in South Africa, as elsewhere, has become a common government practice as a result of the rise in demand for evidence-based policymaking (Amisi, Marais & Cloete 2018). In 2010, the Department of Planning, Monitoring and Evaluation (DPME) was established to ensure central coordination of monitoring and evaluation (M&E) in the South African government (SAG). Subsequently, the National Evaluation Policy Framework (NEPF) was adopted in 2011. This framework adopts a utilisation-focused approach, which aims to ensure that evaluations are designed and used for programme improvements and knowledge creation, amongst other things. The framework further emphasises the need to conduct evaluations throughout the intervention lifecycle, including confirming the robustness of the intervention design (‘design’ or ‘formative’ evaluation), and to assess progress and how implementation can be improved (‘implementation’ or ‘process’ evaluation), as opposed to focusing only on the assessment of outcomes and impacts (‘impact’ or ‘summative’ evaluations).

However, evaluation is often seen as an exercise to be undertaken at the end of a programme instead of playing a distinct role at all stages (UNODC 2020). As a result, M&E officials are not as involved in the programme lifecycle as they should be (see Figure 1 for the programme cycle) for various reasons, such as the lack of a culture of coordination, the public sector’s focus on activities rather than outcomes and existing legal frameworks that favour a siloed approach to public sector programmes (Presidency [South Africa] 2012, cited in Abrahams 2015).

FIGURE 1: Evaluation in the programme lifecycle.

According to the South African DPME, government programmes are often designed and implemented without adequate involvement of M&E officials to assist with the conduct of a situation analysis, a needs assessment or the design and implementation of an M&E framework, to name but a few (DPME 2013, 2014c). This results in programmes that are often based on faulty logic, which limits the extent of the outcomes and impacts that can be achieved.

All of these challenges point to the need for greater use of formative evaluation to ensure that programmes are based on evidence, and for M&E officials to be involved at the design stage of any government programme, not merely at the end. Monitoring and evaluation officers are individuals who undertake functions such as developing a departmental monitoring framework, collecting and analysing monitoring data, reporting against predetermined objectives in the annual performance plan (APP) and conducting evaluations (DPME 2012). Their inclusion in formative design processes is critical to ensure that programmes are designed based on evidence and strong logic from the outset.

The main contribution of this article is to take the discussion regarding formative and process evaluation from a theoretical recommendation to a practical demonstration of how such evaluations can contribute to improved design and implementation of public sector programmes, which is integral to government effectiveness and accountability.

The article is based on a utilisation-focused reflection on the process, implementation and results of the ‘You Only Live Once’ (YOLO) programme evaluation. The YOLO programme is a government-implemented programme within the Department of Social Development (DSD). This article addresses the following question: what can the public sector learn from design and implementation evaluation, such as that of the YOLO programme, to improve public sector programmes? It also answers the following sub-questions: ‘what were the effects of M&E limitations on YOLO that the evaluation was able to correct?’, ‘what can lessons from YOLO contribute to the broader M&E profession in the public sector?’ and ‘how can the findings of this article apply to other public sector topics?’ The evaluation process for YOLO highlights the importance of involving M&E specialists throughout the programme lifecycle by describing the limitations of programmes that do not benefit from such evaluations and how findings can transform a public sector programme and significantly increase its impact.

This article proceeds as follows: it begins by situating the YOLO programme in South Africa’s response to the human immunodeficiency virus (HIV) and acquired immune deficiency syndrome (AIDS) epidemic, to explain the need for formative evaluation to inform accountable public policy. The authors provide definitions and a literature overview of formative evaluation to give background on this oft-used approach, as well as on the use of formative evaluation in South Africa’s public policy space. The authors then describe the methodology used in this study to generate lessons and findings regarding formative evaluation for public policy, using the YOLO programme evaluation experience as a case study. The YOLO programme evaluation and the larger context in which it was implemented are then briefly described, from which the authors draw on their own experiences, observations and assessments to derive critical lessons regarding formative evaluation for public policy. The concluding section connects lessons from this study to the larger policy space to contribute to enhanced programme design and implementation.

Background

South Africa’s human immunodeficiency virus and acquired immune deficiency syndrome epidemic and the ‘You Only Live Once’ programme response

South Africa has the largest HIV epidemic in the world, with an estimated 7.2 million people living with HIV in 2017 (UNAIDS 2018). In 2017 alone, there were 270 000 new HIV infections and 110 000 South Africans died from AIDS-related illnesses (UNAIDS 2018). More than 2 million children have been orphaned by HIV and AIDS in South Africa (United Nations Children’s Fund [UNICEF] 2016:3). Furthermore, 18% of all children in South Africa are estimated to have experienced the loss of one or both parents (UNICEF 2016:3). The situation is therefore dire, and thus the stakes for programmes that address the epidemic are high for the SAG.

As a response, DSD developed a youth social and behaviour change (SBC) programme (The Gold Model) in 2015, focused on orphans and vulnerable children, youth and adolescents (OCY&A) aged 15–24 years. In 2016, the Government Capacity Building and Support (GCBS) programme supported the revision of this programme, resulting in the YOLO programme. The YOLO programme was consequently implemented in 2017 and 2018 through two modalities: (1) DSD through the South African National AIDS Council (SANAC) and (2) through GCBS. The programme runs over 12 sessions and is focused on building the resilience, knowledge, skills and values of young people to enable them to withstand pressures that lead to risk-taking behaviours which result in HIV infection and teenage pregnancies (DSD 2017). A total of 107 040 children and youth were enrolled across both modalities of implementation.

Given the critical importance of programmes, such as YOLO, to address South Africa’s HIV and AIDS epidemic as well as the substantial investments made by the SAG and other funders, formative evaluation is necessary to ensure not only accountability but also that the programme is designed from the outset to achieve the desired outcomes.

Formative evaluation

Formative evaluation is a blanket term for an evaluation whose findings help to shape programme design or implementation. For example, ‘design evaluations’ aim to analyse the theory of change (TOC), inner logic and consistency of a programme to see whether the TOC appears to be working (DPME 2014a). Formative evaluation was first widely used in the education sector. Commonly contrasted with summative evaluation in education, which assesses the outcomes of learning in the short, medium or long term, formative evaluation identifies a student’s or learner’s needs in order to plan how to better meet those needs (Burns 2008). A crucial feature of formative evaluation is that it is targeted at facilitating programme design or implementation improvements (Bennett 2011; Nieveen & Folmer 2013). This evaluation focuses on uncovering the shortcomings of a programme or learning plan during its development phase, with the purpose of generating suggestions for improving it (Nieveen & Folmer 2013).

The use of formative evaluation has been found to be effective for programme improvement. The Organisation for Economic Co-operation and Development (OECD 2008) found that the consistent use of formative assessment throughout the education system can assist stakeholders in addressing the barriers to its wider practice in classrooms. Formative evaluation has also been found effective outside of education, such as in the health sector: a 2006 study of formative evaluation in healthcare implementation found that the method can save time and frustration by highlighting factors that impede the ability of clinicians to implement best practices, and can identify at an early stage whether desired outcomes are being achieved so that implementation strategies can be refined as needed (Stetler et al. 2006).

Formative evaluation is not a one-off activity with a single set of recommendations; undergoing the evaluation itself enables continuous improvement and learning in two ways. Firstly, by conceptualising and designing the evaluation terms of reference as well as commissioning and managing the evaluation collectively, key stakeholders (e.g. programme managers, M&E specialists and sector specialists) are involved, thereby building buy-in for evaluation results. Secondly, evaluation enables continuous improvement when stakeholders utilise evaluation findings and recommendations and incorporate lessons learnt from past evaluations into new or enhanced strategies and programmes (Goldman et al. 2012; Patton 2012; UNODC 2020).

Formative evaluation in South Africa

There has been an increased demand amongst South African policymakers for evidence on which to base programmatic and policy-relevant decisions (Porter & Goldman 2013). The National Evaluation Policy Framework (NEPF 2011) promotes the conduct of quality evaluations to improve programme effectiveness and impact (DPME 2014a, 2014b). Its strategy emphasises learning to build a culture of evaluation in public sector departments and to counter M&E resistance and ‘malicious compliance’ (Goldman et al. 2012).

The NEPF describes six different types of evaluations, linked to the traditional results chain (see Figure 2) (DPME 2011).

FIGURE 2: Types of evaluations.

As noted previously, SAG agencies have historically focused on impact evaluations and have ignored design and implementation evaluations (also known as ‘formative’ and ‘process’ evaluations). Thus, whilst there is a strong policy framework for evaluation in South Africa, most evaluation findings, resulting as they do from summative rather than formative evaluations, are not used to enhance M&E, to the detriment of public sector programmes.

For this reason, this article reflects on the YOLO design and implementation evaluation to demonstrate to the public sector the considerable benefits of undertaking design and implementation evaluations and involving M&E specialists from the very beginning, rather than at the end. The authors of this article participated in the YOLO evaluation steering and technical committees, and reflect on the lessons that may be used to contribute to M&E learning within the public sector. Whilst the findings of this article are not unique to this sector, the primary intended audience comprises M&E practitioners in the public sector, to grow the body of applied evidence within our field.

Research method and design

This article considers the process of undergoing an evaluation of the YOLO programme to draw conclusions regarding the applicability of formative evaluation to public sector programmes by using a utilisation-focused approach.

Utilisation-focused reflection using the ‘You Only Live Once’ design and implementation evaluation as a case study

The utilisation-focused evaluation (U-FE) framework, originally articulated by evaluator Michael Quinn Patton, is itself rooted in the goals of formative evaluation. According to Patton (2010), U-FE is a decision-making framework for enhancing the utility and actual use of evaluations. It does not prescribe a specific method or design but is rather a guiding approach to ensure that the primary intended uses of the evaluation are based on the needs of the primary intended users. It can include a variety of evaluation methods within an overall structured participation paradigm (Ramírez & Brodhead 2013). The U-FE framework is based on one of the four evaluation standards in evaluation design – utility – which entails ensuring that the evaluation is relevant and serves the information needs of its users (Patton 2010). Utilisation-focused evaluation begins with the premise that (Patton 2010):

[E]valuations should be judged by their utility and actual use. Therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that will be done, from beginning to end, will affect use. (n.p.).

This study utilised principles of U-FE, based on the experiences of two M&E specialists who participated in the YOLO design and implementation evaluation process as members of the evaluation steering committee and technical committee, as well as a review of the evaluation reports. The study aims to contribute to what the public sector may learn from formative programme evaluation to improve programming by discussing the implications of the evaluation process and findings for M&E specialists within the public sector, as well as the impact of M&E shortcomings on quality programming. It also reflects on the extended applicability of the evaluation findings to broader public sector programming.

The U-FE approach taken by the study authors guided the study’s design. The primary intended users of the study findings are public sector programme evaluation policymakers, and the intended use is to contribute to decisions regarding the role of formative evaluation in public sector programme design and implementation. For this reason, the authors analysed the evaluation process from the perspective of M&E specialists, considering its utility and how the process was undertaken and received by the YOLO programme stakeholders, and reviewed the evaluation report to determine its success in addressing the key questions. The evaluation findings were reviewed, and those relevant to the reflection were incorporated in this analysis.

The authors (a member of the DSD Evaluation Directorate and the Senior Monitoring, Evaluation, Research and Learning [MERL] Advisor of the GCBS programme) have collectively worked for almost 30 years in the M&E space with the public service, research institutions, consulting firms, several non-profit organisations (NPOs) and development aid agencies, within South Africa and Southern Africa. Both authors were directly involved in the entire evaluation process, together with 26 key stakeholders representing DSD, the United States Agency for International Development (USAID), Pact South Africa, SANAC, Mott MacDonald and LiveMoya, from the evaluation design, commissioning and management of the evaluation to the reflection on all deliverables, and have an in-depth knowledge of the entire evaluation process as well as its content.

The ‘You Only Live Once’ design and implementation evaluation case study

The key evaluation question was, ‘to what extent is the design and implementation of the YOLO programme appropriate in achieving its immediate intended outcomes?’ – taking cognisance of the fact that two implementation modalities (SANAC and GCBS) were used. The sub-evaluation questions focused on the programme’s effectiveness, efficiency and sustainability, relevance and lessons learnt (LiveMoya 2018). To respond to these questions, a mixed-methods approach was taken, incorporating both quantitative and qualitative data analysis. This approach is recognised as a highly valuable methodological approach in evaluations of this nature, providing a practical method of understanding complex programmes with different causal pathways and multiple outcomes through triangulation (Greene, Caracelli & Graham 1989; Tashakkori & Creswell 2007). The evaluators collected primary data through focus group discussions (FGDs), surveys, interviews with key respondents and observations of programme implementation. The evaluation also used secondary data in the form of routine programme M&E data, programme documentation and relevant literature. Data were triangulated as shown in Figure 3.

FIGURE 3: Triangulation of data sources.

The evaluation sample comprised three groups of respondents: the beneficiaries of the YOLO programme, parents/guardians/caregivers of the beneficiaries and stakeholders involved in the programme roll-out (i.e. NPOs, funders and social workers). For the first group (beneficiaries), 8 FGDs were undertaken based on a quota sampling methodology, 1288 surveys were conducted based on a stratified proportional sample with random probability sampling, and 6 site implementation observations were performed based on purposive sampling. Using a non-probability purposive sampling method, 45 key informant interviews were conducted with key programme stakeholders; 35 surveys (based on purposive sampling) were also conducted with the parents/guardians/caregivers of the beneficiaries. Further detail on sampling within each group of respondents is presented in the evaluation report.
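
To illustrate the stratified proportional sampling described above for the beneficiary survey, the sketch below draws a simple random sample within each stratum in proportion to its share of the sampling frame. It is an illustrative sketch only, assuming a pandas-based workflow; the column names ('province', 'beneficiary_id') and the frame contents are hypothetical and not drawn from the evaluation.

```python
# Minimal sketch: proportional stratified sampling from a hypothetical frame.
import pandas as pd

def proportional_stratified_sample(frame: pd.DataFrame,
                                   stratum_col: str,
                                   total_n: int,
                                   seed: int = 2018) -> pd.DataFrame:
    """Draw a simple random sample within each stratum, with stratum sizes
    proportional to each stratum's share of the sampling frame."""
    shares = frame[stratum_col].value_counts(normalize=True)
    samples = []
    for stratum, share in shares.items():
        n_stratum = max(1, round(share * total_n))          # at least one per stratum
        pool = frame[frame[stratum_col] == stratum]
        samples.append(pool.sample(n=min(n_stratum, len(pool)), random_state=seed))
    return pd.concat(samples, ignore_index=True)

# Hypothetical sampling frame of enrolled beneficiaries (names and values are illustrative)
frame = pd.DataFrame({
    'beneficiary_id': range(1, 10001),
    'province': ['Gauteng', 'Limpopo', 'KwaZulu-Natal', 'Eastern Cape'] * 2500,
})
survey_sample = proportional_stratified_sample(frame, 'province', total_n=1288)
print(survey_sample['province'].value_counts())
```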

Data quality control measures included: training fieldwork supervisors and data collectors on obtaining consent from an adult in the case of younger respondents (aged 15–17 years), on the data collection tools and on substitution practices; transcribing consented audio recordings of interviews and FGDs with redactions; piloting all tools and instruments; and performing 20% back-checks on interviews already conducted to verify the data collected. Quantitative data analysis involved tabulations and cross-tabulations of survey results as well as routine M&E data, including significance testing and regression analysis. Qualitative data analysis included coding, clustering and thematic harvesting for meaning and influence. The evaluation report itself was further triangulated through a stakeholder workshop (recommended by the DPME) in which a broad group of stakeholders reviewed the findings, analysis and recommendations, enhancing its validity and utilisation. Ethical approval was obtained from the University of the Witwatersrand’s Non-Medical Ethics Committee to ensure that the evaluation methodology addressed several ethical principles.
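
As a minimal sketch of the kind of cross-tabulation, significance testing and regression analysis mentioned above, the example below cross-tabulates a binary outcome against implementation modality, runs a chi-square test and fits a logistic regression. All variable names ('modality', 'completed_all_sessions', 'age', 'sex') and the simulated data are hypothetical assumptions, not drawn from the YOLO dataset or the evaluators' actual analysis plan.

```python
# Illustrative analysis sketch on simulated survey data.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    'modality': rng.choice(['SANAC', 'GCBS'], size=n),
    'sex': rng.choice(['female', 'male'], size=n),
    'age': rng.integers(15, 25, size=n),
    'completed_all_sessions': rng.integers(0, 2, size=n),   # hypothetical binary outcome
})

# Cross-tabulation of completion by implementation modality, with a chi-square test
xtab = pd.crosstab(df['modality'], df['completed_all_sessions'])
chi2, p_value, dof, expected = chi2_contingency(xtab)
print(xtab)
print(f"chi-square p-value: {p_value:.3f}")

# Logistic regression: does modality predict completion, controlling for age and sex?
model = smf.logit('completed_all_sessions ~ C(modality) + age + C(sex)', data=df).fit(disp=False)
print(model.summary())
```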

Ethical considerations

This article followed all ethical standards for research without direct contact with human or animal subjects.

Results

Formative evaluation utilisation-focused findings
Monitoring and evaluation framework

Need for a comprehensive monitoring and evaluation framework: The lack of an M&E system prevents the application of regular M&E throughout the lifecycle of the programme, which would enable corrective action to be taken in a timely manner, optimising resources and impact. This study further supports this argument, as it found that the programme lacked the clear, comprehensive and complete M&E framework necessary for tracking and measuring programme results, ensuring accountability and producing lessons for programme improvement. It was further argued that the programme lacked (LiveMoya 2018):

‘[A] final set of documents and a proper M&E framework that clearly articulates the targets with a succinct description, the frequency of reporting, the tools to be used and the analysis to be conducted’. (p. 59)

This further affects the appropriateness and quality of the data collected, the quality of assessments of programme outcomes such as participant behaviour and attitude change, and the assessment of overall programme impacts. It is coupled with a heavily compliance-based culture of M&E that focuses overly on disconnected, high-level indicators developed for policy rather than for implementation improvement purposes. Therefore, without a clear, comprehensive and complete M&E framework and system, there is an insufficient evidence base through which to assess programme effects and adapt as necessary. This makes it difficult to make the case for programme funding to scale the programme optimally nationwide and hinders the National Treasury’s ability to make evidence-based budget allocations. It is also important that the M&E framework streamlines administrative requirements, such as data capturing and the completion of paperwork (LiveMoya 2018). This highlights the necessity of involving M&E specialists in the entire programme process from design to completion, which, although a well-known requirement, is often lacking because departmental directorates operate in silos.
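
As an illustration of the framework elements quoted above (targets with a succinct description, reporting frequency, tools to be used and planned analysis), the sketch below captures a single indicator definition in a machine-readable form. It is an illustrative sketch only; the field names, the example indicator and its target value are hypothetical and do not come from the YOLO M&E framework.

```python
# Minimal, hypothetical indicator definition for an M&E framework.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    description: str
    target: int
    reporting_frequency: str        # e.g. 'monthly', 'quarterly'
    collection_tool: str            # e.g. 'enrolment form', 'post-session form'
    planned_analysis: str           # e.g. 'pre/post comparison by modality'
    disaggregation: list = field(default_factory=list)

# Hypothetical example; values are placeholders, not actual YOLO targets
yolo_reach = Indicator(
    name='beneficiaries_completing_all_sessions',
    description='Number of enrolled 15-24 year olds completing all 12 YOLO sessions',
    target=100_000,
    reporting_frequency='quarterly',
    collection_tool='attendance register',
    planned_analysis='completion rate by province and modality',
    disaggregation=['age band', 'sex', 'province', 'modality'],
)
print(yolo_reach)
```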

Model of implementation

A common challenge in intervention implementation is that competing priorities (usually political and economic) from various stakeholders tend to result in disjointed modalities of a single intervention. Likewise, the YOLO evaluation found that ‘the two implementers [of the YOLO programme], SANAC and GCBS, were driven by different implementation models and different targets’. Furthermore, it has been stated that ‘the funding for YOLO at Provincial DSD level has come from provincial budgets, and the implementation model has been at the discretion of the province’ (LiveMoya 2018:58). This further perpetuates the issue of different implementation modalities within one implementing agency (in this case, SANAC).

Monitoring and evaluation specialists need to champion the utilisation of existing evidence from similar interventions in the design phase of the intervention, as well as to build implementation modality assessments or evaluations into the initial phase of programme implementation, which will enable evidence-based adaptation of programming down the line. In support of this, the study found that the model of implementation should be assessed over time and in line with the recommended M&E framework to determine its viability for future implementation. This will limit the deleterious impact of competing priorities (i.e. expansion of implementation for reach purposes vs. the conduct of regular formative assessments to draw key lessons) on the quality of programming. Even though this was done during the initial design phase of the YOLO programme, evidence from the evaluation, two years after initial implementation, indicated that regular formative assessments should have been built into the implementation plan rather than conducted as a once-off exercise, as contexts change and emergent evaluation findings and recommendations have design adaptation implications.

Understanding the need for monitoring and evaluation in programme implementation: The lack of M&E understanding by programme implementers usually tasked with collecting M&E information during the various stages of programme implementation can cause significant deterioration of even the best-designed M&E systems. As a result, data are often misrepresentative of the intended requirements or, as in this instance, incomplete. As stated by a key stakeholder (Mott MacDonald pers. comm., date unknown), ‘[n]ot training facilitators on implementing M&E requirements had been “a gap”’.

This results in facilitators being of the view that the paperwork is cumbersome and unnecessary, hence the facilitators (LiveMoya 2018:84; [facilitator and social worker, gender undisclosed, date unknown]) ‘made mistakes with the paperwork and left some blank spaces’.

The use of both formative (and process) evaluations can thus assist government programming in identifying such challenges earlier on and addressing them promptly. In turn, the improved M&E capacity will enable programme managers to report on the programme impact (accountability), which will assist the government, through the National Treasury, and programme funders such as USAID to know that their investment can yield returns.

Data management

Focusing on essential, rather than interesting, data collection: It is important that, when implementation data collection tools are developed, adequate M&E capacity development and efficient means to capture the relevant data required for measuring programme success are provided. This study further supports this argument, as it found that the tools were too cumbersome (in terms of the length of the forms and their ease of use) and inconsistently applied by the various implementing partners and their respective implementing agencies (LiveMoya 2018). Although the pre- and post-evaluation forms were designed on the basis of benchmarked youth Social and Behaviour Change Communication (SBCC) programme tools drawn from the President’s Emergency Plan for AIDS Relief (PEPFAR) Determined, Resilient, Empowered, AIDS-free, Mentored and Safe (DREAMS) programmes, such as Vhutshilo, the tools were also found to be of poor quality. Formative evaluation could have caught this earlier.

Similarly, there is a temptation to take advantage of the opportunity to include additional data elements that may be interesting but are not essential, which often results in unnecessarily long instruments. Data collection tools should therefore be made as easy as possible to use and not be too long, with careful consideration of each data point required, along with sufficient capacity development of all those involved in the instrument design and its utilisation. Paying careful attention to broad stakeholder feedback is equally important, as the monitoring data collection process is often built into the intervention implementation and left to programme facilitators and beneficiaries to complete (such as enrolment forms and pre- and post-evaluation forms). For example, in the YOLO evaluation, one of the stakeholders stated that ‘[a] whole lot of useless data had been collected’ (MOTT stakeholder 1, gender undisclosed, date unknown).

Memoranda of understanding to improve tracking of unique identity (ID) numbers: Interdepartmental collaboration, in the form of a memorandum of understanding between the programme owner (in this case, DSD) and the Department of Basic Education (DBE), can be formalised to enhance access to accurate ID numbers and enable proper tracking and tracing of beneficiaries receiving services from multiple government programmes. In addition, this can enable the assessment of inter-intervention influence on the outcomes of beneficiaries (i.e. YOLO outcomes in relation to education outcomes). Therefore, for programmes of this nature that target children of schoolgoing age, collaboration with key sector lead departments is necessary and could assist tremendously with intersectoral analysis. This study further supports this argument, as it highlighted the issue of ID numbers as one of the key shortcomings, because unique identifiers are critical for assessing completion rates and changes in the behaviour of participants through the pre- and post-evaluation forms. More broadly, they also enable the government to ascertain the impact it has made on the lives of citizens accessing all government programmes and/or services aimed at improving their livelihoods.

Collaborations with key sector departments could also improve the overall data quality of beneficiary ID numbers, which have been noted to be, in the case of YOLO, ‘either missing or duplicated, which would have impact on payment made to NPOs’ (LiveMoya 2018:61).

The use of formative evaluation could have suggested, much earlier, ways to strengthen the quality of the data collected, such as validating beneficiary ID numbers through such a collaboration with the DBE. This would also present programme owners and implementers with an opportunity to validate the data they collect on beneficiary ID numbers, as all DBE ID numbers are validated with the Department of Home Affairs (DHA). Such control measures will also ensure that implementers are held accountable for any erroneous data collected.
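
Pending such a formal collaboration, basic structural checks can flag missing, malformed or duplicated ID numbers at the point of capture. The sketch below is illustrative only: it assumes the standard 13-digit South African ID format with a Luhn check digit, uses hypothetical placeholder values, and is no substitute for verification against DBE or DHA records.

```python
# Minimal sketch: flag structurally invalid or duplicated beneficiary ID numbers.
import pandas as pd

def looks_like_valid_sa_id(id_number) -> bool:
    """Cheap structural check: 13 digits and a passing Luhn checksum
    (assumed format; authoritative validation still requires DHA records)."""
    if not (isinstance(id_number, str) and id_number.isdigit() and len(id_number) == 13):
        return False
    total = 0
    for i, ch in enumerate(reversed(id_number)):
        d = int(ch)
        if i % 2 == 1:               # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Hypothetical captured records; ID values are placeholders for illustration
records = pd.DataFrame({
    'beneficiary_name': ['A', 'B', 'C', 'D'],
    'id_number': ['8001015009087', '8001015009087', None, '123'],
})
records['id_valid'] = records['id_number'].apply(looks_like_valid_sa_id)
records['id_duplicated'] = records['id_number'].duplicated(keep=False) & records['id_number'].notna()
print(records)
```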

Monitoring and evaluation capacity development of implementers

Overall, there is a need to change the culture within organisations concerning M&E to ensure that it is embedded in programme implementation. This is supported by research stating that the lack of M&E capacity is one of the main challenges faced by government as a whole, resulting in ineffective policy responses in the country (Paine & Sadan 2015). Such embedding can ensure that M&E is seen as an effective programme management process rather than as an ‘isolated process’ (Goldman et al. 2012). This ensures that implementing organisations do not view M&E as ‘tedious, unclear and hard to report on’ (LiveMoya 2018:61).

It also ensures that they have adequate skills and knowledge to undertake the M&E activities required. In support of this, the YOLO evaluation report states that M&E capacitation assists with enhancing the monitoring and reporting process of the programme and enables enumerators to be thorough and efficient. Formative evaluation could have provided such insights earlier, in the design phase prior to programme implementation.

For the capacitation of implementing organisations to be effective, adequately skilled M&E specialists are needed from the programme owners to support and monitor the implementing partners. Capacity development in this context should be understood as the improvement of the human resources and operational capabilities of systems, organisations and individuals so that they can perform better (Görgens & Kusek 2009). Because development is about people and their societies interfacing and developing within their environment, capacity development needs to be specific (Lusthaus, Adrien & Perstinger 1999). As such, there are a number of approaches to capacity development, including the organisational approach, the institutional approach, the systems approach and the participatory process approach (Lusthaus et al. 1999; Watson 2006). The purpose of capacity building and development in M&E is therefore to improve the performance of M&E.

Monitoring data collectors, who are often programme facilitators, also need to be trained to complete the data collection tools correctly, thoroughly and promptly. Data collectors need to be trained on the overarching M&E system that the specific intervention feeds into, as there are often data quality implications when the full picture is not clear to all relevant contributors, for example, how to use the Community-Based Intervention Monitoring System, a data collection tool used by DSD on several projects and programmes. It is therefore crucial that the M&E capacity of organisations is funnelled down throughout the implementing organisation to ensure that it is not limited to management level, as found in the case study. Whilst a generic training programme can be developed, it is important that the training adequately addresses the specific needs of the implementing partners on an ongoing basis. This is important as the data captured affect the ability to measure programme reach and impact, and are often used to determine payment to the implementing agencies. Focused training will therefore assist the implementing organisations to explain why programme participant ID numbers are required, for what purposes participants’ identities will be used and how they will be protected – a data collection issue noted in the evaluation. This will assist in addressing the issue of data collection tools said to be ‘faulty, ambiguous, poorly completed and incomplete’ (LiveMoya 2018:60).

Discussion and recommendations

Implications for public sector programme design and funding

The NEPF advocates for a design evaluation before programme implementation as well as building in an implementation design element (DPME 2011). Although the design element of this programme was evaluated before programme implementation, the results of the evaluation described above indicate that the programme design should have been revised again even though the programme had only been implemented for two years. This illustrates the need to periodically evaluate the design of a new programme, as highlighted in the authors’ reflection. However, within the South African context, it is often difficult to allocate sufficient resources for the development and utilisation of an ideal M&E framework, including undertaking a diagnostic evaluation (situational analysis and needs assessment), a baseline evaluation and a design evaluation (including the costing) before programme implementation, followed by recurring outcome and/or impact evaluations. As a result, the design evaluation is often sacrificed for implementation and impact evaluations because of pressure to account for the investments already made. However, evaluating implementation and impact without periodically reflecting on the design of a programme may short-change the outcomes and recommendations of such evaluations, as they assume the design of the programme itself is sound.

Moreover, a broader evidence base (i.e. through the use of formative and process evaluation findings) can position departments to better advocate for funding. This ties into the current debate around developing M&E capacity within the public service to be able to design and conduct evaluations, rather than outsourcing to external, supposedly ‘objective’ service providers, as this is a costly exercise. The pressure to meet service delivery expectations often results in resources being used to render services (and the expansion thereof) as opposed to gathering evidence required to enhance the delivery of services and their outcomes. Therefore, more resources need to be spent on formative evaluation to improve programme outcomes, rather than on just outcome or impact evaluations.

The impact of monitoring and evaluation shortcomings on quality programming

The shortcomings of the YOLO programme, as outlined in the reflection section (Utilisation-focused reflection using the ‘You Only Live Once’ design and implementation evaluation as a case study), have resulted in the programme having no counterfactual or baseline with which to rigorously measure programme impact. Positively, the conduct of the YOLO formative evaluation enabled the generation of evidence and data (which could serve as a baseline in future evaluations) and implementation improvements. The absence of fully fledged baseline, midline and end-line research components, which resource-constrained programmes often lack, can be attributed to several factors, including limited long-term programme (government departmental level) and project (government directorate level) management, planning and design.

In the absence of baseline data, this reflection highlights that it is still worth investing in alternative types of evaluations to gather information about the programme. Without this evidence generation exercise, programme managers would not be able to identify the need for corrective actions and take them in a timely manner (such as the need to further capacitate programme implementers on how to effectively collect and utilise M&E data). An implication of this is that, because of poor M&E understanding amongst the programme implementers collecting M&E evidence, poor-quality evidence was utilised.

In this instance, the YOLO programme attracted further external donor funding (USAID) to expand its roll-out. The continued roll-out and expansion was significantly informed by the evidence generated by the YOLO evaluation findings. Over and above this, a similar programme for a different age group (Chomy) was commissioned, incorporating the evidence from the YOLO evaluation. In addition, the DSD’s broader SBC compendium (a collection of inter-related programmes), of which YOLO and Chomy are a part, was enhanced. This clearly illustrates the importance of this type of evaluation as well as how evidence can be better used by public sector institutions in their decision-making processes.

Relevance to other sectors

Overall, the findings of the evaluation and reflection can be relevant to other public sector topics beyond the HIV and social service space, highlighting the need to shape M&E for better evidence for decision-making. As part of the post-apartheid legacy, the SAG has been under constant pressure to expand access to services for all citizens. A notable example is illustrated by the expansion of the Grade R programme before the impact evaluation in 2013. Evaluation evidence suggested that the expansion, whilst addressing access, overlooked the implications of quality (Taylor 2015). Formative evaluation would have helped identify the urgent need to improve the quality along with access, as opposed to focusing only on the latter. Lessons from the implementation of SBC programmes, such as YOLO, can also be utilised in the implementation of other prevention programmes (i.e. through embedding the culture of M&E in programme implementation and evaluating various models of implementation prior to programme roll-out) across sectors for programmes such as the Traditional Leaders Programme, Psycho-social Support Programme, Family Matters Programme and Sex Workers Programme (DSD 2017). Formative evaluations can therefore assist in ensuring that priority sector programmes are designed and implemented effectively.

Conclusion

This article used a utilisation-focused methodology to reflect on the lessons of the YOLO programme evaluation process and their implications for the broader M&E sector within the public service. The key findings of the reflection highlight a number of lessons. Firstly, they underscore the importance of assessing the costs and benefits of redistributing programme resourcing across M&E and programme implementation to ensure that a comprehensive M&E system is in place that responds adequately and in a timely manner to the M&E demands of the programme, ensuring utility and enhancing the programme’s potential. Secondly, the findings emphasise the importance of periodically conducting design and implementation evaluations and making use of the evidence generated to enhance the outcomes of the programme roll-out. This enables the programme to collect data that can serve as baseline and midline data for measuring programme impact in the future. Thirdly, the findings highlight the importance of conducting, sharing and using evaluation findings from numerous interventions for programmatic design, implementation and improvement across the public service, to strengthen M&E systems and processes and to inform the decision-making processes of policymakers and implementers so as to deliver better outcomes for society.

Acknowledgements

The authors thank the United States Agency for International Development (USAID) for funding the design and implementation evaluation of YOLO. They are also grateful to the Department of Social Development (DSD) for granting access to the evaluation report. The authors thank LiveMoya for undertaking the design and implementation evaluation of YOLO.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this research article.

Authors’ contributions

H.K.K. is Assistant Director at the National Department of Social Development and was responsible for M&E and overseeing the YOLO Design and Implementation Evaluation from DSD. C.J. is Senior Monitoring, Evaluation, Research and Learning Advisor of the Government Capacity Building and Support (GCBS) programme. Both authors contributed equally to this work.

Funding information

This research is premised on the design and implementation evaluation of YOLO, which was funded by the United States Agency for International Development (USAID).

Data availability

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

References

Abrahams, M.A., 2015, ‘A review of the growth of monitoring and evaluation in South Africa: Monitoring and evaluation as a profession, an industry and a governance tool’, African Evaluation Journal 3(1), Art. #142, 8 pages. https://doi.org/10.4102/aej.v3i1.142

Amisi, M.M., Marais, L. & Cloete, J.S., 2018, ‘The appropriateness of a realist review for evaluating the South African Housing Subsidy Programme’, South African Journal of Science 114(11/12), Art. #4472, 9 pages. https://doi.org/10.17159/sajs.2018/4472

Bennett, R.E., 2011, ‘Formative assessment: A critical review’, Assessment in Education: Principles, Policy & Practice 18(1), 5–25. https://doi.org/10.1080/0969594X.2010.513678

Burns, M.K., 2008, What is formative evaluation?, viewed 11 June 2020, from https://www.researchgate.net/publication/239588953_What_is_Formative_Evaluation.

Department of Planning, Monitoring and Evaluation (DPME), 2011, National evaluation policy framework, viewed 13 January 2020, from https://www.dpme.gov.za/publications/Policy%20Framework/National%20Evaluation%20Policy%20Framework.pdf.

Department of Planning, Monitoring and Evaluation (DPME), 2012, Functions of an M&E component in National Government Departments, viewed 15 June 2020, from https://www.dpme.gov.za/publications/Guides%20Manuals%20and%20Templates/Functions%20of%20an%20M%20and%20E%20component%20in%20National%20Government%20Departments.pdf.

Department of Planning, Monitoring and Evaluation (DPME), 2013, Draft DPME guideline 2.2.3 guideline for the planning of new implementation programmes, viewed 07 January 2020, from https://evaluations.dpme.gov.za/images/gallery/Guideline%202.2.3%20Implementation%20%20Programmes%2013%2007%2030.pdf.

Department of Planning, Monitoring and Evaluation (DPME), 2014a, Evaluation guideline 2.2.11 design evaluation, viewed 07 January 2020, from https://evaluations.dpme.gov.za/images/gallery/Guideline%202.2.11%20Design%20Evaluation%2014%2003%2020.pdf.

Department of Planning, Monitoring and Evaluation (DPME), 2014b, Evaluation guideline 2.2.12 implementation evaluation, viewed 07 January 2020, from https://evaluations.dpme.gov.za/images/gallery/Guideline%202.2.12%20Implementation%20Evaluation%2014%2003%2020.pdf.

Department of Planning, Monitoring and Evaluation (DPME), 2014c, Evaluation guideline 2.2.10 diagnostic evaluation, viewed 07 January 2020, from https://evaluations.dpme.gov.za/images/gallery/Guideline%202.2.10%20Diagnostic%20Evaluation%2014%2003%2020.pdf.

Department of Social Development (DSD), 2017, Implementing HIV prevention amongst young people in a geographic focused approach in South Africa, viewed 28 January 2020, from http://teampata.org/wp-content/uploads/2017/06/a-model-in-implementing-hiv-Prevention-amongst-youth-in-south-africa-Strategy-proposal-27-July-2015.pdf.

Goldman, I., Engela, R., Akhalwaya, I., Gasa, N., Leon, B., Mohamed, H. et al., 2012, ‘Establishing a National M&E System in South Africa,’ PREM Notes, No. 21, World Bank, Washington, DC, viewed 07 February 2020, from https://openknowledge.worldbank.org/handle/10986/17084.

Görgens, M. & Kusek, J.Z., 2009, Making monitoring and evaluation systems work: A capacity development toolkit, World Bank, Washington, DC.

Greene, J.C., Caracelli, V.J. & Graham, W.F., 1989, ‘Toward a conceptual framework for mixed-method evaluation designs’, Educational Evaluation and Policy Analysis 11(3), 255–274. https://doi.org/10.3102/01623737011003255

LiveMoya, 2018, YOLO design and implementation evaluation, Unpublished, Pretoria.

Lusthaus, C., Adrien, M.-H. & Perstinger, M., 1999, Capacity development: Definitions, issues and implications for planning, monitoring and evaluation, Universalia Occasional Paper, vol. 35, pp. 1–21, Universalia, Montréal.

Nieveen, N. & Folmer, E., 2013, ‘Formative evaluation in educational design research’, in J.V. Akker, B. Bannan, A.E. Kelly, N. Nieveen & T. Plomp (eds.), Educational design research: An introduction, pp. 153–169, Netherlands Institute for Curriculum Development (SLO), Enschede.

Organisation for Economic Co-operation and Development (OECD), 2008, Assessment for learning – The case for formative assessment, viewed 07 January 2020, from https://www.oecd.org/site/educeri21st/40600533.pdf.

Paine, G. & Sadan, M., 2015, ‘Use of evidence in policy making in South Africa: An exploratory study of attitudes of senior government officials’, African Evaluation Journal 3(1), 1–10. https://doi.org/10.4102/aej.v3i1.145

Patton, M.Q., 2010, Utilization-focused evaluation, viewed 07 January 2020, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.967.8963&rep=rep1&type=pdf.

Patton, M.Q., 2012, Planning and evaluating for social change: An evening at SFU with Michael Quinn Patton, viewed 06 February 2020, from https://www.betterevaluation.org/en/resources/guide/Planning_Evaluating_Social_Change.

Porter, S. & Goldman, I., 2013, ‘A growing demand for monitoring and evaluation in Africa’, African Evaluation Journal 1(1), Art. #25, 9 pages. https://doi.org/10.4102/aej.v1i1.25

Ramírez, R. & Brodhead, D., 2013, Utilization focused evaluation: A primer for evaluators, Southbound Sdn. Bhd, George Town.

Stetler, C.B., Legro, M.W., Wallace, C.M., Bowman, C., Guihan, M., Hagedorn, H. et al., 2006, ‘The role of formative evaluation in implementation research and the QUERI experience’, Journal of General Internal Medicine 21(2), S1–S8. https://doi.org/10.1111/j.1525-1497.2006.00355.x

Tashakkori, A. & Creswell, J.W., 2007, ‘Editorial: The new era of mixed methods’, Journal of Mixed Methods Research 1(1), 3–7. https://doi.org/10.1177/2345678906293042

Taylor, S., 2015, Policy brief series: Evidence based policy – Making and implementation, viewed 04 February 2020, from https://evaluations.dpme.gov.za/images/gallery/Grade%20R_Policy%20Brief.pdf.

United Nations Children’s Fund (UNICEF), 2016, Biennial report South Africa 2014–2015, viewed 07 January 2020, from https://www.unicef.org/southafrica/SAF_resources_biennialreport2014_2015.pdf.

United Nations Office on Drugs and Crime (UNODC), 2020, Evaluation in the project/programme cycle, viewed 13 January 2020, from https://www.unodc.org/unodc/en/evaluation/evaluation-and-the-project-programme-cycle.html.

United Nations Programme on HIV/AIDS (UNAIDS), 2018, AIDSinfo, viewed 28 January 2020, from https://aidsinfo.unaids.org/.

Watson, D., 2006, Monitoring and evaluation of capacity and capacity development, Discussion paper No 58B, ECDPM, Maastricht.


