Article Information

Author:
Mark A. Abrahams1

Affiliation:
1Division for Lifelong Learning, University of the Western Cape, South Africa

Correspondence to:
Mark Abrahams

Email:
marka@iafrica.com

Postal address:
Private Bag X17, Bellville 7535, South Africa

Dates:
Received: 17 Apr. 2015
Accepted: 07 Aug. 2015
Published: 23 Sept. 2015

How to cite this article:
Abrahams, M.A., 2015, ‘A review of the growth of monitoring and evaluation in South Africa: Monitoring and evaluation as a profession, an industry and a governance tool’, African Evaluation Journal 3(1), Art. #142, 8 pages. http://dx.doi.org/10.4102/aej.v3i1.142

Copyright Notice:
© 2014. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A review of the growth of monitoring and evaluation in South Africa: Monitoring and evaluation as a profession, an industry and a governance tool
Abstract

South Africa is one of several African countries with an official ministry responsible for monitoring and evaluation (M&E); others include Ghana, Kenya, Benin and Uganda. The development of M&E in South Africa has been stymied in part by its interdisciplinary nature, as it has had to find roots within a historically very discipline-based higher education system. Over the last ten years, however, there has been a huge increase in the number, scope and quality of evaluations conducted in this country. Government agencies and non-government organisations (NGOs), often using international donor funds for their own projects, have been engaged in outsourcing evaluation studies, and currently all government departments have established their own M&E units. There are statutory bodies such as the Public Service Commission (PSC) and the Department for Planning, Monitoring and Evaluation (DPME) with the responsibility to monitor and evaluate the government’s service delivery and performance. The South African Monitoring and Evaluation Association (SAMEA), established in 2005, draws together M&E practitioners, trainers in M&E, development agencies and government officials at its biennial conferences and sustains a vibrant community via its listserv, SAMEATalk. This article reviews the growth of monitoring and evaluation in South Africa and reflects on the current or prominent nature of M&E in this country. It considers M&E’s development into a profession, its growth as an industry or business and its increasing adoption as a governance tool for development in South Africa. The article concludes with some critical reflections on the growth of M&E in South Africa.

Introduction

‘Programme evaluation’, ‘evaluation research’ or, in its most recent usage, ‘monitoring and evaluation (M&E)1’ as a distinct discipline and field of study was regarded 15 years ago as relatively new in South Africa (Louw 1998; Mouton 2010; Potter 1999; Potter & Kruger 2001). Its development in South Africa was limited in part by the interdisciplinary nature of monitoring and evaluation, which had to find roots within a historically very discipline-based higher education system. In this period evaluation practice in South Africa was conducted by academics and professionals trained in, amongst others, psychology, sociology, economics, education, health, philosophy or political science. Within government the emphasis was more on monitoring. Later, informed by the New Public Management (NPM) movement that highlighted accountability of the public sector, there was a shift to include evaluation as a key performance management tool. NPM was seen as a solution to address poor performance and to gain trust in the public sector (Mouton 2010).

Since 2000 all government departments in South Africa have established their own M&E units, but mostly focusing on monitoring. Over the last ten years there has been an increase in the number, scope and quality of evaluations conducted in this country. Government agencies and non-government organisations (NGOs), often using international donor funds for their own projects, have been engaged in outsourcing evaluation studies.

The Public Service Commission (PSC) and the Department for Performance (later Planning) Monitoring and Evaluation (DPME) are statutory institutions charged with the task of monitoring and evaluating government delivery and performance. DPME is based in the Presidency and headed by a Minister. Since 2011 DPME has established a National Evaluation System, including standards, competences, training and the conducting of evaluations at national and provincial levels, as well as part-funding of evaluations with departments.

Dedicated courses in programme evaluation are emerging at various higher education institutions, notably the Universities of Cape Town and Stellenbosch. Several new locally produced social science research textbooks, used by undergraduate and post-graduate students, have chapters on programme evaluation. These developments have contributed to the growth of a young but vibrant culture of evaluation research in South Africa. The launch of the South African Monitoring and Evaluation Association (SAMEA) in 2005 supported this momentum, currently with almost 500 members from various sectors of society.

It is in this context that this review of the growth of monitoring and evaluation in South Africa reflects on the current or prominent nature of M&E in this country. What follows is a brief historical overview of the major developments in the field of M&E in South Africa over the last 10 to 15 years. An attempt is then made to track and cluster some of the developments into thematic sectors such as the professionalisation of M&E, its development as an industry and the government’s utilisation of M&E as a governance tool.

The professionalisation of evaluation is a longstanding debate internationally and one that has resurfaced at international conferences (EES 2014), in evaluation journals (Podems & King 2014), various websites and locally during joint initiatives of SAMEA and DPME. The debate is informed by expected or required competencies of evaluators, standards and guidelines that should inform evaluation practices, its development as a discipline (Scriven 2001) and the role of voluntary organisations for professional evaluation (VOPEs) (Rugh & Segone 2013) in the pursuit of professional practices.

There has been a huge expansion in the numbers and types of people and organisations involved in evaluation (Smith 2001). International and local financial auditing firms increasingly expand their services to include evaluation. Patton (2001) points to the corporations that focus on the generation of ‘intellectual capital’ through lessons learned and best practices. The current range of evaluation practitioners includes the individual independent consultant, consultancy firms, academics employed at universities, international development agencies and consultants within governments, each with different roles and functions. Then there are those who commission evaluations. This has become a big business, an industry.

The NPM movement has informed results-based and evidence-based orientations to public management and specifically policy development. It has elevated M&E to a higher-order management function whereby policies are monitored and evaluated to assess their appropriateness and effectiveness. Evidence-based policy assessment relies on the existence of an effective M&E capacity in public organisations (Cloete 2009). M&E as a governance tool has become a central feature of good governance.

Overview of prominent developments in the field of M&E, 1987–2014

The progress of the field of M&E in South Africa has been a comparatively recent phenomenon. Potter (1999) states that ‘evaluation research was relatively unknown until the early 1980s, and it is only in the 1990s that local scientists have demonstrated increased interest in the area’ (p. 225). Interestingly, however, De Vos (1998) refers to an official document, Circular No. 6 of 1987, issued by the then Department of Health Services and Welfare, Administration: House of Assembly. According to De Vos, this initiative formally introduced the concepts of ‘programme development and evaluation’ in South Africa, but these were initially limited to the white population of the country. This suggests part of the reason for the slow development of programme evaluation in South Africa, as well as the particular political and selective use of social science research during the pre-1994 Apartheid era.

Programme evaluation first emerged as a practice in the non-government (NGO) sector, where project activities and outcomes had to be evaluated as a requirement for further donor funding (Mouton 2010; Potter & Kruger 2001; Swilling & Russell 2002). The contribution of NGO development work in South Africa has been significant. With so many ‘unmet’ needs in the country, NGOs have been able to offer products and services where the government was unable, and before 1994 unwilling, to deliver them. Potter (1999) reported that, since the 1970s, an estimated R6 billion of overseas and local funding had been used by various NGOs to engage in development projects in various sectors of society. The scope and reach of NGO work in South Africa is broad and NGOs work in every sphere and sector of society, including, amongst others, health, welfare, education, entrepreneurship, community development and skills training. NGOs generally operate on a small scale within a particular geographic area and with specific interest groups.

The NGO character has also evolved over the years. Many NGOs established before 1994 chose to be non-governmental in order to oppose the Apartheid government. Post-1994, however, many of them worked with or were funded by the government and had to apply for non-profit status. With their not-for-profit status, these organisations are currently also referred to as non-profit organisations (NPOs). The NPO moniker also reflects the ‘depoliticised’ character of the new relationship between the government and these organisations (Swilling & Russell 2002).

Before 1994 there was a more relaxed and flexible relationship between donors and recipient agencies in South Africa. NGOs were only required to provide financial audits and annual reports to qualify for further support. When evaluations did occur, they were conducted by external evaluators. This scenario began to change in the 1980s when funders like the Kellogg Foundation insisted on evaluations as well as the use of local evaluators. Funding agencies such as USAID, the Department for International Development (DFID) and the Netherlands also started implementing stringent accountability measures for their grants. Much has changed since 1994 and most projects in areas deemed high priority are subject to evaluation.

According to Lodge (1999), the South African government, accused of having been largely ineffective in reaching the poor prior to 1994, embarked on numerous interventions after 1994. One of government’s priority actions for redistribution was a land reform programme that settled in excess of 68 000 families on more than 300 000 hectares of farming land. It was also within the Department of Land Affairs (DLA) that the first M&E Directorate was established in 1995 (Naidoo 2012).

What had been difficult for the government to ascertain was the relative success of its policies and initiatives, not just in terms of numbers, but in terms of quality, as in the objective of ‘improving the quality of life’ of the people. Evaluation was very limited within government, except for the DLA, and confined to people who had attended conferences outside the country (Naidoo 2012). It was in this context that the PSC, restructured in 1997, designed its M&E systems and became a pioneer in the evaluation field.

A more concerted attempt at managing government performance emerged from the National Treasury and the office of the Auditor-General, which used the Public Finance Management Act of 1999 to regulate financial management in national and provincial governments so as to ensure that all revenue, expenditure, assets and liabilities of those governments were managed efficiently and effectively. There was an increasing emphasis on service delivery and the gathering of non-financial information, in pursuit of greater value for money spent. National Treasury’s Framework for Managing Programme Performance Information (FMPPI) sought to apply a results-based management approach through the structuring of departments’ budgets around high-level budget programmes, and a framework for indicators and reporting (Goldman et al. 2014).

Closely linked to South Africa’s and the ruling party’s desire to meet the needs of their citizens was the international interaction with other developed and developing countries around the Millennium Development Goals (MDGs). The MDGs were eight goals, to be achieved by 2015, that responded to some of the world’s main development challenges. They were drawn from the actions and targets contained in the Millennium Declaration adopted by 189 nations and signed by 147 heads of state and government during the UN Millennium Summit in September 2000. The MDGs required rigorous monitoring and evaluation systems, which helped to stimulate a growing interest in M&E in Southern Africa and indeed in Africa.

At this time, recognising the fragmented nature of monitoring and evaluation in government (Engela & Ajam 2010), the Presidency introduced a Government-wide Monitoring and Evaluation System (GWM&ES) in 2005, which was managed initially by an inter-departmental task team in the Department of Public Service and Administration (DPSA) and later by the Policy Coordination and Advisory Services (PCAS) Unit located within the Presidency.

The GWM&ES was envisaged as a ‘system of systems’ in which each department would have its own autonomous functional monitoring system, out of which the necessary information could be extracted. An important departure point was that existing M&E capacities and programmes in line function departments should as far as possible be retained, linked and synchronised within the framework of the GWM&ES (Engela & Ajam 2010).

From the mid-2000s evaluation began to be more widely used, and an audit of evaluations carried out by the Programme to Support Pro-Poor Policy Development in 2011 found 135 evaluation-type activities carried out since 2006.

Parallel to these developments, South Africa’s Human Sciences Research Council (HSRC) in 1993 invited the then president of the American Evaluation Association (AEA), David Fetterman, to give a series of talks and seminars. This was before the first democratic elections in the country in 1994. After these seminars, attempts were made to organise those who had attended the sessions, but the lack of trust amongst researchers from the (racially) different institutions scuppered this attempt.

In May 2002 Michael Quinn Patton, a prominent evaluation expert and author of several evaluation research texts, visited South Africa to provide training and stimulated a great deal of interest in evaluation. An electronic listserv, SAENeT, the South African Evaluation Network, was initiated after these training events. More than 300 people subscribed to the SAENet listserv, and the interaction between the African Evaluation Association (AfrEA) and the Public Service Commission (PSC) of South Africa led to the conference held in Cape Town in 2004, where hundreds of people interested in evaluation, from Africa and beyond, gathered. Local interest in the AfrEA conference was also stimulated by the SAENet pre-conference training programmes offered by Donna Mertens (US) and Patricia Rogers (Australia), amongst others. Using the AfrEA ‘Guiding principles for evaluation’ as its cornerstone, SAMEA was launched in November 2005 with Jennifer Bisgard as its first chairperson.

In April 2005 an electronic survey questionnaire was sent to 410 people on the SAENet database. The survey was aimed at establishing the nature of members’ involvement in M&E, their interest in joining a professional M&E network or association, and the function that such an organisation should fulfil. Most of the respondents declared themselves to be evaluation practitioners responsible for designing and implementing evaluations. The second largest group came from the government sector and used evaluation results to formulate policies and design projects. The respondents indicated that they worked in a range of development sectors including education, health, welfare, transport and more. They all indicated that there was a definite need for an association and that it would increase opportunities for capacity building.

Whilst this survey provided some picture of the involvement of those who participated, it was impossible to extrapolate from the results any meaningful sense of the scope and depth of M&E practices in the country. In August 2005 a follow-up mail survey, funded by UCT, was conducted in preparation for the SAMEA conference. The survey was sent to 350 people who were on the SAENet listserv or who indicated on their websites that they conducted evaluations. These M&E practitioners were asked to provide their opinions on the state of M&E in South Africa at that time. The vast majority of the respondents indicated that they were unsure about the state of M&E, but thought that there were not enough people capable of doing good quality evaluations, that the quality of the reports was weak, that there was not enough competition in the field, that not enough high quality training was available, that the government was not setting a good example, and that M&E was not a coherent profession.

Much has changed since these early surveys via SAENet, and we now explore the nature of the development of M&E.

M&E as a profession

Professionalisation involves the development of skills, identities, norms and values associated with becoming part of a professional group (Levine 2001). It relies on a substantive body of knowledge and a shared understanding of the roles of participants that allow them to engage in their professional field. There is also usually a concept of on-going professional development and a process to develop and train new entrants to the field. Smith (2001) offers a sobering comment from Worthen, a US-based evaluator, who stated that ‘evaluation will not acquire all the hallmarks of a full-fledged profession within the next two decades’ (p. 296). He suggests that evaluation should be considered a discipline or, as Scriven (Smith 2001) prefers, a transdiscipline notable for its service to other disciplines. Professionalism is also defined by the combination of all the qualities connected with trained and skilled people in a specific field, for instance a health professional. Highly skilled individuals have been drawn to the field of M&E because of the tremendous value it can add to growth and development within society. They come from disparate disciplines and are required to provide expertise for engaging with dynamic, diverse and complex social settings. The need for clearly stated codes and standards of practice, such as exist in the fields of health, finance and law, is part of the on-going debate about the professionalisation of M&E. Some of the processes and attempts at professionalising M&E in South Africa are outlined below.

The increasing and open interaction on the SAMEA listserv after its launch created an awareness of the growing number of M&E training courses on offer at various institutions. The growth in the availability of M&E training opportunities was steady and reflected the areas of demand. The University of Pretoria offered advanced and post-graduate qualifications in M&E for HIV and/or AIDS, and students from across Africa attended these courses. Similar courses, with varied emphases on health, policy, education, governance or methodology, were offered at universities in KwaZulu-Natal, Wits University, the University of Johannesburg, the University of Cape Town, Stellenbosch University, the University of Fort Hare and the University of the Western Cape. The University of Johannesburg also had an established School of Public Administration. Most of the training opportunities started out at a post-graduate level, generally located within a ‘sectoral’ (as in discipline) department, for example in health, public administration, sociology or education, amongst others. More recently, several undergraduate credit-bearing courses in M&E have come on offer at various institutions of higher learning, and credit-bearing courses registered on the National Qualifications Framework are offered by private providers as well as the newly established National School of Government. Since 2004 numerous government officials have also been exposed to international training offered by the World Bank in the form of the International Program for Development Evaluation Training (IPDET) in Ottawa, Canada. The workshop instructors have included prominent experts such as Michael Quinn Patton and Ray Rist (Sing 2004).

What about the development of a body of South African knowledge on M&E? Whilst there is still a reliance on international texts to inform and bolster the growing body of knowledge in the field of M&E in South Africa, locally produced textbooks such as ‘Community psychology: Theory, method and practice’, with its dedicated chapter ‘Social programme evaluation’ by Potter and Kruger (2001), and ‘The practice of social research’ by Babbie and Mouton (2001), have been utilised in university-based courses. The latest addition to this pool of resources is the text ‘Evaluation management in South Africa and Africa’, edited by Cloete, Rabie and De Coning (2014). In addition, the government has produced a range of documents to support the M&E system, including the National Evaluation Policy Framework, guidelines, standards, competencies, quality assessment processes and training courses, all contributing to the resources available for improving the quality of evaluation practice.

Another aspect of professionalisation is publication. SAMEA has been centrally involved in the establishment of the African Evaluation Journal launched in 2014. Some South African experiences of M&E are captured here, in other sectoral journals such as education, health and public management and more widely in international journals and research reports of international development agencies such as UNICEF and the World Bank to name a few.

Identity formation through professionalisation involves the establishment of a body representing the profession, and in a sense SAMEA has played that role. SAMEA, however, relies heavily on individual members to volunteer their time and expertise on the board of directors responsible for general governance and for arranging the biennial conference. The board members are full-time employees of government, academic institutions, NGOs or private providers, and rely on a part-time administrator and a limited budget to maintain and service the membership of SAMEA. More recently, SAMEA appointed a part-time operations director to attend to the numerous demands on the organisation. There is now a structured form of on-going collaboration between SAMEA, the PSC and the DPME, with a separate memorandum of understanding (MoU) with each government department. This collaboration has assured SAMEA’s ability to successfully run its biennial conferences and sustain its membership. According to the DPME MoU, SAMEA as a national association is an independent voice, a critical friend that provides expert advice to the DPME, the custodian of M&E within government (Basson 2013). Individual board members’ involvement with external bodies has strengthened SAMEA’s international relationships with organisations such as the AEA, the International Development Evaluation Association (IDEAS), the UK Evaluation Society (UKES), UNICEF and AfrEA.

In 2010, SAMEA initiated a Competency Open Forum in Cape Town to engage civil society and government in a discussion on evaluation competencies (Podems & King 2014). This effort did not gain momentum, but more recently SAMEA and DPME have commissioned research into international developments of the professionalisation of M&E to inform the debate and to guide strategies for on-going efforts towards the professionalisation of M&E in South Africa. The activities linked to the debate formed part of the 2015 Year of Evaluation agenda organised by SAMEA and DPME. The conferences, the professional organisation, the growing body of knowledge, organisational linkages, the engaging policy environment and cooperative resolutions mentioned above are signs of the ‘professionalisation’ of M&E in South Africa.

Monitoring and evaluation as an industry

The term ‘industry’ is deliberately used to signal the tremendous growth and application of the M&E field in social development in South Africa. The term becomes appropriate if one considers the economies of scale, the competitive nature of the tender processes, and the political, social and economic ramifications of the involvement of very many stakeholders. M&E practitioners are employed individually or as part of established service providers in most sectors of the economy; they respond to the numerous requests for services in South Africa and beyond.

In addition to M&E developing as a career for some, the relationship between corporate South Africa and the now democratic state has undergone a major transition. Business is expected to play a more supportive role in the social development efforts of government. Corporate social investment (CSI) reflects the growing pressure on corporate South Africa to contribute to social upliftment. CSI spending rose from an estimated R1.5 billion in the 1998/1999 financial year to over R6 billion in 2012/2013 (CSI Handbook 2013). Businesses tend to use NPOs to deliver CSI services in education, health, welfare and general development. They also require rigorous evaluations of development efforts to justify spending and to use the results for marketing and branding purposes.

South Africa has also been selected as a base by a number of international development and aid agencies that operate in the Southern hemisphere and on the rest of the African continent. International service providers, local service providers, university-based research units, individuals based at universities and/or independent consultants all form part of a growing pool of expertise offering training in M&E or engaging in the tender processes linked to M&E activities.

Generally, the industrial sector is known and valued for its emphasis on ‘good quality’. Concepts and processes related to ‘quality assurance’, ‘quality checks’, ‘good standards’, ‘high quality’ as well as ‘value for money’, ‘cost-effectiveness’, ‘efficiency’, ‘branding’ and ‘profit-making’ come from this sector. M&E has taken on aspects of the industrial sector practices to ensure high quality control and improvement in the social development environment.

A different side of the industry is the ‘watch-dog’ institutions such as National Treasury, the PSC, the Public Protector, the Auditor-General’s Office and the DPME itself, which are set up to oversee government’s work, minimise risks, promote accountability and curb corruption. However, the occurrence of corruption remains an on-going challenge for all these institutions.

NPOs, dependent largely on donor funding, must show that they are making a difference. NPOs often work in partnership with government departments to ensure access to communities or other organisations, and sometimes use volunteers to bolster their capacity. Given the high rates of unemployment in South Africa, the lack of adequate housing, the poor healthcare systems experienced by large numbers of people and other social ills, it becomes extremely difficult for one non-government organisation, focusing on one aspect of the lives of one community, to isolate its specific contribution to the well-being of that community. This only adds to the complexity of the evaluation enterprise, particularly at the grassroots level where government services and civil society interventions co-exist within diverse political, cultural and traditional domains.

M&E as governance tool

The intense pressure on the South African government to deliver services to a population in great need began immediately after 1994, when the ANC won the first democratic elections in South Africa. Goldman et al. (2014) point to the public sector reform initiatives introduced through National Treasury, which emphasised efficiency, economy and effectiveness, and the organisational development approach introduced by the Department of Public Service and Administration. The latter attempted to promote a service focus by introducing and implementing performance management systems for public servants.

As early as 1995, the White Paper on the Transformation of the Public Service introduced the concepts of M&E, the purpose being for departments and provincial administrations to develop strategies designed to promote ‘continuous improvement in the quantity and equity of service provision’ (Goldman et al. 2014). The institutionalisation of individual staff performance evaluation resulted from this initiative. The M&E of public policy, however, remained fragmented, undertaken sporadically by line function departments for the purposes of annual departmental reports (Cloete 2009).

The Department of Labour and the PSC were the first official structures to monitor and evaluate government performance and communicate their findings to the various ministries and heads of departments (Naidoo 2012).

In 2005 the Cabinet adopted the Government-wide M&E System (GWM&ES) as a cross-cutting framework for the monitoring and evaluation of the activities of all departments in government. The central underlying purpose was to support effective executive decision-making in support of implementation, and to inform evidence-based resource allocation and on-going policy refinement. The decision to introduce this system was also motivated by the need to report progress against the MDGs, pressure from donors requiring systematic evaluation of projects and emerging international accountability doctrines such as the Paris Declaration (Cloete 2009). The framework was later supported by the National Treasury’s FMPPI and the South African Statistical Quality Assessment Framework (SASQAF). In 2007, the initial GWM&ES proposal was revised and updated (Cloete 2009). The management of the system was the responsibility of the PCAS Unit located in the Presidency.

This system is now firmly in the hands of the Department of Planning, Monitoring and Evaluation (DPME), established in 2010, situated in the Presidency and given the planning responsibility from 2014. Using the National Development Plan’s (NDP) ‘concept of a developmental and capable state’, the DPME promotes performance M&E as one of the key management interventions that can build government capacity and increase the impact of its service delivery initiatives (The Presidency 2014). A key initiative by the DPME to improve government performance was the introduction of an outcomes approach (Phillips 2012). This involved whole-of-government planning linked to key outcomes, clearly linking inputs and activities to outputs and outcomes. The DPME is further mandated to facilitate the development of plans for cross-cutting outcomes of government and to monitor and evaluate these plans. It must also monitor the performance of national and provincial government departments as well as municipalities, and carry out evaluations in partnership with other departments.

The DPME’s custodial role for M&E is reported to be similar to the functions of National Treasury for financial management (FMPPI) and the human resources management responsibility of the DPSA (The Presidency 2014). To this end, it produced a National Evaluation Policy Framework in 2011 with the expressed purposes of improving policy or programme performance, improving accountability, improving decision-making, and generating knowledge for learning. It has established a national forum for the heads of M&E in national departments and a provincial forum for the heads of M&E from the provincial Premiers’ offices, with the intention of sharing information and initiatives.

Some of the reported challenges faced by the system and DPME include inadequate information management systems; lack of a culture of coordination; a public sector focus on activities rather than outcomes; and existing legal frameworks that favour the silo approach (The Presidency 2012).

These M&E developments highlight an intense process aimed at promoting and fostering good governance. The integrated M&E system, predicated on an evidence-based philosophy, aspires to improve the quality of government decision-making and the quality of implementation, outcomes and impacts in South Africa.

It is clearly in the area of governance that the most recent growth in M&E has occurred. Mouton (2010) suggests that, although programme evaluation was introduced to the country by the international donor community, it was not until this practice was accepted by the public sector and institutionalised through the policy mechanisms mentioned above and the accompanying legislative mandates that a culture of M&E emerged.

Critical reflections and conclusions

M&E as a ‘profession’ is growing steadily internationally and very fast in South Africa.

There is, however, no compulsion to join SAMEA. Despite the huge increase in M&E activities, training, studies and publications over the last decade and more, the membership of SAMEA has remained constant at ± 400 since its inception in 2005. New people join every year, but the Association has been unable to sustain longer-term membership. In May 2010, SAMEA had 348 active and 1054 inactive members in its directory (Mouton 2010). Currently there are 401 active members: 36% from government, 31% private, 11% NGO/civil society and 8% academics, with more than 1500 people linked to the listserv. With a larger, sustained and active membership, SAMEA would be able to offer leadership and guidance where appropriate and would be in a better position to serve its membership through engagement with relevant and pertinent professional concerns.

At its inception, SAMEA adopted the African Evaluation Guidelines (AEG), based on the programme evaluation standards used by the AEA, as a checklist to be used to assess and improve the quality of evaluations (Patel 2013). According to Patel, the AEG checklist, organised under Utility, Feasibility, Propriety and Accuracy (UFPA), should be used to assist in planning evaluations, negotiating clear contracts, reviewing progress and ensuring adequate completion of an evaluation. These guidelines are unfortunately not being promoted, popularised, shared, used or enforced. In 2012 the DPME, with the participation of SAMEA, produced ‘Standards for evaluation in government’ (DPME 2014), drawing upon the OECD DAC standards, the Joint Committee on Standards for Educational Evaluation (JCSEE) and the Swiss Evaluation Society (SEVAL). One can also assume that the professionals involved in M&E in South Africa have been exposed to a range of ethics-in-research courses, given the large number of disciplines involved, which may emphasise different aspects depending on the discipline and focus of the research. The growth in M&E activities has therefore resulted in a concomitant growth in ‘standards’ drawn from different sources, each hoping to bolster and support quality evaluations in South Africa. There is a need for current M&E practitioners and training institutions to be aware of the different sets of guidelines or standards, their historical development, their similarities and differences and their utility value in South African contexts.

The judicious use of ethical standards is the ideal, but Schwandt (2008) also warns of the growing threat of ‘technical professionalism’. Technical professionalism, according to him, forgoes the contribution to the public values for which the profession stands, reducing the professional to a supplier of expert services. This kind of evaluation practice can result in society viewing evaluation primarily as a technical undertaking, that is, the successful application of tools, systems or procedures for determining the outcomes or effects of policies and programmes, rather than evaluation being acknowledged as an independent kind of questioning and informed critical analysis.

The promoters of the National Evaluation Plan of 2012 and its forerunner, the GWME framework (2007), should be cognisant of the gap that exists between a policy and its implementation. Within this ‘gap’ there is often a limited understanding of the social problem and a policy design that denies implementing agents adequate opportunity to make sense of the policy. Policy implementation processes should ensure that ‘the policy message is not simply de-coded’ by implementing agents, but rather that there is an active process of interpretation that draws on the individual’s rich knowledge base of understandings, beliefs and attitudes (Spillane, Reiser & Reimer 2002). Top-down (only) policy implementation is often uncritically received and complied with, but is incapable of surfacing the underlying tensions and perceptions that inform day-to-day practices. A top-down model of policy formulation and implementation is further entrenched by the central location of the DPME in the Presidency, according to Latib (2014). He states that, whilst the DPME’s central location allows it to contest policy and decisions across the government system, it can also result in the closing down of policy-relevant dialogue, deliberation and contestation. He argues for an open system that promotes inclusivity and allows for interaction on policies and decisions, which will lead to more effective M&E. This will require a willingness on the part of the DPME to make technical M&E information available in more accessible forms, to share this information widely and to create grassroots forums where M&E findings and results can be interrogated. The DPME’s close collaboration with the PSC and particularly SAMEA will enable government evaluation practices and results to be more widely shared and debated.

The current growth in M&E is welcomed by most individuals who understand the underlying purpose of M&E within a programme or development initiative; who have some grasp of how it fits in with the overall intent of programmes; who understand why particular activities and outcomes are being measured and others not; and who understand the role they have to play in order to reap the benefits of the various M&E systems. However, countries, governments, political parties, provinces, departments, individual contractors, developers and others are more often than not faced with a lack of resources, a lack of capital, a lack of technical skills, and natural and unnatural disasters that impede the planning and implementation of growth and development initiatives. Newly formed local governments in South Africa have struggled to remain within budget for a number of reasons. The national government, through its change strategy documents, provides many instances of how service delivery, such as access to formal housing, has improved dramatically, but it also acknowledges that the government has not delivered optimally in relation to public expectations (Chabane 2012).

The future?

South Africa has grown in leaps and bounds with regard to M&E. The opportunities for learning are plentiful, if not locally then from international institutions. Local texts are being written; more local knowledge is being constructed. More people choose to be involved in this area of work. There is a structure (SAMEA) that can facilitate the bringing together of ideas and information and bring synergy to a disparate field of research. There is space for creative thinking amongst civil society actors, academics, professionals and government. What is lacking, however, both in South Africa and across the world, are examples of GWM&ESs that have been implemented successfully over a long period of time. As complex as these systems are, they are further bedevilled by the political and ideological cycles created by the necessary democratic process of elections every five to seven years, depending on the country and/or level of government.

The present South African government faces the same dilemma. This is also an opportunity for South Africa to nurture, maintain and grow a national evaluation system that is successful over time. South Africa has joined the international debate regarding the professionalisation of M&E. Whatever the outcome of this debate, it will allow divergent perspectives to emerge and new voices to be heard, and will create space for new and creative forms of engagement with the challenges of evaluation. The debate will also influence how M&E as an industry is shaped, how the standards used will enable and guide practice, and how these will improve the quality of evaluations. Ultimately, and for the majority of the population in South Africa, a successful M&E system should result in improved and relevant policies, a responsive public service, better and higher quality service delivery and a vastly improved quality of life for all.

Acknowledgements

This article has been reviewed by a number of people. Each contribution has enriched its scope, depth and quality. It remains a work in progress. Its strengths are a tribute to all those who commented and I take full responsibility for its limitations.

Competing interests

The author declares that he has no financial or personal relationship(s) that may have inappropriately influenced him in writing this article.

References

Babbie, E. & Mouton, J., 2001, The practice of social research (S.A. edition), Oxford University Press, Cape Town.

Basson, R., 2013, ‘South Africa: South African Monitoring and Evaluation Association (SAMEA). Voluntarism, consolidation, collaboration and growth: The case of SAMEA’, in J. Rugh & M. Segone (eds.), Voluntary Organizations for Professional Evaluation (VOPEs): Learning from Africa, Americas, Asia, Australasia, Europe and Middle East, pp. 262−274, UNICEF.

Chabane, C., 2012, Speech by Minister in the Presidency for Performance Monitoring and Evaluation, on the Budget Vote of the Department for Performance Monitoring and Evaluation.

Cloete, F., 2009, ‘Evidence-based policy analysis in South Africa: Critical assessment of the emerging government-wide monitoring and evaluation system’, Journal of Public Administration 44(2), 293–311.

Cloete, F., Rabie, B. & De Coning, C. (eds.), 2014, Evaluation management in South Africa and Africa, SUN PRESS Imprint, Stellenbosch.

CSI Handbook, 2013, An authoritative guide to CSI in South Africa, 16th edn., Trialogue, Johannesburg.

De Vos, A.S. (ed.), 1998, Research at grassroots. A primer for the caring professions, Van Schaik Publishers, Pretoria.

DPME, 2014, Standards for M&E in government, The Presidency, Pretoria.

EES, 2014, European Evaluation Society Biennial Conference in Dublin, Ireland. Session on professionalization, 2−4 October.

Engela, R. & Ajam, T., 2010, Implementing a government-wide monitoring and evaluation system in South Africa, Independent Evaluation Group, The World Bank, Washington, DC. (ECD Working Paper Series no. 21).

Goldman, I., Phillips, S., Engela, M., Akhalwaya, I., Gasa, A., Leon, B., Mohamed, H. & Mketi, T., 2014, ‘Evaluation in South Africa’, in F. Cloete, B. Rabie & C. de Coning (eds.), Evaluation management in South Africa and Africa, pp. 344−371, SUN PRESS Imprint, Stellenbosch.

GWME (Government-Wide Monitoring and Evaluation), 2007, Framework for managing programme performance information. National Treasury, Formeset Printers Cape (Pty) Ltd, Cape Town.

Latib, S., 2014, ‘Bringing politics and contestation back into monitoring and evaluation’, Journal of Public Administration 49(2), 460–473.

Levine, F.J., 2001, ‘Professionalization, certification, laborforce: United States’, in N.J. Smelser & P.B. Baltes (eds.), International encyclopedia of the social and behavioral sciences, pp. 1192−1204, Elsevier, Oxford.

Lodge, T., 1999, South African politics since 1994, David Phillips Publishers, Cape Town.

Louw, J., 1998, ‘Programme evaluation: A structured assessment’, in J. Mouton, J. Muller, P. Franks & T. Sono (eds.), Theory and method in South African human sciences research: Advances and innovations, pp. 255−268, Human Sciences Research Council, Pretoria.

Mouton, C., 2010, ‘The history of programme evaluation in South Africa’, MPhil thesis, Faculty of Arts and Social Sciences, Sociology and Social Anthropology Department, University of Stellenbosch.

Naidoo, I., 2012, ‘Monitoring and evaluation in South Africa. Many purposes, multiple systems’, in M. Segone (ed.), From policy to results. Developing capacity for country monitoring and evaluation systems, pp. 303−322, UNICEF.

Patel, M., 2013, ‘African evaluation guidelines’, African Evaluation Journal 1(1), 5. http://dx.doi.org/10.4102/aej.v1i1.51

Patton, M.Q., 2001, ‘Evaluation, knowledge management, best practices, and higher quality lessons learned’, American Journal of Evaluation 22(3), 329–336. http://dx.doi.org/10.1177/109821400102200307

Phillips, S., 2012, ‘The Presidency outcome-based monitoring and evaluation approach’, PSC NEWS. Official magazine of the Public Service Commission, February/March.

Podems, D. & King, J.A., 2014, ‘Professionalizing evaluation: A global perspective on evaluator competencies’, The Canadian Journal of Program Evaluation 3, 1−4.

Potter, C., 1999, ‘Programme evaluation’, in M.T. Blanche & K. Durrheim (eds.), Research in practice, pp. 409−428, University of Cape Town Press, Cape Town.

Potter, C. & Kruger, J., 2001, ‘Social programme evaluation’, in M. Seedat, N. Duncan & S. Lazarus, (eds.), Community psychology: Theory, method and practice, pp. 189−211, Oxford University Press, Cape Town.

Rugh, J. & Segone, M. (eds.), 2013, Voluntary Organizations for Professional Evaluation (VOPEs): Learning from Africa, Americas, Asia, Australasia, Europe and Middle East, UNICEF.

Schwandt, T.A., 2008, ‘Educating for intelligent belief in evaluation’, American Journal of Evaluation 29(2), 139–150. http://dx.doi.org/10.1177/1098214008316889

Scriven, M., 2001, ‘Evaluation: Future tense’, American Journal of Evaluation 22(3), 301–307. http://dx.doi.org/10.1016/S1098-2140(01)00154-0

Sing, L., 2004, ‘Building skills for development evaluation’, PSC NEWS: Official magazine of the Public Service Commission, November/December issue, pp. 44−46.

Smith, M.F., 2001, ‘Evaluation: Preview of the future #2’, American Journal of Evaluation 22(3), 281–300. http://dx.doi.org/10.1177/109821400102200302

Spillane, J.P., Reiser, B.J. & Reimer, T., 2002, ‘Policy implementation and cognition: Reframing and refocusing implementation research’, Review of Educational Research 72(3), 387–431. http://dx.doi.org/10.3102/00346543072003387

Swilling, M. & Russell, B., 2002, Size and scope of the non-profit sector in South Africa, Graduate School of Public and Development Management, WITS.

The Presidency, 2012, National evaluation policy framework, Department of Performance Monitoring and Evaluation, Pretoria.

The Presidency, 2014, Performance monitoring and evaluation: Principles and approach, Department of Performance Monitoring and Evaluation, Pretoria.

Footnote

1. Some authors highlight the distinct differences between monitoring and evaluation, whilst others point to the programmatic nature of the research activity that requires both foci. M&E, as used in this review, encompasses both types of activities with the recognition of the multiple purposes to which it can be applied.


 
