Article Information

Authors:
Futhi Umlaw1
Noqobo (Nox) Chitepo1

Affiliations:
1Department of Planning, Monitoring and Evaluation, South Africa

Correspondence to:
Futhi Umlaw

Email:
futhi@presidency-dpme.gov.za

Postal address:
PO Box X944, Pretoria 0001, South Africa

Dates:
Received: 30 Mar. 2015
Accepted: 14 Aug. 2015
Published: 30 Sept. 2015

How to cite this article:
Umlaw, F. & Chitepo, N., 2015, ‘State and use of monitoring and evaluation systems in national and provincial departments’, African Evaluation Journal 3(1), Art. #134, 15 pages. http://dx.doi.org/10.4102/aej.v3i1.134

Copyright Notice:
© 2015. The Authors. Licensee: AOSIS OpenJournals.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

State and use of monitoring and evaluation systems in national and provincial departments
In This Original Research...
Abstract
Introduction
Background
Research method and design
Results
   • Enabling institutional environment
      • Organisation of monitoring and evaluation
      • Staffing of dedicated units
      • Dedicated budget allocations
      • Roles and responsibilities
      • Integration of systems
      • Barriers, incentives and organisational culture
      • Capacity
      • Information systems
      • Support for further development and sustaining an enabling environment for monitoring and evaluation
   • Indicators
      • Quality of indicators
   • Reporting
      • Who receives the reports?
      • Level of duplication
      • Effectiveness of reporting
      • What is reported?
      • Adequacy of information for reporting
      • Methods of communicating the results
   • Link between planning and monitoring and evaluation
      • Use of monitoring and evaluation in decisions at each phase of the planning and policy cycle
      • Use of evidence from monitoring and evaluation
Conclusion from the survey
Limitations of the study
Conclusion and recommendations
Acknowledgements
   • Competing interests
   • Authors’ contributions
References
Abstract

Since 2009, South Africa has seen a major shift in emphasis concerning monitoring and evaluation (M&E) systems. The shift was partly stimulated by a number of pressures facing the South African government, key amongst which were persistent poverty and inequality and widespread service delivery protests. These pressures resulted in a greater willingness by government to address the poor quality of public services and other governance problems, which in turn required a stronger focus on M&E. This led to the establishment of the Department of Performance Monitoring and Evaluation (DPME) in early 2010. The DPME conducted a comprehensive survey on the state and use of M&E systems in national and provincial government in an attempt to understand the M&E landscape that has developed since 1994. The results were used to make informed policy and programme decisions. This paper outlines the findings of the survey.

Introduction

The survey on the state and use of M&E systems was conducted with 96 national and provincial departments to provide a descriptive baseline on the underlying components of an M&E system. Survey questions were based on the understanding that all spheres and sectors of government are expected to extend their capacity to collect, analyse, use and disseminate reliable information on what they achieved, in order to enhance accountability and learning, establish reliable evidence required to improve achievement, and provide a better platform for the coordination of effort across institutional boundaries.

The qualitative and quantitative survey generated sufficient detail for a meaningful picture of the status and use of M&E to emerge. It covered a range of issues designed to provide information on:

  • the extent to which an enabling environment has been established
  • specific systematic and design issues related to the use of M&E and its links to policy development, planning and management decision-making
  • detailed areas of practice, specifically the formulation and use of indicators and reporting.

The findings provided a descriptive picture that formed part of a situational analysis for the DPME’s strategic planning process. The key finding is the fairly wide discretion that departments have to design, organise, resource and use M&E, which has resulted in diversity within spheres, sectors and even departments in terms of policy, approach, concepts, frameworks and organisational arrangements.

South Africa has seen a major shift in emphasis concerning monitoring and evaluation (M&E) practices since 2009. The South African government that came to power following the 2009 elections faced a number of pressures, which included persistent poverty and inequality, and widespread service delivery protests at the municipal level. The pressure to improve service delivery, and extensive exposure to similar international contexts, emphasised the need for institutionalised M&E capacities, systems and practices that may inform policy and programme decisions and thereby improve service delivery and alleviate development problems. In 2012, the DPME, in collaboration with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), conducted an assessment of the state and use of M&E systems of national and provincial government departments. This paper presents the results of the comprehensive survey that offers critical understanding of the current public sector M&E landscape. The results may inform future decisions to enhance institutionalised M&E capacity, systems and practices.

Background

The Department of Performance Monitoring and Evaluation in the Presidency (DPME) was established on 01 January 2010 to ensure that government performance makes a meaningful impact on the lives of the people in South Africa. The mission of the DPME is to work with partners to improve government performance in achieving desired outcomes and to improve service delivery through changing the way government works (Department of Performance Monitoring and Evaluation 2009).

Since its creation, the DPME has focused on a limited number of cross-government outcomes that streamline all efforts to promote change. A number of distinct roles were created within the DPME. These include:

  • assessing the management performance of departments
  • development of an evaluation policy
  • robust M&E related to the achievement of outcomes
  • hands-on monitoring by an inspectorate which has led to the approach of front-line service delivery monitoring
  • the recent initiative to strengthen oversight of local government and to identify appropriate support strategies where its performance is poor.

The broader focus of the department is to facilitate, influence and support effective planning and M&E of government programmes aimed at improving service delivery, outcomes and impact on society.

In an effort to strive for continuous improvement, it became critical for the department to understand the environment in which it was operating and to ascertain the level of maturity of the public sector in terms of its M&E systems and practices.

Consequently, in 2012 the DPME, in collaboration with GIZ, conducted a study on the state and use of M&E systems of national and provincial government departments.

This study was to be understood in relation to other results-orientated prescripts, initiatives and assessments, both institutional and programmatic, which had been undertaken by other institutions in government. Since 1994, M&E has been introduced to government as part of a series of reforms to strengthen its systems and operations, backed by a range of statutes and other prescripts:

  • The Department of Public Service and Administration (DPSA) introduced an employee Performance Management and Development System (PMDS).
  • National Treasury, through its regulations, introduced the use of output targets and performance reporting against these in departmental strategic plans, annual performance plans (APPs) and annual reports. This regulation is supported by various National Treasury guidelines on the formulation of performance targets and reporting against these, such as the Framework for Managing Programme Performance Information (FMPPI). These guidelines are results-based and require departments to identify activities leading to outputs, outcomes and finally impacts on citizens. The National Treasury guidelines emphasise the need for strong logical links (or theories of change) between the activities and the intended outcomes and impacts.
  • The Auditor General followed by auditing reported performance against the pre-determined objectives in the APPs, as part of the annual audit of departments, which is included in the annual report of departments.
  • In 2005, Cabinet adopted the government-wide M&E system (GWMES) and in 2007, the Presidency released the Policy Framework on the GWMES. The GWMES framework is supported by National Treasury’s FMPPI, Statistics South Africa’s South African Statistical Quality Assessment Framework (SASQAF), and the 2011 National Evaluation Policy Framework (NEPF) produced by the DPME.
  • The GWMES focused on the coordination of stakeholder M&E systems. It was later complemented by a document proposing basic M&E principles to underpin the institutionalisation and implementation of M&E in government. Whilst this basic-principles document should arguably have been produced first, government had to go through the experience of the GWMES and various M&E initiatives, including the outcomes system, to identify the need for it.
  • The Public Service Commission, an organ of state that promotes good governance through M&E within the public sector, has also been instrumental in shaping the M&E arena in government and has been responsible for a number of guidelines on M&E as well as institutional assessments and programme evaluations.
Research method and design

In November 2012, a comprehensive study of the state and use of M&E systems in 96 national and provincial departments was conducted via an electronic-based survey questionnaire.

The survey targeted officials responsible for M&E in all national and provincial departments. The sample consisted of officials in the senior management service and M&E specialists (71%), who were included because of their expert knowledge of M&E, their extensive experience (80% served eight or more years in the public service), and their good grasp of the work of their departments.

A document review and analysis was conducted to establish the research design. The core documents identified by the DPME included the following prescripts and guidelines on M&E applicable to the public service, to which departments are expected to conform:

  • The policy framework for government-wide M&E (DPME 2007).
  • The Framework for managing programme performance information (National Treasury 2007).
  • The role of Premiers’ Offices in government-wide monitoring and evaluation: A good practice guide (DPME 2008).
  • From policy vision to operational reality: Annual implementation update in support GWME policy framework (DPME 2009a).
  • Improving government performance: Our approach (DPME 2009b).
  • The South African Statistical Quality Assurance Framework (SASQAF) (Statistics South Africa 2010).
  • The National Evaluation Policy Framework (DPME 2011).
  • The National Development Plan 2030: Our future – Make it work (National Planning Commission 2012).
  • Generic functions of an M&E component in national government departments (DPME 2012a).
  • Generic functions of monitoring and evaluation components in the Offices of the Premier (DPME 2012b).
  • Generic roles and organisational design considerations for M&E components in provincial government departments (DPME 2012c).

A scan was also made of Kusek and Rist's (2004) Ten steps to a results-based monitoring and evaluation system to establish additional dimensions and areas that would be worth including in the study. Pre-specified dimensions and areas that formed the basis for the study included audit frameworks developed by the DPME.

The survey was administered through an electronic-based questionnaire that encompassed the following four components:

  • Enabling institutional environment.
  • Indicators and information planning.
  • Reporting.
  • Link between policy and planning and use of M&E information.

The questionnaire was based on the reviewed DPME policy frameworks listed above. The final questionnaire was divided into seven sections and included 97 questions, most of which were closed, requiring choice from a fixed set of options. The questionnaire was digitised and responses were analysed using the Microsoft Excel and SPSS statistical packages.
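
As an illustration of the kind of descriptive analysis applied to the closed questions, the following minimal sketch tallies fixed-option responses into percentage frequencies of the sort reported in the Results section. It is written in Python with pandas purely for illustration (the study itself used Microsoft Excel and SPSS), and the column names and response codes are hypothetical rather than taken from the actual instrument.

   import pandas as pd

   # Hypothetical extract of the digitised closed-question responses:
   # one row per responding department, one coded answer per question.
   responses = pd.DataFrame({
       'dedicated_unit': ['yes', 'yes', 'no', 'yes', 'yes'],
       'vacancy_band': ['0%', '41-60%', '61-80%', '0%', '1-20%'],
   })

   # Percentage frequency table for a single closed question.
   freq = (responses['dedicated_unit']
           .value_counts(normalize=True)
           .mul(100)
           .round(1))
   print(freq)  # e.g. yes 80.0, no 20.0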

The core questions and results from the four components of the survey are presented in the following section.

Results

Enabling institutional environment

This section focuses on key features of the organisational environment that are commonly regarded as important enabling conditions for M&E. It provides a general picture of the extent to which an enabling environment for M&E has been established in departments.

The following question was posed to assess the enabling institutional environment: ‘Is there an enabling institutional environment for M&E in the department, province, sector or public service?’

Responses to the question are grouped into the following areas:

  • Organisation of M&E.
  • Staffing of dedicated units.
  • Dedicated budget allocations.
  • Roles and responsibilities.
  • Integration of systems, specifically policy development, planning, budgeting and reporting, as well as performance management.
  • Organisational culture; approach to M&E; values, incentives and barriers.
  • Information systems, specifically their reliability and the technology used.
  • Capacity of senior managers.
Organisation of monitoring and evaluation

The majority of departments (89%) have a dedicated unit for M&E that is staffed by senior officials at director (D), chief director (CD) or deputy director general (DDG) level. The distribution of responses on the alignment of the dedicated M&E unit with the functions of research, policy, planning and other functions is set out in Figure 1.

FIGURE 1: Distribution of responses on alignment of monitoring and evaluation Unit.

In those departments that have a dedicated unit for M&E, 75% have combined the unit with planning, whilst only 45% have combined it with policy and 34% with research (Figure 1). Just under a third of departments with a dedicated unit have more than one M&E unit.

Staffing of dedicated units

In departments that have a dedicated M&E unit, there is a very wide variation in unit staffing levels, ranging from no posts allocated to 140 posts, with the majority of departments reporting a post allocation of 10 staff members or below. It is possible, however, that different departments interpreted this question somewhat differently, depending on the extent to which the M&E unit is seen as a distinct, dedicated unit.

A third of departments (34%) reported that all allocated posts were filled at the time of the survey. Twenty-one per cent indicated a vacancy rate of between 41% and 60%, and 39% reported a vacancy rate in the 41% to 80% range, which is high. In essence, this means that about half of all dedicated M&E units had significant unfilled posts (Figure 2), which would affect their ability to perform at optimum.

FIGURE 2: Distribution of vacancy rates for monitoring and evaluation posts.

The survey shows that the most senior official responsible for M&E in departments is a director (38%) or CD (30%). This responsibility is carried by DDGs only in very few departments (6%) and hardly ever (2%) by assistant directors. An official from the senior management service (DDG, CD or D) leads the M&E unit in 74% of the departments.

Dedicated budget allocations

Sixty per cent or more of departments did not have a dedicated budget for research or evaluations. Where there is no dedicated budget, the authors are concerned that M&E may be amongst the first casualties when budgets are cut, even though research and evaluation are often especially crucial in situations where shortages require increased attention to the strategic use of resources. Departments that did have a dedicated budget allocated it specifically to the costs of collecting, analysing, communicating and maintaining M&E information (60%), the verification of information (54%) or the development of information systems (52%). The lack of a dedicated budget for the remaining roughly 40% may correlate with some of the gaps and weaknesses identified in information systems. Most departments indicated that they had very limited information system technology, yet nearly half had no dedicated budget to develop the system further.

Roles and responsibilities

An M&E unit in a government department has to perform certain roles related to the M&E function, as stipulated in the DPME’s core M&E documents. These include playing an active role in M&E of policy, programmes and projects; establishing and running performance information systems within their sections; using performance information to make decisions; and reporting and analysing the performance of their units. The responses in this section indicate the expectation that line managers should play an active role in M&E.

Twelve departments indicated that the responsibility for the design and management of indicators, and for data collection, collation and verification processes within the department is not clearly allocated to specific officials. Three of these departments did not have a dedicated M&E unit. However, almost all departments indicated that line managers are expected to play active roles in providing and using M&E information, specifically through M&E, for their areas of responsibility (93%) and in reporting analytically on the performance of their units (93%). Only 3% of departments stated that line managers are not specifically expected to use performance information to make decisions. Seventy-six per cent of departments indicated that line managers are expected to play an active role in establishing and running information systems.

One of the key barriers to establishing effective M&E practice is the absence of shared models, norms, roles, responsibilities and standards. Almost all (between 93% and 97%) departments noted that M&E is regarded as a responsibility of line managers. However, it is also noted that this responsibility is not adequately formalised, and that in under half of all departments, the line managers lack key knowledge, skills or understanding required to fulfil the expected role.

Integration of systems

Policy development, planning, budgeting and reporting: Most departments reported full integration of M&E with reporting (72%) and planning (61%). However, only 26% reported this for policy development and even fewer (20%) for budgeting. Nearly half of the respondents (46%) regarded integration with policy development as either non-existent (22%) or very limited (24%). The picture for budgeting is less extreme: 11% regarded integration as non-existent and 21% as very limited, whilst 48% of departments reported that integration is limited.

This suggests that the focus of M&E is generally monitoring outputs at operational level rather than enabling departments to probe the effectiveness of strategy and policy in terms of the outcomes and impact resulting for the public. It also suggests that whilst line managers are expected to play an active role in M&E and in using M&E information for decision-making, in almost all departments this responsibility may well be focused on planning, output monitoring and reporting. This implies a relatively limited role, centred on ‘checking’ implementation and reporting against pre-specified outputs, rather than on analysing contributions towards outcomes and impact as a basis for adaptive management and learning.

Performance management: The majority of respondents (66%) indicated that organisational and individual performance expectations are either consistently linked (28%) or usually linked (38%). In a third of departments (34%), there is a link between organisational and individual performance assessments, but it is inconsistent. The weakness of this link was highlighted by some as a key barrier to the effective use of M&E.

However, whilst 47% believed individual performance assessments and organisational performance were usually (34%) or consistently (13%) linked, 53% reported that such a link existed only sometimes (35%) or not at all (18%). This suggests that, potentially, in 87% of departments (all except the 13% reporting a consistent link) an individual performance assessment may take place that is not consistently linked to organisational performance. It should be noted that this question assumes that departments have a formal performance assessment system in place.

The gap between how consistently individual and organisational performance expectations are linked and how consistently the subsequent performance assessments are linked is marked enough to suggest that further investigation would be useful.

Barriers, incentives and organisational culture

Key system-based barriers are likely to affect the capacity to implement the M&E function. These barriers are mostly oriented to internal controls and M&E operational issues rather than to the strategic analysis of contributions to public outcomes and policy development, or to the capacity of the actual M&E function.

Rewards and incentives: The majority of departments (77%) use financial rewards to recognise good performance. This can be of concern if individual achievement is not linked to organisational achievement and money is thus spent without the organisation necessarily showing positive results. Only 12% to 18% indicated that they use public acknowledgement and awards, promotion and access to opportunities for further study or learning as rewards for good performance. In 15% of departments, no standardised approach is used and in 6%, no rewards are given. Time off is very seldom used as a reward. The results show a very high level of reliance on one method of rewarding good performance.

Regarding performance that is below or above expectation, responses indicated that the least likely response to poor performance is hiding the information that would show this. Seventy-seven per cent of the respondents also indicated that ignoring results is an unlikely response. These are positive indications as high levels of such practices threaten any attempt to institutionalise effective M&E-based decision-making. However, 23% of departments indicated that the general practice in their departments is to ignore poor results. This would need to be addressed as a priority as it does not matter how good the established M&E system is in collecting and reporting relevant information, if it is simply ignored.

The majority of departments (82%) highlighted that holding the responsible official accountable for results, as well as learning and improvement planning, are likely responses to performance that is lower than expected. This is closely followed by diagnostic analysis, organisational learning, improvement planning and appropriate action as the next most likely response (79%). Fifty-seven per cent identified analysis of the reasons for lower levels of performance, but without improvement planning or appropriate action, as a likely response. More than half of the respondents indicated that learning would be documented to be used for improvement (55%), but a similar percentage reported that managers tend to reject the results (54%). Additional responses to the open-ended questions suggest that punitive responses may predominate, followed by a requirement for remedial action to be taken, with few suggestions of systemic and diagnostic analysis of possible causes as the basis for remedial action. In general, the responses suggest that the manager is held responsible for performance below expectations, but uses M&E selectively for decision-making. There can be little doubt, however, that a better understanding of this behaviour and a more nuanced picture of the variation in practice within departments would be valuable.

Responses to information on under-performance relative to expectations indicate a widespread practice of taking cognisance of these results but ignoring their implications. This suggests a very difficult environment for M&E and has implications for capacity-building interventions. Current M&E system-based and culture-related challenges are unlikely to be addressed through an isolated focus on the M&E unit’s capacity or on the individual skills development of line managers.

Organisational system-based barriers suggest that key challenges lie outside the capacity of the M&E function itself. These include, firstly, the fact that the accountability system is oriented to internal controls rather than to the analysis of contribution to public outcomes and impact. Secondly, M&E focuses on operational issues rather than on strategic analysis and policy development. The problem may lie in the frameworks that shape planning, reporting and M&E in the public service, which lead to an over-emphasis on tracking activities and outputs. The frameworks do not appear to encourage departments to link the focus of M&E on internal operational issues with a focus on their public value and delivery.

Barriers to the effective use of M&E for decision-making and accountability: The survey raised a question regarding the major barriers to the effective use of M&E for decision-making, learning and accountability in departments. The responses to this question are analysed in two groups: barriers that primarily arise from issues related to M&E systems and those that are related to organisational culture and/or values.

The responses in relation to M&E system-based barriers to the effective use of M&E for decision-making, learning and accountability are relatively mixed. The majority of departments (61%) identified the focus on activities and outputs, rather than outcomes and impact, as a primary barrier to the effective use of M&E for decision-making, learning and accountability. Forty-eight per cent believed a lack of resources allocated to M&E is a barrier, and 41% were of the opinion that a lack of relevant information when needed is to blame.

A third of departments (33%) cited as key barriers that they spend too little time on M&E, that they lack reliable and comprehensive information, that they have weak departmental capacity, and that decisions are made under too much pressure to enable the effective use of evidence.

The responses to M&E system-based barriers suggest, therefore, that for the majority of departments key challenges lie outside the capacity of the actual M&E function. The focus of M&E is on operational issues and not on strategic analysis and policy development. This impression is sufficiently widespread to suggest that part of the problem may lie in the frameworks that shape planning, reporting and M&E in the public service, which may be leading to an over-emphasis on tracking activities and outputs rather than impact.

Responses to questions related to culture and/or value-based barriers to the effective use of M&E for decision-making, learning and accountability are distributed rather than clustered in one or two areas (Figure 3). The responses, however, point to challenges in how M&E is understood, integrated into departmental systems and used by senior managers, rather than to problems with the acceptance of M&E. The culture-related barriers listed include little respect for evidence-based decisions, results being ignored, and responsible officials not being asked to explain results that do not meet expectations. Departments do not appear to take the measurements and assessments seriously; otherwise they would act on them.

FIGURE 3: Distribution of responses on culture or values-related barriers.

Another significant barrier that was mentioned by more than half the respondents is that problems are not treated as opportunities for learning and improvement. This is linked to senior management failing to champion M&E, and M&E being regarded as the responsibility of the M&E unit rather than that of all managers. In the absence of a strong M&E culture, perceptions of M&E as policing, controlling and yet having limited influence are also mentioned as barriers to implementation.

A preference for relatively consistent evidence-based decision-making is a key element in establishing an enabling environment for effective M&E. Nineteen per cent of respondents reported that evidence-based decisions are consistently preferred in their departments and a further 44% indicated that such decisions are generally preferred. However, a third (34%) indicated that this is seldom the preference, whilst 3% indicated that it is never the preferred basis for decisions. This means that 63% rely on evidence for decisions, whilst the remainder have not bought into using facts or evidence.

Capacity

Capacity of senior managers: It is noteworthy that no department admitted to not understanding the importance of M&E or basic methods of data collection. Only 3% of the respondents rated their senior managers’ capacity to use M&E information as non-existent; a further 1% doubted their managers’ ability to analyse collected data. This suggests that urgent attention needs to be given to these four departments, despite the low percentage they represent in the overall data set. It is also a probable signal for further investigation and urgent action that between 40% and 50% of departments reported that senior managers seldom have the capacity to understand basic methods of data collection, to use M&E information effectively for management decisions or to analyse the data collected. Thus, departments believe they understand what the data says, but do not know what to do with or about it.

Institutional capacity: Respondents reported that the use of M&E to inform key decisions is actively promoted (56% ‘mostly in place’ or ‘well institutionalised’) and that functioning consultation processes to ensure that the information needs of users are met are ‘mostly in place’ or ‘well institutionalised’ (49%) (Figure 4). This picture is reversed with regard to the institutionalisation of mechanisms for sharing knowledge (79% ‘not at all’ or ‘limited’) and the existence of an accessible electronic information system (84% ‘not at all’ or ‘limited’). In these two areas, namely mechanisms for sharing knowledge (33%) and an accessible electronic information technology (IT) system (51%), departments reported that enabling capacities do not exist at all. This would mean that most information is still shared on a personal level.

FIGURE 4: Distribution of responses on extent to which departments have institutionalised capacity in a number of areas.

The fact that a slight majority of departments (51%) did not have ‘any’ (18%) or had only ‘limited’ (33%) institutionalised consultation processes on information needs in place is a significant indicator of weakness in the enabling environment for M&E. Overall, the results suggest that M&E is actively promoted in about half of all departments, but there are significant gaps in institutionalised capability to support its effective access and application.
Information systems

Reliability: Around half of departments rated their information systems as generally reliable for collecting (56%), collating (55%), verifying (45%) and storing (42%) the information needed for effective M&E. Reliability in all four components of the system would be needed for overall reliability: the collection and collation of information is compromised if there are no processes for maintaining its integrity through reliable verification and storage. Thirty-five per cent rated the reliability of collection as non-existent or limited, 32% the collation, 41% the verification and 43% the storage of information.

Technology: The majority of departments (64%) appear to rely on spreadsheet software to store and manage data. Twenty-eight per cent reported having established a central data repository that can be accessed by stakeholders. A possible explanation for the challenges departments reported with information systems is that only 18% of departments appear to have a system for integrating the input, collation and storage of information. This suggests they still transfer and collate information manually (77%) across the system with all the challenges for maintaining data integrity that this entails. Only 19% of the departments use statistical packages for data storage and processing. The responses to this question suggest that spreadsheets would need to be in standard use for information that needs to be collated across the public service in order to ensure compatibility of formats, at least until alternative technologies are more widely used in the public service.
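
To illustrate the manual collation burden described above, the following sketch combines several departmental spreadsheet returns into a single dataset. It is a minimal, hypothetical Python/pandas example: the directory, file layout and column structure are assumptions for illustration, not a description of the systems departments actually use, and an integrated repository of the kind few departments reported would remove the need for this step altogether.

   from pathlib import Path

   import pandas as pd

   # Hypothetical quarterly returns: one spreadsheet per department,
   # each with the same columns (e.g. indicator, target, actual).
   returns_dir = Path('quarterly_returns')

   frames = []
   for workbook in sorted(returns_dir.glob('*.xlsx')):
       frame = pd.read_excel(workbook)      # reading .xlsx requires openpyxl
       frame['department'] = workbook.stem  # keep the source department visible
       frames.append(frame)

   # A single collated dataset; every manual copy-and-paste step this
   # replaces is a point at which data integrity can be lost.
   collated = pd.concat(frames, ignore_index=True)
   collated.to_csv('collated_performance_data.csv', index=False)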

The weakness of information systems is a key barrier. The collation, verification and storage of information, and the reliability of information collection, are limited. Some of the limited reliability can be attributed to the current information collection systems. Few departments have systems for integrating the input, collation and storage of information in an effective manner.

Support for further development and sustaining an enabling environment for monitoring and evaluation

Responses by departments to the open-ended question regarding what assistance would help in further developing and sustaining an enabling environment for M&E include the following:

Enhancement of capacity building initiatives:

  • An enabling environment for effective M&E, specifically a deeper understanding of its potential amongst political and administrative leadership; increased ability to facilitate and use effective M&E at senior management level; the application of coherent norms, systems and frameworks across the public service.
  • Adequate allocation of staff, at the right levels and with appropriate skills, defined and based on standardised roles and responsibilities for M&E.
  • Training for all staff related to standardised roles and responsibilities for M&E.
  • Training customised to specific sectors, technical and professional environments such as health, social development and education.
  • Adequate budget allocations.
  • Accessible web-based systems.

Standardisation of the following across the public service:

  • Web-based reporting system, software and IT platform for internal departmental reporting, but also across the public service to ensure alignment in implementation and achievement towards common goals, objectives and outcomes.
  • Norms and standards related to M&E structures, staffing, IT budgets.
  • Understanding of and approach to M&E.
  • Frameworks for planning, policy development and research to enable an effective link to M&E and integration between departments and spheres.
  • Alignment of planning, M&E, policy and research.
  • Standardised training for all, informed by standardised roles and responsibilities for M&E.
  • Training for senior managers based on uniform responsibility for M&E.
  • Improved evaluation plans.
  • Reliable, accessible information systems.
  • Improved information quality.
  • Mobilisation around the importance of M&E at all executive and administrative levels.
  • Establishment of viable information-sharing forums.
  • Communication of feedback from oversight bodies – the Management Performance Assessment Tool (MPAT) and the Auditor General’s office have been effective in paying more attention to M&E.
Indicators

Departments were asked to indicate if they had indicators and targets integrated into their APPs to measure and monitor various aspects of their operations, including availability of financial and human resources, institutional capacity to meet targets, improvements in management practice, front-line service delivery, risks and outcomes.

A higher percentage of departments are collecting comprehensive information on activities (66%) and outputs (59%) than on outcomes (31%), impact (25%) and customer satisfaction levels measured amongst those the department serves (16%) (Figure 5).

FIGURE 5: Distribution of responses on levels of information available from indicators.

One factor that could be feeding this focus on activities and outputs is the predominant involvement of internal stakeholders (officials) in deciding what information is important (between 70% and 90% include these stakeholders) and the very limited role given to those who are intended to benefit from the services (the public [7%] and the intended beneficiaries [31%]). It is apparently comforting for departments to measure activities as conforming to the demands made, without having to account for whether such activities lead to ultimately positive change in society.

Regarding the question of who is generally involved in deciding what indicators and targets should be used for M&E of results, the findings suggest that in the majority of situations, the official responsible for the result area and the relevant team are the determining voices in the decision on what is targeted and measured. Some departments (38%) have made some effort to include the public or those expected to benefit in determining the indicators to be used (Figure 6).

FIGURE 6: Distribution of responses on who is primarily involved in deciding what indicators and targets to use.

This is significant as most indicators are not purely technical and need to be based on the values and needs of the public. Outcome and impact indicators, in particular, rest on a choice of ‘value’: what will be used to judge success or effectiveness. Given the strong policy orientation to consultation in the South African public service, these results suggest that more attention needs to be given to the inclusion of these groups in determining what to measure. Another noteworthy result is the high level of inclusion of transversal departments (74%), consultation with partner organisations (61%) and partner organisations (48%).

Quality of indicators

Departments were asked to indicate the adequacy and quality of M&E information, in an effort to determine the balance of quantitative and qualitative indicators. The question was asked whether information enables departments to adequately assess both the quantity and quality of the results achieved at each level of planning.

The distribution of responses for this question indicates a lack of an adequate balance between measures of quantity and quality as the levels of complexity increase from outputs to impact. Seventy-five per cent of departments achieve this balance adequately or very well for outputs, but this declines to 54% for outcomes and 36% for impact. As highlighted in previous sections, attention is concentrated on outputs, which are believed to be achieved regardless of their complexity or quantity; their impact, however, is low considering the amount of effort and resources invested in the M&E system. Responses on the perceived comprehensiveness of the information collected are somewhat at odds with the responses reported here: half the respondents indicated that indicators enable them to have comprehensive information on outputs, 31% for outcomes and 25% for impact. These percentages are more in line with the other responses regarding the adequacy and reliability of information available for M&E. It is possible that respondents believed that no adequate measures of impact on recipients, society or the public were available, or practical for them to complete.

It is important to note that only 1% of departments indicated that there was no balance of quantitative and qualitative indicators at the level of outputs. This suggests a very high level of basic compliance with developing indicators at activity and output level and that a significant number of departments think that this compliance automatically means that outcomes are also being achieved ‘adequately’ or ‘very well’ (see Figure 7).

FIGURE 7: Distribution of responses on whether indicators enable assessment of quantity and quality of results.

In terms of quality of information collected and its reliability for decision-making and accountability, the responses seem to be rather impressive. Respondents judge the quality of the information collected through indicators as a reliable basis for decision-making. Only 3% reported poor quality of information. However, only 2% reported excellent quality.

Despite the limitations in comprehensiveness and balance of qualitative and quantitative information noted in responses, 61% regarded the quality of the information as a basis for reliable decisions as ‘good enough’. This raises questions about the norms and standards being applied by respondents in many of the questions requiring value judgments to be made in the absence of a consistent set of norms and standards for the public service. Just over a third of respondents (34%) indicated perceived limitations in the quality of their information (Figure 8).

FIGURE 8: Perceptions of the quality and reliability of information for decision-making and accountability.

The question was asked whether data were verified independently, at each level of planning, to ensure accuracy and reliability. Responses indicate that data quality and accuracy are an ongoing concern in the public sector. Findings in this regard possibly reflect the fact that inputs, activities and outputs are all subject to very specific reporting requirements and audits of one kind or another. Between 70% and 78% of respondents indicated that external verification takes place for input, activity and output data. The corresponding 22% to 30% who answered ‘no’ regarding independent verification for these three areas of data may reflect varying interpretations of the term ‘independently verified’, which implies external verification. The absence of such verification requirements for outcome and impact data is one possible reason for the drop in the number of departments verifying outcome data (58%) and the further significant drop in the number of departments that report verifying impact data (38%). These results indicate that departments may have to work on providing, improving and verifying outcome and impact data (see Figure 9).

FIGURE 9: Proportion of responses on what data is independently verified.

Reporting

The findings on reporting have been organised into the following subsections:

  • Who receives reports?
  • Level of duplication.
  • Effectiveness of reporting.
  • What is reported?
  • Adequacy of information for reporting.
  • Methods of communicating results.

These results are presented below.

Who receives the reports?

Four relatively distinct groupings of report recipients emerged. Reports always go to executive authorities (75%), Parliament (65%) and cluster meetings (50%), followed by statutory bodies (40%), general staff of the department (36%) and transversal departments (35%). The public (25%), civil society (9%), donors (8%) and international agencies (5%) are seldom regular recipients of reports.

Level of duplication

The level of duplication between reports on performance that the department has to provide to different stakeholders is claimed to be moderate (39%) to high (32%), whilst the level of duplication between reports on administrative and management practices that a department has to provide to transversal departments is even higher (moderate 43% and high 37%).

Effectiveness of reporting

The basic allocation and execution of responsibility for providing routine reporting is in place (80% to 91%), but those receiving the reports are not satisfied (77%). Senior management does not consistently discuss reports on management practices (26%) and this suggests that these reports are compiled by a few officials and submitted for compliance, rather than used as a tool for assessing and improving management practice. Information is collected and submitted to external transversal departments, before it is discussed by those who should act on the information. This suggests the need for promoting a more active engagement with the information by internal managers rather than extracting selected information.

Reports that are based on the APP are considered a valuable (54%) or even extremely valuable (23%) basis for reviewing strategy and plans. Whilst most departments indicated that there is adequate information for monitoring activities and outputs (67%), far fewer indicated this for evaluating efficiency (36%), effectiveness (35%), sustainability (24%) or the impact on and satisfaction levels of those served (32%). This raises the question of whether the APP focuses on activities and outputs rather than outcomes and impact.

With regard to the frequency with which a department is asked to provide ad hoc reports each quarter (6 per quarter, 56%; 6–12 per quarter, 29%; more than 12 per quarter, 11%), it will be necessary to explore whether these requests arise because existing reports are inadequate and do not cover the right issues at the right level of detail, or whether there is a genuine need for ad hoc reporting beyond the standard reporting cycles and framework.

What is reported?

A pattern emerges with regard to what departments generally report on (in relation to what was planned), namely a routine focus on capturing performance information related to completed activities (77%), use of budget (75%) and achieved outputs (79%). Analysis of performance information to identify what has been learned (23%), the impact achieved (24%), what worked well (32%) or did not work (36%) and the actions to be taken to improve results (39%) is routinely included in the reports of only about a third of departments. Two-thirds of departments also try to establish whether expected results were achieved or not (64%) and the reasons for not achieving them (68%).

Only 10% of departments routinely include the satisfaction levels of the groups served and the quality, relevance and sustainability of the benefits provided, yet these are precisely the areas in which government wants to make the most impact in order to reduce service delivery protests and improve the lives of the wider society.

Adequacy of information for reporting

Most departments responded positively to the question of whether they have adequate indicators to produce reports. The information required for reporting is readily available. However, it excludes information on outcomes, impact, the satisfaction levels of those served and the analysis of performance information as it relates to quality, relevance and sustainability. Departments stated that they seldom or never report on such information.

Methods of communicating the results

Departments use a range of methods to communicate information to external stakeholders. Websites (81%) and presentations (68%) are the most frequently used. These are followed by brochures and targeted reports (59% each), interactions with communities (51%), the media (48%), newsletters (48%) and newspapers (47%). Fewer than a third of departments use conference papers (24%), fact sheets (19%) and ‘barometers’ (8%). Print-based methods (brochures, reports, newsletters, newspapers and conference papers) thus remain widely used as a group. Publication on the website means that communicated results are accessible to members of the public who have access to the internet, beyond those community members already exposed to the reports.

Link between planning and monitoring and evaluation

Use of monitoring and evaluation in decisions at each phase of the planning and policy cycle

Table 1 summarises the responses by departments regarding the extent to which key decisions at different stages of the policy and planning cycle are informed by M&E information. A model based on four stages is used to structure the responses, although the questionnaire did not make the stages explicit. The stages of the planning cycle used to organise and analyse responses are:

TABLE 1: Decisions informed by information from monitoring and evaluation.
  • an analysis phase
  • a design and planning phase
  • an implementation, monitoring and adaptation phase
  • and finally, an evaluation phase, which overlaps with and feeds into the analysis phase of the new cycle.

Although this is seldom a simple linear process, the sequenced phases are useful to identify the way in which M&E can contribute at each phase to learning and improvement. A short label has been given to each of the decisions in the question, and this has been used in Table 1 to present the distribution of department responses.

More departments noted the use of M&E information in the planning and design phase of the cycle than in other phases; 85% indicated that it is ‘often’ or ‘always’ used for target setting. The percentage for ‘often’ or ‘always’ drops to between 61% and 72% for the remaining uses of M&E information in this phase. One aspect of use in the implementation and monitoring phase also stands out as very widely practised: 85% use M&E information to monitor progress with implementation. This suggests that decisions at the planning and design phase are often not based on the prior use of M&E information relevant to the analysis of options, needs and causes. The focus on ‘target setting’ in this phase, and on ‘monitoring progress’ in the next phase (implementation and monitoring), further suggests a rather narrow use of M&E for internal monitoring and control. This impression is reinforced in the evaluation phase, where M&E information to track the achievement of expected results is indicated as ‘often’ or ‘always’ used, whereas the level of use for more diagnostic purposes (the reasons for not achieving expected results) drops to 41% of departments.

Use of evidence from monitoring and evaluation

This section further probes the issue of the use of M&E information. Responses are correlated with those in the previous subsection to assess consistency. The first two questions are very broad and focus on whether M&E appears to have a more significant impact on policy or on general management decisions. The responses suggest that significantly more departments have used M&E to make general management decisions rather than policy changes.

Half of the respondents have used M&E to change policy in the last three years. This would broadly correlate with the pattern of responses to the use of M&E information in the policy development and planning cycle. The other 50% may not be ignoring M&E information; information may be used to identify that policy does not need to change. If this question is used again, this should be taken into account in its formulation. The second question was rather general and consequently, a far higher number of departments report using M&E information for ‘other significant changes in what [departments] do’ than the responses received to the more specific questions related to the policy and planning cycle.

Conclusion from the survey

The survey on the state and use of M&E systems, although it has some limitations, gives a general overview of a number of important aspects of M&E systems in the public sector. The study suggests that the M&E systems of most departments focus on quantitative measures of the achievement of pre-specified activities and outputs. Such measures reveal little about whether these activities and outputs contribute to relevant, sustainable and adequate public outcomes and impact. In general, in a results-based framework, it is not advisable to assess and analyse the annual achievement of outputs in isolation from an assessment of the extent to which the outputs are contributing to strategic public benefit, social change and improvements.

In addition to confirming the challenges regarding the focus and quality of M&E for the majority of departments, the study underlines the suggestion of low expectations of M&E and the reporting based on it. This is supported by the contrast between the low percentage of departments that routinely focus on issues of outcome and impact, and the high proportion of departments that indicate high levels of satisfaction with the information available and the reports based on this information.

Some of the identified challenges – where results dip below 50% of departments in the functional range – show that a lot more work has to be done, for example improving utilisation of M&E information for the analysis of needs, situation and options; balancing quantitative and qualitative indicators; and establishing indicators and systems that will provide reliable information on impact.

Around 15% of departments use M&E effectively: they have reasonably well-institutionalised monitoring of internal management issues (budgets, activities, outputs and administrative issues), as well as some evaluation capacity driving policy development and planning. Another 35% have functional monitoring, but with marked unevenness in some areas, often related to weak information systems and a lack of skills or capacity.

Limitations of the study

The timeframe in which the study was conducted is a limitation. Departments had not previously completed a survey of this nature, and the short period allowed meant that not all departments could respond. However, the 62% response rate provides a reasonable overall picture of the status and use of M&E in the public service.

A number of the respondents indicated that they found some of the questions to be ambiguous. This appears to be linked to the variation in concepts and terminology in use in the public service. The general nature of the questions, the very wide diversity in structures, systems and approaches for implementation of M&E as well as limited use of a specific standardised conceptual and operational framework in the public service mean that concepts and terms can often be interpreted in a variety of ways. The following, for example, often need to be contextualised in order for meanings to be specific and clear:

  • ‘Responsibility’ often needs to be defined in terms of the specific formal ways in which responsibility is allocated, acknowledged and acted upon.
  • ‘Reliability’, for example, of information systems, can mean different things depending on how the requirements of the system are defined and understood in each organisational context.
  • ‘Capacity’ can be understood narrowly as knowledge and skills or more widely as a whole set of factors that include access to information, infrastructure and other resources.
  • ‘Integration’ can refer to fairly informal practices or highly formalised systems.

Respondents commented that key terms should have been defined.

Another limitation noted by departments concerns the inevitable subjectivity related to a single source self-reporting methodology. Some respondents commented on the subjectivity of the survey responses. This is an acknowledged result of the methodology adopted, but is regarded as an acceptable risk, given that the judgements were to be made by the responsible official nominated to respond on behalf of the department.

Conclusion and recommendations

A comprehensive survey report was produced which presents the results descriptively in the form of tables and graphs. It contains sufficient detail for a meaningful assessment baseline on the state and use of components of an M&E system to feed into a situational analysis in the DPME’s planning process.

Since the study, the DPME has made some efforts to respond to the findings. These include:

  • generic guidelines on M&E for national and provincial departments
  • a comprehensive toolkit on the National Evaluation System
  • standardisation of measurements with clear explanation of all terms, and new goals and strategies regarding M&E drawn across all departments, particularly aimed at positively changing the lives of communities and society
  • institutionalisation of standard departmental assessments for national, provincial and local government providing a standard approach to monitoring of front-line services.

Departments should shift their focus from activities (and their measurement) to outcomes and impact on society. Otherwise, all changes will result in ‘more of the same’: much activity, measured output and individuals rewarded financially, with departments not actually achieving any change in society. Departments appear unable to understand why communities protest about poor service delivery when departments work so hard and undertake so many activities. The DPME plans to conduct a ‘customer’ satisfaction survey every six months to measure whether activities have translated into visible and tangible improvements in society. Government would do well to communicate such successes (once achieved) via its website, social media or the press, as this will add credibility and goodwill.

The results of the survey are currently being used to inform policy development and to design a study to assess whether the M&E systems of national and provincial departments in the four specific sectors of education, health, housing and environment are fit for purpose as measured by their own stated intentions. This study will focus on the qualitative diagnosis of key aspects in the departments that constitute the administrative centre of government.

Lastly, since the South African government has prioritised M&E as a key mechanism for impacting positively on the lives of South Africans, the DPME has embarked on a wide range of initiatives to improve the institutionalisation of effective M&E. The capacity of all spheres and sectors of government is being enhanced on standardised roles and responsibilities for M&E. The M&E knowledge and skills of senior managers are being strengthened, which includes collecting, analysing, using and disseminating reliable information on what is achieved in order to enhance accountability and learning, establish reliable evidence required to improve achievement, and provide a better platform for the coordination of effort across institutional boundaries.

Efforts are also underway to standardise a web-based reporting system, software and IT platform for reporting across the public sector. Norms and standards related to M&E structures, staffing, IT and budgets are being refined. In addition, frameworks for planning, policy development and research to enable an effective link to M&E, and the integration between departments and spheres are being addressed.

Overall, there is a higher level of consciousness than before on what elements constitute a functional M&E system in departments. This factor alone has meant that the environment is more responsive than before.

Acknowledgements

The authors would like to thank Dr Babette Rabie of the School of Public Leadership, University of Stellenbosch, and the two anonymous referees, whose helpful comments and recommendations strengthened this paper. They are also grateful to the DPME, and to the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) for funding the study on the state and use of M&E systems in national and provincial departments.

Competing interests

The authors declare that they have no financial or personal relationships which may have inappropriately influenced them in writing this article.

Authors’ contributions

F.U. (DPME) and N.C. (DPME) contributed equally to writing this paper.

References

Department of Performance Monitoring and Evaluation, 2007, Policy framework for the government-wide monitoring and evaluation system, The Presidency, Pretoria.

Department of Performance Monitoring and Evaluation, 2008, The role of Premiers’ Offices in government-wide monitoring and evaluation: A good practice guide, viewed 09 February 2015, from http://www.thepresidency.gov.za/pebble.asp?relid=14811

Department of Performance Monitoring and Evaluation, 2009a, From policy vision to operational reality 2007: Annual implementation update in support GWME policy framework, viewed 20 February 2015, from http://www.thepresidency.gov.za/learning/reference/implementation.pdf

Department of Performance Monitoring and Evaluation, 2009b, Improving government performance: Our approach, The Presidency, Pretoria.

Department of Performance Monitoring and Evaluation, 2011, The national evaluation policy framework, The Presidency, Pretoria.

Department of Performance Monitoring and Evaluation, 2012a, Generic functions of an M&E component in national government departments, viewed 12 February 2015, from http://www.thepresidency-dpme.gov.za/keyfocusareas/gwmeSite/Pages/GWMEGuidelines.aspx

Department of Performance Monitoring and Evaluation, 2012b, Generic functions of monitoring and evaluation components in the Offices of the Premier, viewed 09 February 2015, from http://www.thepresidency-dpme.gov.za/keyfocusareas/gwmeSite/Pages/GWMEGuidelines.aspx

Department of Performance Monitoring and Evaluation, 2012c, Generic roles and organisational design considerations for M&E components in provincial government departments, viewed 13 February 2015, from http://www.thepresidency-dpme.gov.za/keyfocusareas/gwmeSite/Pages/GWMEGuidelines.aspx

Kusek, J. & Rist, R.C., 2004, Ten steps to a results-based monitoring and evaluation system, World Bank, Washington, DC.

National Planning Commission, 2012, The national development plan 2030: Our future – Make it work, The Presidency, Pretoria.

National Treasury, 2007, Framework for managing programme performance information, National Treasury, Pretoria.

Statistics South Africa, 2010, South African Statistical Quality Assessment Framework (SASQAF), 2nd edn., viewed 16 February 2015, from http://www.statssa.gov.za/standardisation/SASQAF_Edition_2.pdf


 
